{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/drive/1tMByxJ6XCVETuk8VBMaGB5jGOLXwAUFr?usp=sharing\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WljjH8K3s7kG"
      },
      "source": [
        "# XL to XS - Prompt Engineering for Smaller Models\n",
        "\n",
        "This notebook shows you how to go from a large model to a smaller one -- reducing costs massively while keeping quality high.\n",
        "\n",
        "This extends the Opus-to-Haiku notebook (from the [`gpt-prompt-engineer`](https://github.com/mshumer/gpt-prompt-engineer) repo by [Matt Shumer](https://twitter.com/mattshumer_)) to cover any combination of large and small models, using Portkey's [AI Gateway](https://github.com/portkey-ai/gateway)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "dQmMZdkG_RA5"
      },
      "outputs": [],
      "source": [
        "import requests\n",
        "\n",
        "PORTKEY_API_KEY = \"\" # Configure your AI Gateway Key (https://app.portkey.ai/signup)\n",
        "\n",
        "PROVIDER = \"\" # Any of `openai`, `anthropic`, `azure-openai`, `anyscale`, `mistral`, `gemini` and more\n",
        "PROVIDER_API_KEY = \"\" # Enter the API key of the provider used above\n",
        "LARGE_MODEL = \"\" # The large model to use\n",
        "\n",
        "\n",
        "# If you want to use a different provider for the smaller model, uncomment these 2 lines\n",
        "# SMALL_PROVIDER = \"\" # Any of `openai`, `anthropic`, `azure-openai`, `anyscale`, `mistral`, `gemini`\n",
        "# SMALL_PROVIDER_API_KEY = \"\"\n",
        "\n",
        "SMALL_MODEL = \"\" # The small model to use"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "B84V9aohvCbr"
      },
      "source": [
        "### Portkey Client Init\n",
        "\n",
        "We use separate Portkey clients for the large and small models. The gateway lets us call any model without changing our code."
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install portkey_ai"
      ],
      "metadata": {
        "id": "BfTZMUNwwhxe"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "wXeqMQpzzosx"
      },
      "outputs": [],
      "source": [
        "#@title Run this to prep the main functions\n",
        "\n",
        "from portkey_ai import Portkey\n",
        "\n",
        "client_large = Portkey(\n",
        "    Authorization=\"Bearer \" + PROVIDER_API_KEY,\n",
        "    provider=PROVIDER,\n",
        "    api_key=PORTKEY_API_KEY,\n",
        "    metadata={\"_user\": \"gpt-prompt-engineer\"},\n",
        "    config={\"cache\": {\"mode\": \"simple\"}}\n",
        ")\n",
        "\n",
        "# Fall back to the large model's provider and key if no separate small-model provider was configured\n",
        "try:\n",
        "    authorization_token = \"Bearer \" + SMALL_PROVIDER_API_KEY\n",
        "except NameError:\n",
        "    authorization_token = \"Bearer \" + PROVIDER_API_KEY\n",
        "\n",
        "try:\n",
        "    provider_name = SMALL_PROVIDER\n",
        "except NameError:\n",
        "    provider_name = PROVIDER\n",
        "\n",
        "client_small = Portkey(\n",
        "    Authorization=authorization_token,\n",
        "    provider=provider_name,\n",
        "    api_key=PORTKEY_API_KEY,\n",
        "    metadata={\"_user\": \"gpt-prompt-engineer\"},\n",
        "    config={\"cache\": {\"mode\": \"simple\"}}\n",
        ")\n",
        "\n",
        "import json\n",
        "import re\n",
        "\n",
        "def generate_candidate_prompts(task, prompt_example, response_example):\n",
        "    messages = [{\n",
        "            \"role\": \"system\",\n",
        "            \"content\":\"\"\"<task>Given an example training sample, create seven additional samples for the same task that are even better. Each example should contain a <prompt> and a <response>.</task>\n",
        "\n",
        "<rules>\n",
        "1. Ensure the new examples are diverse and unique from one another.\n",
        "2. They should all be perfect. If you make a mistake, this system won't work.\n",
        "</rules>\n",
        "\n",
        "Respond in this format:\n",
        "<response_format>\n",
        "<example_one>\n",
        "<prompt>\n",
        "PUT_PROMPT_HERE\n",
        "</prompt>\n",
        "<response>\n",
        "PUT_RESPONSE_HERE\n",
        "</response>\n",
        "</example_one>\n",
        "\n",
        "<example_two>\n",
        "<prompt>\n",
        "PUT_PROMPT_HERE\n",
        "</prompt>\n",
        "<response>\n",
        "PUT_RESPONSE_HERE\n",
        "</response>\n",
        "</example_two>\n",
        "\n",
        "...\n",
        "</response_format>\"\"\"\n",
        "        }, {\n",
        "            \"role\": \"user\",\n",
        "            \"content\": f\"\"\"<training_task>{task}</training_task>\n",
        "\n",
        "<prompt_example>\n",
        "{prompt_example}\n",
        "</prompt_example>\n",
        "\n",
        "<response_example>\n",
        "{response_example}\n",
        "</response_example>\"\"\"},\n",
        "    ]\n",
        "\n",
        "    response = client_large.chat.completions.create(\n",
        "        model=LARGE_MODEL,\n",
        "        max_tokens=4000,\n",
        "        temperature=0.5,\n",
        "        messages=messages\n",
        "    )\n",
        "    response_text = response.choices[0]['message']['content']\n",
        "\n",
        "    # Parse out the prompts and responses\n",
        "    prompts_and_responses = []\n",
        "    examples = re.findall(r'<example_\\w+>(.*?)</example_\\w+>', response_text, re.DOTALL)\n",
        "    for example in examples:\n",
        "        prompt = re.findall(r'<prompt>(.*?)</prompt>', example, re.DOTALL)[0].strip()\n",
        "        response = re.findall(r'<response>(.*?)</response>', example, re.DOTALL)[0].strip()\n",
        "        prompts_and_responses.append({'prompt': prompt, 'response': response})\n",
        "\n",
        "    return prompts_and_responses\n",
        "\n",
        "def generate_system_prompt(task, prompt_examples):\n",
        "    messages = [\n",
        "        {\"role\": \"system\", \"content\": \"\"\"<your_role>Given a user-description of their <task> a set of prompt / response pairs (it'll be in JSON for easy reading) for the types of outputs we want to generate given inputs, write a fantastic system prompt that describes the task to be done perfectly.</your_role>\n",
        "\n",
        "<rules>\n",
        "1. Do this perfectly.\n",
        "2. Respond only with the system prompt, and nothing else. No other text will be allowed.\n",
        "</rules>\n",
        "\n",
        "Respond in this format:\n",
        "<system_prompt>\n",
        "WRITE_SYSTEM_PROMPT_HERE\n",
        "</system_prompt>\"\"\"\n",
        "        },\n",
        "        {\"role\": \"user\", \"content\": f\"\"\"<task>{task}</task>\n",
        "\n",
        "<prompt_response_examples>\n",
        "{str(prompt_examples)}\n",
        "</prompt_response_examples>\"\"\"\n",
        "        }]\n",
        "\n",
        "    response = client_large.chat.completions.create(\n",
        "        model=LARGE_MODEL,\n",
        "        max_tokens=1000,\n",
        "        temperature=0.5,\n",
        "        messages=messages\n",
        "    )\n",
        "    response_text = response.choices[0]['message']['content']\n",
        "\n",
        "    # Parse out the prompt\n",
        "    system_prompt = response_text.split('<system_prompt>')[1].split('</system_prompt>')[0].strip()\n",
        "\n",
        "    return system_prompt\n",
        "\n",
        "def test_haiku(generated_examples, prompt_example, system_prompt):\n",
        "    messages = [{\"role\": \"system\", \"content\": system_prompt}]\n",
        "\n",
        "    for example in generated_examples:\n",
        "      messages.append({\"role\": \"user\", \"content\": example['prompt']})\n",
        "      messages.append({\"role\": \"assistant\", \"content\": example['response']})\n",
        "\n",
        "    messages.append({\"role\": \"user\", \"content\": prompt_example.strip()})\n",
        "\n",
        "    response = client_small.chat.completions.create(\n",
        "        model = SMALL_MODEL,\n",
        "        max_tokens=2000,\n",
        "        temperature=0.5,\n",
        "        messages=messages\n",
        "    )\n",
        "    response_text = response.choices[0]['message']['content']\n",
        "\n",
        "    return response_text\n",
        "\n",
        "def run_haiku_conversion_process(task, prompt_example, response_example):\n",
        "\n",
        "    print('Generating the prompts / responses...')\n",
        "    # Generate candidate prompts\n",
        "    generated_examples = generate_candidate_prompts(task, prompt_example, response_example)\n",
        "\n",
        "    print('Prompts / responses generated. Now generating system prompt...')\n",
        "\n",
        "    # Generate the system prompt\n",
        "    system_prompt = generate_system_prompt(task, generated_examples)\n",
        "\n",
        "    print('System prompt generated:', system_prompt)\n",
        "\n",
        "\n",
        "    print('\\n\\nTesting the new prompt on '+SMALL_MODEL+', using your input example...')\n",
        "    # Test the generated examples and system prompt with the small model\n",
        "    small_model_response = test_haiku(generated_examples, prompt_example, system_prompt)\n",
        "\n",
        "    print(SMALL_MODEL+' responded with:')\n",
        "    print(small_model_response)\n",
        "\n",
        "    print('\\n\\n!! CHECK THE FILE DIRECTORY, THE PROMPT IS NOW SAVED THERE !!')\n",
        "\n",
        "    # Create a dictionary with all the relevant information\n",
        "    result = {\n",
        "        \"task\": task,\n",
        "        \"initial_prompt_example\": prompt_example,\n",
        "        \"initial_response_example\": response_example,\n",
        "        \"generated_examples\": generated_examples,\n",
        "        \"system_prompt\": system_prompt,\n",
        "        \"small_model_response\": small_model_response\n",
        "    }\n",
        "\n",
        "    # Save the generated system prompt and examples to haiku_prompt.py\n",
        "    with open(\"haiku_prompt.py\", \"w\") as file:\n",
        "        file.write('system_prompt = \"\"\"' + system_prompt + '\"\"\"\\n\\n')\n",
        "\n",
        "        file.write('messages = [\\n')\n",
        "        for example in generated_examples:\n",
        "            file.write('    {\"role\": \"user\", \"content\": \"\"\"' + example['prompt'] + '\"\"\"},\\n')\n",
        "            file.write('    {\"role\": \"assistant\", \"content\": \"\"\"' + example['response'] + '\"\"\"},\\n')\n",
        "\n",
        "        file.write('    {\"role\": \"user\", \"content\": \"\"\"' + prompt_example.strip() + '\"\"\"}\\n')\n",
        "        file.write(']\\n')\n",
        "\n",
        "    return result"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZujTAzhuBMea"
      },
      "source": [
        "## Fill in your task, prompt_example, and response_example here.\n",
        "Keep the quality as high as possible here -- this is the most important step!"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "XSZqqOoQ-5_E"
      },
      "outputs": [],
      "source": [
        "task = \"refactoring complex code\"\n",
        "\n",
        "prompt_example = \"\"\"def calculate_total(prices, tax, discount, shipping_fee, gift_wrap_fee, membership_discount):\n",
        "\n",
        "    total = 0\n",
        "\n",
        "    for i in range(len(prices)):\n",
        "\n",
        "        total += prices[i]\n",
        "\n",
        "    if membership_discount != 0:\n",
        "\n",
        "        total = total - (total * (membership_discount / 100))\n",
        "\n",
        "    if discount != 0:\n",
        "\n",
        "        total = total - (total * (discount / 100))\n",
        "\n",
        "    total = total + (total * (tax / 100))\n",
        "\n",
        "    if total < 50:\n",
        "\n",
        "        total += shipping_fee\n",
        "\n",
        "    else:\n",
        "\n",
        "        total += shipping_fee / 2\n",
        "\n",
        "    if gift_wrap_fee != 0:\n",
        "\n",
        "        total += gift_wrap_fee * len(prices)\n",
        "\n",
        "    if total > 1000:\n",
        "\n",
        "        total -= 50\n",
        "\n",
        "    elif total > 500:\n",
        "\n",
        "        total -= 25\n",
        "\n",
        "    total = round(total, 2)\n",
        "\n",
        "    if total < 0:\n",
        "\n",
        "        total = 0\n",
        "\n",
        "    return total\"\"\"\n",
        "\n",
        "response_example = \"\"\"def calculate_total(prices, tax_rate, discount_rate, shipping_fee, gift_wrap_fee, membership_discount_rate):\n",
        "\n",
        "    def apply_percentage_discount(amount, percentage):\n",
        "\n",
        "        return amount * (1 - percentage / 100)\n",
        "\n",
        "    def calculate_shipping_fee(total):\n",
        "\n",
        "        return shipping_fee if total < 50 else shipping_fee / 2\n",
        "\n",
        "    def apply_tier_discount(total):\n",
        "\n",
        "        if total > 1000:\n",
        "\n",
        "            return total - 50\n",
        "\n",
        "        elif total > 500:\n",
        "\n",
        "            return total - 25\n",
        "\n",
        "        return total\n",
        "\n",
        "    subtotal = sum(prices)\n",
        "\n",
        "    subtotal = apply_percentage_discount(subtotal, membership_discount_rate)\n",
        "\n",
        "    subtotal = apply_percentage_discount(subtotal, discount_rate)\n",
        "\n",
        "\n",
        "\n",
        "    total = subtotal * (1 + tax_rate / 100)\n",
        "\n",
        "    total += calculate_shipping_fee(total)\n",
        "\n",
        "    total += gift_wrap_fee * len(prices)\n",
        "\n",
        "\n",
        "\n",
        "    total = apply_tier_discount(total)\n",
        "\n",
        "    total = max(0, round(total, 2))\n",
        "\n",
        "\n",
        "\n",
        "    return total\"\"\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cMO3cJzWA-O0"
      },
      "source": [
        "### Now, let's run this system and get our new prompt!\n",
        "At the end, a new file will appear in the directory containing everything you need to reduce your costs while keeping quality high with your smaller model!"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "O-Bn0rupAJqb",
        "outputId": "52d922bc-8d93-4bff-e26b-42c8c99166a5"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Generating the prompts / responses...\n",
            "Prompts / responses generated. Now generating system prompt...\n",
            "System prompt generated: You are an expert code refactoring assistant. Your task is to take a given piece of code and refactor it to be more concise, efficient, and maintainable while preserving its original functionality. Focus on improving code readability, eliminating redundancies, optimizing performance, and applying best practices and design patterns where appropriate. Provide a clear, refactored version of the code that showcases your expertise in writing clean, high-quality code.\n",
            "\n",
            "\n",
            "Testing the new prompt on claude-3-haiku-20240307, using your input example...\n",
            "claude-3-haiku-20240307 responded with:\n",
            "def calculate_total(prices, tax, discount, shipping_fee, gift_wrap_fee, membership_discount):\n",
            "    subtotal = sum(prices)\n",
            "    \n",
            "    if membership_discount:\n",
            "        subtotal *= (1 - membership_discount / 100)\n",
            "    \n",
            "    if discount:\n",
            "        subtotal *= (1 - discount / 100)\n",
            "    \n",
            "    total = subtotal * (1 + tax / 100)\n",
            "    \n",
            "    if total < 50:\n",
            "        total += shipping_fee\n",
            "    else:\n",
            "        total += shipping_fee / 2\n",
            "    \n",
            "    total += gift_wrap_fee * len(prices)\n",
            "    \n",
            "    if total > 1000:\n",
            "        total -= 50\n",
            "    elif total > 500:\n",
            "        total -= 25\n",
            "    \n",
            "    return max(round(total, 2), 0)\n",
            "\n",
            "\n",
            "!! CHECK THE FILE DIRECTORY, THE PROMPT IS NOW SAVED THERE !!\n"
          ]
        }
      ],
      "source": [
        "result = run_haiku_conversion_process(task, prompt_example, response_example)"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### View logs on Portkey\n",
        "Go to the Logs tab in Portkey to inspect the three calls made and the results returned. Note that caching is enabled, so every call after the first returns near-instantly."
      ],
      "metadata": {
        "id": "-nUxJhF-wG2x"
      }
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3 (ipykernel)",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.10.9"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}