{
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "<h1 align=\"center\">\n",
        "  <a href=\"https://portkey.ai\">\n",
        "    <img width=\"300\" src=\"https://analyticsindiamag.com/wp-content/uploads/2023/08/Logo-on-white-background.png\" alt=\"portkey\">\n",
        "  </a>\n",
        "</h1>"
      ],
      "metadata": {
        "id": "eKDYSB3Neg2H"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1BS2K6Svx7yEoYJpOz9yCYvspWHMqA6AA?usp=sharing)"
      ],
      "metadata": {
        "id": "FzmbYVYuArB9"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Use Claude 3.5 Sonnet with OpenAI Compatibility using Portkey!\n",
        "\n",
        "Since Portkey is fully compatible with the OpenAI signature, you can connect to the Portkey AI Gateway through the OpenAI client.\n",
        "\n",
        "- Set the `base_url` to `PORTKEY_GATEWAY_URL`\n",
        "- Pass the `default_headers` that Portkey needs using the `createHeaders` helper method."
      ],
      "metadata": {
        "id": "9x3bsYNScHG0"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "You will need Portkey and Anthropic API keys to run this notebook.\n",
        "\n",
        "- Sign up for Portkey and generate your API key [here](https://app.portkey.ai/).\n",
        "- Get your Anthropic API key [here](https://console.anthropic.com/keys)."
      ],
      "metadata": {
        "id": "l3OZLUkNcDfD"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7mzpClSpAgTW",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "75f2b5bf-8332-4ff7-9037-2f6b61ff3fd5"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m86.4/86.4 kB\u001b[0m \u001b[31m1.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m326.8/326.8 kB\u001b[0m \u001b[31m8.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m75.6/75.6 kB\u001b[0m \u001b[31m2.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m12.7/12.7 MB\u001b[0m \u001b[31m28.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m77.9/77.9 kB\u001b[0m \u001b[31m1.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m2.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h"
          ]
        }
      ],
      "source": [
        "!pip install -qU portkey-ai openai"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WacBHzekCXVf"
      },
      "source": [
        "## With OpenAI Client"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "tvmQtgsWBur2",
        "outputId": "ca9eb1f6-2281-454a-bdf1-817d0966ccb3"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Hi there! How can I assist you today?\n"
          ]
        }
      ],
      "source": [
        "from openai import OpenAI\n",
        "from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders\n",
        "from google.colab import userdata\n",
        "\n",
        "client = OpenAI(\n",
        "    api_key=userdata.get('ANTHROPIC_API_KEY'),  # replace with your Anthropic API key\n",
        "    base_url=PORTKEY_GATEWAY_URL,\n",
        "    default_headers=createHeaders(\n",
        "        provider=\"anthropic\",\n",
        "        api_key=userdata.get('PORTKEY_API_KEY'),  # replace with your Portkey API key\n",
        "    )\n",
        ")\n",
        "\n",
        "chat_complete = client.chat.completions.create(\n",
        "    model=\"claude-3-5-sonnet-20240620\",\n",
        "    max_tokens=1000,\n",
        "    messages=[{\"role\": \"user\",\n",
        "               \"content\": \"Say hi!\"}],\n",
        ")\n",
        "\n",
        "print(chat_complete.choices[0].message.content)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HvxiTJcxDUCN"
      },
      "source": [
        "## With Portkey Client"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "04Rw2w74DXg5"
      },
      "source": [
        "Note: You can safely store your Anthropic API key in [Portkey](https://app.portkey.ai/) and access models using a virtual key.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5-TaHCg9DHtI"
      },
      "outputs": [],
      "source": [
        "from portkey_ai import Portkey\n",
        "\n",
        "portkey = Portkey(\n",
        "    api_key=userdata.get('PORTKEY_API_KEY'),   # replace with your Portkey API key\n",
        "    virtual_key=\"anthropic-9e8db9\",            # replace with your Anthropic virtual key\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "H3O6_690DjR5",
        "outputId": "18b9bae2-22a4-4eca-f9b8-263913a26ae8"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "{\n",
            "    \"id\": \"msg_018P6LNLMATHF38xGpCvbEoP\",\n",
            "    \"choices\": [\n",
            "        {\n",
            "            \"finish_reason\": \"end_turn\",\n",
            "            \"index\": 0,\n",
            "            \"logprobs\": null,\n",
            "            \"message\": {\n",
            "                \"content\": \"I am an AI assistant called Claude, created by Anthropic to be helpful, harmless, and honest. I don't have a physical body or avatar - I'm a language model trained to engage in conversation and help with tasks. How can I assist you today?\",\n",
            "                \"role\": \"assistant\",\n",
            "                \"function_call\": null,\n",
            "                \"tool_calls\": null\n",
            "            }\n",
            "        }\n",
            "    ],\n",
            "    \"created\": 1718894698,\n",
            "    \"model\": \"claude-3-5-sonnet-20240620\",\n",
            "    \"object\": \"chat_completion\",\n",
            "    \"system_fingerprint\": null,\n",
            "    \"usage\": {\n",
            "        \"prompt_tokens\": 11,\n",
            "        \"completion_tokens\": 58,\n",
            "        \"total_tokens\": 69\n",
            "    }\n",
            "}\n"
          ]
        }
      ],
      "source": [
        "completion = portkey.chat.completions.create(\n",
        "    messages=[{\"role\": \"user\", \"content\": \"Who are you?\"}],\n",
        "    model=\"claude-3-5-sonnet-20240620\",\n",
        "    max_tokens=250\n",
        ")\n",
        "\n",
        "print(completion)"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Advanced Routing - Load Balancing"
      ],
      "metadata": {
        "id": "GajylwyNkIDc"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "With load balancing, you can distribute load effectively across multiple API keys or providers based on custom weights to ensure high availability and optimal performance.\n",
        "\n",
        "Let's take an example where we might want to split traffic between Anthropic's `claude-3-5-sonnet` and OpenAI's `gpt-3.5-turbo` with a 70-30 weighting.\n",
        "\n",
        "The gateway configuration for this would look like the following:"
      ],
      "metadata": {
        "id": "p1Cod5TuoSql"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "config = {\n",
        "  \"strategy\": {\n",
        "      \"mode\": \"loadbalance\",\n",
        "  },\n",
        "  \"targets\": [\n",
        "    {\n",
        "      \"virtual_key\": \"anthropic-431005\", # Anthropic virtual key\n",
        "      \"override_params\": {\"model\": \"claude-3-5-sonnet-20240620\"},\n",
        "      \"weight\": 0.7\n",
        "    },\n",
        "    {\n",
        "      \"virtual_key\": \"gpt3-8070a6\", # OpenAI virtual key\n",
        "      \"override_params\": {\"model\": \"gpt-3.5-turbo-0125\"},\n",
        "      \"weight\": 0.3\n",
        "    }\n",
        "  ]\n",
        "}"
      ],
      "metadata": {
        "id": "JMhxoycAkNBp"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "from openai import OpenAI\n",
        "from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders\n",
        "from google.colab import userdata\n",
        "\n",
        "client = OpenAI(\n",
        "    api_key=\"X\",  # placeholder; auth is handled by the Portkey headers below\n",
        "    base_url=PORTKEY_GATEWAY_URL,\n",
        "    default_headers=createHeaders(\n",
        "        api_key=userdata.get(\"PORTKEY_API_KEY\"),\n",
        "        config=config\n",
        "    )\n",
        ")\n",
        "\n",
        "chat_complete = client.chat.completions.create(\n",
        "    model=\"X\",  # placeholder; the model is set by override_params in the config\n",
        "    messages=[{\"role\": \"user\",\n",
        "               \"content\": \"Just say hi!\"}],\n",
        ")\n",
        "\n",
        "print(chat_complete.model)\n",
        "print(chat_complete.choices[0].message.content)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "EppbDt5llNsn",
        "outputId": "49c0003d-2bf9-4daa-d400-99a4d2c76537"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "gpt-3.5-turbo-0125\n",
            "Hi! How can I assist you today?\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GwQmjjr0GePo"
      },
      "source": [
        "## Observability with Portkey"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "344KJaH3GfG_"
      },
      "source": [
        "By routing requests through Portkey, you can track metrics such as tokens used, latency, and cost.\n",
        "\n",
        "Here's a screenshot of the dashboard you get with Portkey!\n",
        "\n",
        "![portkey_view.JPG]()"
      ]
    },
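    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "You can also attach a trace ID and custom metadata to your requests, which makes them easy to find and filter on the dashboard. Here's a minimal sketch reusing the Portkey client from above; the `trace_id` value and the `metadata` keys are illustrative:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from portkey_ai import Portkey\n",
        "\n",
        "# The trace ID and metadata below are illustrative values; both appear\n",
        "# alongside the request logs in the Portkey dashboard.\n",
        "portkey = Portkey(\n",
        "    api_key=userdata.get('PORTKEY_API_KEY'),   # replace with your Portkey API key\n",
        "    virtual_key=\"anthropic-9e8db9\",            # replace with your Anthropic virtual key\n",
        "    trace_id=\"notebook-demo-1\",\n",
        "    metadata={\"environment\": \"colab-demo\"}\n",
        ")\n",
        "\n",
        "completion = portkey.chat.completions.create(\n",
        "    messages=[{\"role\": \"user\", \"content\": \"Say hi!\"}],\n",
        "    model=\"claude-3-5-sonnet-20240620\",\n",
        "    max_tokens=100\n",
        ")\n",
        "print(completion.choices[0].message.content)"
      ]
    },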
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qFCSC4GkJd2S"
      },
      "outputs": [],
      "source": []
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}