{"cells":[{"cell_type":"markdown","source":["[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1-yKXZ8Bdt5azMldeRvud4MQN61DRfr3C?usp=sharing)"],"metadata":{"id":"i8vYht2iBOB0"}},{"cell_type":"markdown","source":["<h1 align=\"center\">\n","  <a href=\"https://portkey.ai\">\n","    <img width=\"300\" src=\"https://analyticsindiamag.com/wp-content/uploads/2023/08/Logo-on-white-background.png\" alt=\"portkey\">\n","  </a>\n","</h1>"],"metadata":{"id":"wrkldbuofViJ"}},{"cell_type":"markdown","source":["# Portkey + OpenAI"],"metadata":{"id":"tO8z3C5Qe0i-"}},{"cell_type":"markdown","source":["[Portkey](https://app.portkey.ai/) is the Control Panel for AI apps. With it's popular AI Gateway and Observability Suite, hundreds of teams ship reliable, cost-efficient, and fast apps.\n","\n","With Portkey, you can\n","\n"," - Connect to 150+ models through a unified API,\n"," - View 40+ metrics & logs for all requests,\n"," - Enable semantic cache to reduce latency & costs,\n"," - Implement automatic retries & fallbacks for failed requests,\n"," - Add custom tags to requests for better tracking and analysis and more.\n"],"metadata":{"id":"uHYmGi-Qb5eL"}},{"cell_type":"markdown","source":["You will need Portkey and OpenAIAI API keys to run this notebook.\n","\n","- Sign up for Portkey and generate your API key [here](https://app.portkey.ai/).\n","- Get your OpenAI API key [here](https://console.OpenAI.com/keys)"],"metadata":{"id":"l3OZLUkNcDfD"}},{"cell_type":"code","execution_count":null,"metadata":{"id":"7mzpClSpAgTW","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1714227654315,"user_tz":-330,"elapsed":22579,"user":{"displayName":"Satvik Paramkusham","userId":"09992778153373457651"}},"outputId":"5813e02f-5a99-4cca-aa78-828c4f2d2066"},"outputs":[{"output_type":"stream","name":"stdout","text":["\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m 
\u001b[32m62.3/62.3 kB\u001b[0m \u001b[31m682.0 kB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n","\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m311.6/311.6 kB\u001b[0m \u001b[31m4.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n","\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m75.6/75.6 kB\u001b[0m \u001b[31m4.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n","\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m12.7/12.7 MB\u001b[0m \u001b[31m37.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n","\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m77.9/77.9 kB\u001b[0m \u001b[31m4.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n","\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m3.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n","\u001b[?25h"]}],"source":["!pip install -qU portkey-ai openai"]},{"cell_type":"markdown","metadata":{"id":"WacBHzekCXVf"},"source":["## With OpenAI Client"]},{"cell_type":"code","execution_count":null,"metadata":{"colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"elapsed":47181,"status":"ok","timestamp":1714227812506,"user":{"displayName":"Satvik Paramkusham","userId":"09992778153373457651"},"user_tz":-330},"id":"tvmQtgsWBur2","outputId":"2b2f2441-2e2f-4d63-ff28-fc348c019851"},"outputs":[{"output_type":"stream","name":"stdout","text":["Generative AI is used to create artificial intelligence systems that can generate creative and original content, such as images, videos, music, and text. Its purpose is to enhance the ability of AI systems to be more creative, innovative, and adaptable in their problem-solving capabilities. Generative AI can be used in a variety of industries, such as art and design, gaming, content creation, and marketing, to generate new ideas and inspire new ways of thinking. 
It can also be used to automate the process of creating content, saving time and resources for businesses and individuals.\n"]}],"source":["from openai import OpenAI\n","from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders\n","from google.colab import userdata\n","\n","client = OpenAI(\n","    api_key=userdata.get('OPENAI_API_KEY'),  # replace with your OpenAI API key\n","    base_url=PORTKEY_GATEWAY_URL,\n","    default_headers=createHeaders(\n","        provider=\"openai\",\n","        api_key=userdata.get('PORTKEY_API_KEY'),  # replace with your Portkey API key\n","    )\n",")\n","\n","chat_complete = client.chat.completions.create(\n","    model=\"gpt-3.5-turbo\",\n","    messages=[{\"role\": \"user\",\n","               \"content\": \"What's the purpose of Generative AI?\"}],\n",")\n","\n","print(chat_complete.choices[0].message.content)"]},{"cell_type":"markdown","metadata":{"id":"HvxiTJcxDUCN"},"source":["## With Portkey Client"]},{"cell_type":"markdown","metadata":{"id":"04Rw2w74DXg5"},"source":["Note: You can safely store your OpenAI API key in [Portkey](https://app.portkey.ai/) and access models using a virtual key.\n"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"5-TaHCg9DHtI"},"outputs":[],"source":["from portkey_ai import Portkey\n","from google.colab import userdata\n","\n","portkey = Portkey(\n","    api_key=userdata.get('PORTKEY_API_KEY'),   # replace with your Portkey API key\n","    virtual_key=\"gpt3-8070a6\",   # replace with your virtual key for OpenAI\n",")"]},{"cell_type":"code","execution_count":null,"metadata":{"colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"elapsed":1412,"status":"ok","timestamp":1714227877164,"user":{"displayName":"Satvik Paramkusham","userId":"09992778153373457651"},"user_tz":-330},"id":"H3O6_690DjR5","outputId":"b1ed7ec3-b6bb-49ea-dc6f-d2d3836f6d36"},"outputs":[{"output_type":"stream","name":"stdout","text":["{\n","    \"id\": \"chatcmpl-9IdGeMPxpJbYEXpFNQWYzIBPw1Cr4\",\n","    \"choices\": [\n","        {\n","        
    \"finish_reason\": \"stop\",\n","            \"index\": 0,\n","            \"logprobs\": null,\n","            \"message\": {\n","                \"content\": \"I am a virtual assistant designed to help answer questions and provide information to the best of my abilities.\",\n","                \"role\": \"assistant\",\n","                \"function_call\": null,\n","                \"tool_calls\": null\n","            }\n","        }\n","    ],\n","    \"created\": 1714227876,\n","    \"model\": \"gpt-3.5-turbo-0125\",\n","    \"object\": \"chat.completion\",\n","    \"system_fingerprint\": \"fp_3b956da36b\",\n","    \"usage\": {\n","        \"prompt_tokens\": 11,\n","        \"completion_tokens\": 20,\n","        \"total_tokens\": 31\n","    }\n","}\n"]}],"source":["completion = portkey.chat.completions.create(\n","    messages= [{ \"role\": 'user', \"content\": 'Who are you?'}],\n","    model= 'gpt-3.5-turbo'\n",")\n","\n","print(completion)"]},{"cell_type":"markdown","source":["## Advanced Routing - Load Balancing across multiple API keys"],"metadata":{"id":"GajylwyNkIDc"}},{"cell_type":"markdown","source":["With load balancing, you can distribute load effectively across multiple API keys or providers based on custom weights to ensure high availability and optimal performance.\n","\n","Let's take an example where we might want to split traffic between OpenAI's `llama-3-70b` and OpenAI's `gpt-3.5` giving a weightage of 70-30.\n","\n","The gateway configuration for this would look like the following:"],"metadata":{"id":"p1Cod5TuoSql"}},{"cell_type":"code","source":["config = {\n","  \"strategy\": {\n","      \"mode\": \"loadbalance\"\n","    },\n","  \"targets\": [\n","    {\n","      \"provider\": \"openai\",\n","      \"api_key\": \"sk-***\"\n","    },\n","    {\n","      \"provider\": \"openai\",\n","      \"api_key\": \"sk-***\"\n","    }\n","  
]\n","}"],"metadata":{"id":"JMhxoycAkNBp"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["from openai import OpenAI\n","from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders\n","from google.colab import userdata\n","\n","client = OpenAI(\n","    api_key=\"X\",\n","    base_url=PORTKEY_GATEWAY_URL,\n","    default_headers=createHeaders(\n","        api_key=userdata.get(\"PORTKEY_API_KEY\"),\n","        config=config\n","    )\n",")\n","\n","chat_complete = client.chat.completions.create(\n","    model=\"gpt-3.5-turbo\",\n","    messages=[{\"role\": \"user\",\n","               \"content\": \"Just say hi!\"}],\n",")\n","\n","print(chat_complete.choices[0].message.content)"],"metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"EppbDt5llNsn","executionInfo":{"status":"ok","timestamp":1714222465539,"user_tz":-330,"elapsed":4496,"user":{"displayName":"Satvik Paramkusham","userId":"09992778153373457651"}},"outputId":"49c0003d-2bf9-4daa-d400-99a4d2c76537"},"execution_count":null,"outputs":[{"output_type":"stream","name":"stdout","text":["gpt-3.5-turbo-0125\n","Hi! How can I assist you today?\n"]}]},{"cell_type":"markdown","metadata":{"id":"GwQmjjr0GePo"},"source":["## Observability with Portkey"]},{"cell_type":"markdown","metadata":{"id":"344KJaH3GfG_"},"source":["By routing requests through Portkey you can track a number of metrics like - tokens used, latency, cost, etc.\n","\n","Here's a screenshot of the dashboard you get with Portkey!\n","\n","![portkey_view.JPG]()"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"qFCSC4GkJd2S"},"outputs":[],"source":[]}],"metadata":{"colab":{"provenance":[]},"kernelspec":{"display_name":"Python 3","name":"python3"},"language_info":{"name":"python"}},"nbformat":4,"nbformat_minor":0}