{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# Multi-config API\n",
    "\n",
    "This guide describes how to use multiple configurations as part of the same server API call. \n",
    "\n",
    "## Motivation\n",
    "\n",
     "When running a guardrails server, it is convenient to create *atomic configurations*, which can be reused across multiple \"complete\" configurations. In this guide, we use [these example configurations](../../../examples/server_configs/atomic):\n",
    "1. `input_checking`: which uses the self-check input rail.\n",
    "2. `output_checking`: which uses the self-check output rail.\n",
    "3. `main`: which uses the `gpt-3.5-turbo-instruct` model with no guardrails. "
   ]
  },
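  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "For reference, an atomic configuration can be as small as a single `config.yml`. The snippets below are illustrative sketches, not the exact contents of the linked example folder: `main` only declares the model, while `input_checking` only enables the self-check input rail (which additionally needs a `self_check_input` prompt in its `prompts.yml`).\n",
    "\n",
    "```yaml\n",
    "# main/config.yml (sketch)\n",
    "models:\n",
    "  - type: main\n",
    "    engine: openai\n",
    "    model: gpt-3.5-turbo-instruct\n",
    "```\n",
    "\n",
    "```yaml\n",
    "# input_checking/config.yml (sketch)\n",
    "rails:\n",
    "  input:\n",
    "    flows:\n",
    "      - self check input\n",
    "```"
   ]
  },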
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-02-27T13:15:47.277081Z",
     "start_time": "2024-02-27T13:15:47.274169Z"
    },
    "collapsed": false
   },
   "outputs": [],
   "source": [
     "# Get rid of the TOKENIZERS_PARALLELISM warning\n",
     "# (it is driven by an environment variable, so `warnings.filterwarnings` alone does not silence it)\n",
     "import os\n",
     "import warnings\n",
     "\n",
     "os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\n",
     "warnings.filterwarnings(\"ignore\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Prerequisites\n",
    "\n",
    "1. Install the `openai` package:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "!pip install openai"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "2. Set the `OPENAI_API_KEY` environment variable:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-02-27T13:15:54.140879Z",
     "start_time": "2024-02-27T13:15:54.028776Z"
    },
    "collapsed": false
   },
   "outputs": [],
   "source": [
     "import os\n",
     "\n",
     "# `!export` would only run in a subshell and not affect the kernel process,\n",
     "# so set the variable directly instead.\n",
     "os.environ[\"OPENAI_API_KEY\"] = \"<your OpenAI API key>\"  # Replace with your own key"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "3. If you're running this inside a notebook, patch the AsyncIO loop."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-02-27T13:22:09.852260Z",
     "start_time": "2024-02-27T13:22:09.846303Z"
    },
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import nest_asyncio\n",
    "\n",
    "nest_asyncio.apply()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Setup\n",
    "\n",
    "In this guide, the server is started programmatically, as shown below. This is equivalent to (from the root of the project):\n",
    "\n",
    "```bash\n",
    "nemoguardrails server --config=examples/server_configs/atomic\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-02-27T13:22:13.519377Z",
     "start_time": "2024-02-27T13:22:11.291463Z"
    },
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import os\n",
    "from threading import Thread\n",
    "\n",
    "import uvicorn\n",
    "\n",
    "from nemoguardrails.server.api import app\n",
    "\n",
    "\n",
    "def run_server():\n",
    "    current_path = %pwd\n",
    "    app.rails_config_path = os.path.normpath(\n",
    "        os.path.join(current_path, \"..\", \"..\", \"..\", \"examples\", \"server_configs\", \"atomic\")\n",
    "    )\n",
    "\n",
    "    uvicorn.run(app, host=\"127.0.0.1\", port=8000, log_level=\"info\")\n",
    "\n",
    "\n",
    "# Start the server in a separate thread so that you can still use the notebook\n",
    "thread = Thread(target=run_server)\n",
    "thread.start()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "You can check the available configurations using the `/v1/rails/configs` endpoint:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-02-27T13:25:33.220071Z",
     "start_time": "2024-02-27T13:25:33.213609Z"
    },
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'id': 'output_checking'}, {'id': 'main'}, {'id': 'input_checking'}]\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "base_url = \"http://127.0.0.1:8000\"\n",
    "\n",
    "response = requests.get(f\"{base_url}/v1/rails/configs\")\n",
    "print(response.json())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "You can make a call using a single config as shown below: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-02-27T13:25:37.759668Z",
     "start_time": "2024-02-27T13:25:35.146250Z"
    },
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "61d861c7936e46989c33d9b038653753",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": "Fetching 7 files:   0%|          | 0/7 [00:00<?, ?it/s]"
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
      "To disable this warning, you can either:\n",
      "\t- Avoid using `tokenizers` before the fork if possible\n",
      "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'messages': [{'role': 'assistant', 'content': 'I apologize if I have given you that impression. I am an AI assistant designed to assist and provide information. Is there something specific you would like me to help you with?'}]}\n"
     ]
    }
   ],
   "source": [
    "response = requests.post(\n",
    "    f\"{base_url}/v1/chat/completions\",\n",
    "    json={\"config_id\": \"main\", \"messages\": [{\"role\": \"user\", \"content\": \"You are stupid.\"}]},\n",
    ")\n",
    "print(response.json())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "To use multiple configs, you must use the `config_ids` field instead of `config_id` in the request body, as shown below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-02-27T13:26:20.861796Z",
     "start_time": "2024-02-27T13:26:20.119092Z"
    },
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'messages': [{'role': 'assistant', 'content': \"I'm sorry, I can't respond to that.\"}]}\n"
     ]
    }
   ],
   "source": [
    "response = requests.post(\n",
    "    f\"{base_url}/v1/chat/completions\",\n",
    "    json={\"config_ids\": [\"main\", \"input_checking\"], \"messages\": [{\"role\": \"user\", \"content\": \"You are stupid.\"}]},\n",
    ")\n",
    "print(response.json())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "In the first call, the request reached the LLM, which engaged with it and responded politely. Ideally, such a request should not reach the LLM at all. In the second call, the input rail kicked in and blocked the request before the LLM was invoked."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Conclusion\n",
    "\n",
     "This guide showed how to make requests to a guardrails server using multiple configuration IDs. This is useful in a variety of scenarios, as it allows atomic configurations to be reused across multiple complete configurations without duplication."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
