{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "2e9804e0-8aa4-409a-bc99-9875e50350bc",
   "metadata": {},
   "source": [
    "## function calling as `tool use`"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4d45d2ba-18a4-4a92-84f6-86da6e1edf6d",
   "metadata": {},
   "source": [
     "> Function calling is the core, foundational capability of AI agents: LLMs can not only talk; function calling gives them the ability to use tools, i.e. to execute.\n",
     "> - The model automatically selects (decides) which function to use;\n",
     "> - From a pure natural-language query it generates the function's arguments (argument generation), instantiating a concrete function call; the actual invocation is then performed by our own code.\n",
     "\n",
     "\n",
     "> `messages` is a list:\n",
     "> - in software terms it represents the data flow;\n",
     "> - it serves as the working memory of the otherwise stateless LLM.\n",
     "\n",
     "\n",
     "- data => automation => intelligence\n",
     "- HuggingGPT is essentially doing the same thing, in a DIY fashion;\n",
     "- Functions (in the coding sense) are a formal, precise way to define computation and processing logic, and they are deterministic;\n",
     "- LLMs with function calling aim for more precise output and execution, so they can be coupled into concrete, complex code logic and business workflows;\n",
     "- Recommended: [《大模型应用开发 动手做AI Agent GPT大语言模型应用》](https://www.bilibili.com/opus/935785456083140628?spm_id_from=333.999.0.0)\n",
     "    - written for developers\n",
     "    - systematic and comprehensive"
   ]
  },
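   {
    "cell_type": "markdown",
    "id": "aa10b2c3-0d4e-4f60-8a71-0000000000a1",
    "metadata": {},
    "source": [
     "A minimal sketch of this division of labor (the helper names and dummy data below are ours, not from any library): the model decides *which* function to call and generates its arguments as JSON; our own code performs the actual execution."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "aa10b2c3-0d4e-4f60-8a71-0000000000a2",
    "metadata": {},
    "outputs": [],
    "source": [
     "import json\n",
     "\n",
     "def get_current_weather(location, format):\n",
     "    # dummy implementation standing in for a real weather API\n",
     "    return {\"location\": location, \"temperature\": 22, \"unit\": format}\n",
     "\n",
     "TOOL_REGISTRY = {\"get_current_weather\": get_current_weather}\n",
     "\n",
     "def execute_tool_call(name, arguments_json):\n",
     "    func = TOOL_REGISTRY[name]           # the model chose *which* function\n",
     "    kwargs = json.loads(arguments_json)  # the model generated the *arguments*\n",
     "    return func(**kwargs)                # our code performs the execution\n",
     "\n",
     "execute_tool_call(\"get_current_weather\", '{\"location\": \"Glasgow, Scotland\", \"format\": \"celsius\"}')"
    ]
   },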
  {
   "cell_type": "markdown",
   "id": "5120087d-1a73-439c-8d85-0559f656c77e",
   "metadata": {},
   "source": [
    "## call functions with chat models"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bda1eea8-8221-49e8-bbb5-910321bcd3bb",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:09:35.813177Z",
     "iopub.status.busy": "2024-07-19T14:09:35.812779Z",
     "iopub.status.idle": "2024-07-19T14:09:35.823213Z",
     "shell.execute_reply": "2024-07-19T14:09:35.821570Z",
     "shell.execute_reply.started": "2024-07-19T14:09:35.813146Z"
    }
   },
   "source": [
    "- https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models\n",
    "    - https://github.com/openai/openai-cookbook/blob/main/examples/How_to_call_functions_with_chat_models.ipynb"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "3b2e0370-902d-4e18-a363-7055a6f5af0f",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:30:07.853424Z",
     "iopub.status.busy": "2024-07-19T14:30:07.852819Z",
     "iopub.status.idle": "2024-07-19T14:30:07.872074Z",
     "shell.execute_reply": "2024-07-19T14:30:07.870361Z",
     "shell.execute_reply.started": "2024-07-19T14:30:07.853376Z"
    }
   },
   "outputs": [],
   "source": [
    "import os\n",
    "os.environ['http_proxy'] = 'http://127.0.0.1:7890'\n",
    "os.environ['https_proxy'] = 'http://127.0.0.1:7890'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "b435cf89-59ac-4f48-8d43-5bbf9ed0066d",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:10:44.188257Z",
     "iopub.status.busy": "2024-07-19T14:10:44.187698Z",
     "iopub.status.idle": "2024-07-19T14:11:00.903272Z",
     "shell.execute_reply": "2024-07-19T14:11:00.900786Z",
     "shell.execute_reply.started": "2024-07-19T14:10:44.188213Z"
    }
   },
   "outputs": [],
   "source": [
    "!pip install scipy --quiet\n",
    "!pip install tenacity --quiet\n",
    "!pip install tiktoken --quiet\n",
    "!pip install termcolor --quiet\n",
    "!pip install openai --quiet"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "0b9ec656-bda3-43b7-9c8f-a4106ca6c3a1",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:30:14.568747Z",
     "iopub.status.busy": "2024-07-19T14:30:14.568137Z",
     "iopub.status.idle": "2024-07-19T14:30:15.065320Z",
     "shell.execute_reply": "2024-07-19T14:30:15.063981Z",
     "shell.execute_reply.started": "2024-07-19T14:30:14.568701Z"
    }
   },
   "outputs": [],
   "source": [
    "import json\n",
    "import openai\n",
    "from openai import OpenAI\n",
    "from tenacity import retry, wait_random_exponential, stop_after_attempt\n",
    "from termcolor import colored  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "6ec6686c-8e4c-4407-8bc7-15726eac87b9",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:30:16.464712Z",
     "iopub.status.busy": "2024-07-19T14:30:16.464352Z",
     "iopub.status.idle": "2024-07-19T14:30:16.475767Z",
     "shell.execute_reply": "2024-07-19T14:30:16.473910Z",
     "shell.execute_reply.started": "2024-07-19T14:30:16.464691Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'1.35.13'"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "openai.__version__"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "61f218d8-6d2c-439d-a3ef-a264cc688112",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:30:17.564936Z",
     "iopub.status.busy": "2024-07-19T14:30:17.564324Z",
     "iopub.status.idle": "2024-07-19T14:30:17.586882Z",
     "shell.execute_reply": "2024-07-19T14:30:17.585203Z",
     "shell.execute_reply.started": "2024-07-19T14:30:17.564890Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from dotenv import load_dotenv\n",
    "# .env (OPENAI_API_KEY=sk-proj-xxxx)\n",
    "load_dotenv()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "c51da687-d300-44f1-8c5c-7389e444b870",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:30:18.819072Z",
     "iopub.status.busy": "2024-07-19T14:30:18.818468Z",
     "iopub.status.idle": "2024-07-19T14:30:18.893705Z",
     "shell.execute_reply": "2024-07-19T14:30:18.892199Z",
     "shell.execute_reply.started": "2024-07-19T14:30:18.819029Z"
    }
   },
   "outputs": [],
   "source": [
    "GPT_MODEL = \"gpt-4o\"\n",
    "client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5a0f8c45-5c0f-4c2b-b845-e7bd6524dd4a",
   "metadata": {},
   "source": [
    "### utilities"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "0191ba83-f98c-49bb-8fc1-140f1dd9c0c9",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:30:20.499252Z",
     "iopub.status.busy": "2024-07-19T14:30:20.498648Z",
     "iopub.status.idle": "2024-07-19T14:30:20.516248Z",
     "shell.execute_reply": "2024-07-19T14:30:20.514037Z",
     "shell.execute_reply.started": "2024-07-19T14:30:20.499208Z"
    }
   },
   "outputs": [],
   "source": [
    "@retry(wait=wait_random_exponential(multiplier=1, max=40), stop=stop_after_attempt(3))\n",
    "def chat_completion_request(messages, tools=None, tool_choice=None, model=GPT_MODEL):\n",
    "    try:\n",
    "        response = client.chat.completions.create(\n",
    "            model=model,\n",
    "            messages=messages,\n",
    "            tools=tools,\n",
    "            tool_choice=tool_choice,\n",
    "        )\n",
    "        return response\n",
     "    except Exception as e:\n",
     "        print(\"Unable to generate ChatCompletion response\")\n",
     "        print(f\"Exception: {e}\")\n",
     "        raise  # re-raise so the @retry decorator can actually retry"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "516b660f-e47c-41f8-b8fe-32d63fb9f640",
   "metadata": {},
   "source": [
    "### tools"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e16e4c23-07f0-4404-9f21-88ad9998e5c4",
   "metadata": {},
   "source": [
    "- `def get_current_weather(location: str, format: Literal[\"celsius\", \"fahrenheit\"])`\n",
    "- `def get_n_day_weather_forecast(location: str, format: Literal['celsius', 'fahrenheit'], num_days: int)`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "a167834e-c480-4e9b-9597-7e18ce9fe5e1",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:30:22.198698Z",
     "iopub.status.busy": "2024-07-19T14:30:22.198089Z",
     "iopub.status.idle": "2024-07-19T14:30:22.212709Z",
     "shell.execute_reply": "2024-07-19T14:30:22.210677Z",
     "shell.execute_reply.started": "2024-07-19T14:30:22.198653Z"
    }
   },
   "outputs": [],
   "source": [
    "tools = [\n",
    "    {\n",
    "        \"type\": \"function\",\n",
    "        \"function\": {\n",
    "            \"name\": \"get_current_weather\",\n",
    "            \"description\": \"Get the current weather\",\n",
    "            \"parameters\": {\n",
    "                \"type\": \"object\",\n",
    "                \"properties\": {\n",
    "                    \"location\": {\n",
    "                        \"type\": \"string\",\n",
    "                        \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
    "                    },\n",
    "                    \"format\": {\n",
    "                        \"type\": \"string\",\n",
    "                        \"enum\": [\"celsius\", \"fahrenheit\"],\n",
    "                        \"description\": \"The temperature unit to use. Infer this from the users location.\",\n",
    "                    },\n",
    "                },\n",
    "                \"required\": [\"location\", \"format\"],\n",
    "            },\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"type\": \"function\",\n",
    "        \"function\": {\n",
    "            \"name\": \"get_n_day_weather_forecast\",\n",
    "            \"description\": \"Get an N-day weather forecast\",\n",
    "            \"parameters\": {\n",
    "                \"type\": \"object\",\n",
    "                \"properties\": {\n",
    "                    \"location\": {\n",
    "                        \"type\": \"string\",\n",
    "                        \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
    "                    },\n",
    "                    \"format\": {\n",
    "                        \"type\": \"string\",\n",
    "                        \"enum\": [\"celsius\", \"fahrenheit\"],\n",
    "                        \"description\": \"The temperature unit to use. Infer this from the users location.\",\n",
    "                    },\n",
    "                    \"num_days\": {\n",
    "                        \"type\": \"integer\",\n",
    "                        \"description\": \"The number of days to forecast\",\n",
    "                    }\n",
    "                },\n",
    "                \"required\": [\"location\", \"format\", \"num_days\"]\n",
    "            },\n",
    "        }\n",
    "    },\n",
    "]"
   ]
  },
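   {
    "cell_type": "markdown",
    "id": "bb20c3d4-1e5f-4a70-9b82-0000000000b1",
    "metadata": {},
    "source": [
     "Hand-writing these JSON schemas is verbose and error-prone. As an aside, such a schema can be derived from a Python signature with `inspect` and `typing` (a rough sketch of our own; `function_to_tool` is not part of the `openai` package):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "bb20c3d4-1e5f-4a70-9b82-0000000000b2",
    "metadata": {},
    "outputs": [],
    "source": [
     "import inspect\n",
     "from typing import Literal, get_args, get_origin\n",
     "\n",
     "PY_TO_JSON = {str: \"string\", int: \"integer\", float: \"number\", bool: \"boolean\"}\n",
     "\n",
     "def function_to_tool(func):\n",
     "    \"\"\"Derive an OpenAI-style tool schema from a Python signature (rough sketch).\"\"\"\n",
     "    props = {}\n",
     "    for name, param in inspect.signature(func).parameters.items():\n",
     "        ann = param.annotation\n",
     "        if get_origin(ann) is Literal:\n",
     "            # Literal[...] maps naturally onto a JSON-schema enum\n",
     "            props[name] = {\"type\": \"string\", \"enum\": list(get_args(ann))}\n",
     "        else:\n",
     "            props[name] = {\"type\": PY_TO_JSON.get(ann, \"string\")}\n",
     "    return {\n",
     "        \"type\": \"function\",\n",
     "        \"function\": {\n",
     "            \"name\": func.__name__,\n",
     "            \"description\": (func.__doc__ or \"\").strip(),\n",
     "            \"parameters\": {\"type\": \"object\", \"properties\": props, \"required\": list(props)},\n",
     "        },\n",
     "    }\n",
     "\n",
     "def get_current_weather(location: str, format: Literal[\"celsius\", \"fahrenheit\"]):\n",
     "    \"\"\"Get the current weather\"\"\"\n",
     "\n",
     "function_to_tool(get_current_weather)"
    ]
   },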
  {
   "cell_type": "markdown",
   "id": "103dcdc8-abdc-4dcb-af03-9f1f5303bfdf",
   "metadata": {},
   "source": [
     "### Building messages (`get_current_weather`)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "79af7100-9de1-4683-8dc9-702b69bddc7b",
   "metadata": {},
   "source": [
     "- `messages` is not limited to a list of dicts; it also accepts `ChatCompletionMessage` objects (which are essentially dicts)\n",
     "    - `{'role': '', 'content': ''}`\n",
     "    - roles: system, user, assistant, tool\n",
     "    - user/assistant messages form the question/answer turns;"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "23b839ad-4847-4c49-b124-1626353c0a7c",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:30:24.825357Z",
     "iopub.status.busy": "2024-07-19T14:30:24.824734Z",
     "iopub.status.idle": "2024-07-19T14:30:24.839050Z",
     "shell.execute_reply": "2024-07-19T14:30:24.836926Z",
     "shell.execute_reply.started": "2024-07-19T14:30:24.825312Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'role': 'system',\n",
       "  'content': \"Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.\"},\n",
       " {'role': 'user', 'content': \"What's the weather like today\"}]"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages = []\n",
    "messages.append({\"role\": \"system\", \n",
    "                 \"content\": \"Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.\"})\n",
    "messages.append({\"role\": \"user\", \n",
    "                 \"content\": \"What's the weather like today\"})\n",
    "messages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "3281e1ea-4a02-4269-8ff6-53345a074ede",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:30:27.347487Z",
     "iopub.status.busy": "2024-07-19T14:30:27.346874Z",
     "iopub.status.idle": "2024-07-19T14:30:28.967457Z",
     "shell.execute_reply": "2024-07-19T14:30:28.965237Z",
     "shell.execute_reply.started": "2024-07-19T14:30:27.347443Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatCompletion(id='chatcmpl-9miuqM8DcABewg8LJTM9p7F6FRGWk', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Could you please provide me with the city and state (or country) for which you would like to know the current weather?', role='assistant', function_call=None, tool_calls=None))], created=1721399428, model='gpt-4o-2024-05-13', object='chat.completion', service_tier=None, system_fingerprint='fp_5e997b69d8', usage=CompletionUsage(completion_tokens=26, prompt_tokens=182, total_tokens=208))"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat_response = chat_completion_request(\n",
    "    messages, tools=tools\n",
    ")\n",
    "chat_response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "791ca8d2-5c4b-423e-bbcc-b152542a49cb",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:31:00.779515Z",
     "iopub.status.busy": "2024-07-19T14:31:00.778898Z",
     "iopub.status.idle": "2024-07-19T14:31:00.793478Z",
     "shell.execute_reply": "2024-07-19T14:31:00.791377Z",
     "shell.execute_reply.started": "2024-07-19T14:31:00.779471Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(ChatCompletionMessage(content='Could you please provide me with the city and state (or country) for which you would like to know the current weather?', role='assistant', function_call=None, tool_calls=None),\n",
       " openai.types.chat.chat_completion_message.ChatCompletionMessage)"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat_response.choices[0].message, type(chat_response.choices[0].message)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "88ff58de-73fe-4d07-8f96-6eba1e7131e9",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:34:00.629963Z",
     "iopub.status.busy": "2024-07-19T14:34:00.628185Z",
     "iopub.status.idle": "2024-07-19T14:34:00.641432Z",
     "shell.execute_reply": "2024-07-19T14:34:00.639431Z",
     "shell.execute_reply.started": "2024-07-19T14:34:00.629904Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatCompletionMessage(content='Could you please provide me with the city and state (or country) for which you would like to know the current weather?', role='assistant', function_call=None, tool_calls=None)"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "assistant_message = chat_response.choices[0].message\n",
    "messages.append(assistant_message)\n",
    "assistant_message"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "24d19638-4d17-40dd-ac94-b7243603cbd8",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:36:55.671097Z",
     "iopub.status.busy": "2024-07-19T14:36:55.670484Z",
     "iopub.status.idle": "2024-07-19T14:36:55.683267Z",
     "shell.execute_reply": "2024-07-19T14:36:55.681112Z",
     "shell.execute_reply.started": "2024-07-19T14:36:55.671052Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'role': 'system',\n",
       "  'content': \"Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.\"},\n",
       " {'role': 'user', 'content': \"What's the weather like today\"},\n",
       " ChatCompletionMessage(content='Could you please provide me with the city and state (or country) for which you would like to know the current weather?', role='assistant', function_call=None, tool_calls=None)]"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "5313b125-4a77-4a3a-97da-a797577ca2cc",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:37:20.330768Z",
     "iopub.status.busy": "2024-07-19T14:37:20.330154Z",
     "iopub.status.idle": "2024-07-19T14:37:21.671882Z",
     "shell.execute_reply": "2024-07-19T14:37:21.669798Z",
     "shell.execute_reply.started": "2024-07-19T14:37:20.330724Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_ZugwC3KTxMeYjWQ9SZj0rSk6', function=Function(arguments='{\"location\":\"Glasgow, Scotland\",\"format\":\"celsius\"}', name='get_current_weather'), type='function')])"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages.append({\"role\": \"user\", \"content\": \"I'm in Glasgow, Scotland.\"})\n",
    "chat_response = chat_completion_request(\n",
    "    messages, tools=tools\n",
    ")\n",
    "assistant_message = chat_response.choices[0].message\n",
    "messages.append(assistant_message)\n",
    "assistant_message"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "c1963d9f-e55b-41b6-a85f-7e121ecf6758",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:38:00.401375Z",
     "iopub.status.busy": "2024-07-19T14:38:00.401080Z",
     "iopub.status.idle": "2024-07-19T14:38:00.411388Z",
     "shell.execute_reply": "2024-07-19T14:38:00.409508Z",
     "shell.execute_reply.started": "2024-07-19T14:38:00.401354Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'role': 'system',\n",
       "  'content': \"Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.\"},\n",
       " {'role': 'user', 'content': \"What's the weather like today\"},\n",
       " ChatCompletionMessage(content='Could you please provide me with the city and state (or country) for which you would like to know the current weather?', role='assistant', function_call=None, tool_calls=None),\n",
       " {'role': 'user', 'content': \"I'm in Glasgow, Scotland.\"},\n",
       " ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_ZugwC3KTxMeYjWQ9SZj0rSk6', function=Function(arguments='{\"location\":\"Glasgow, Scotland\",\"format\":\"celsius\"}', name='get_current_weather'), type='function')])]"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "1ba08dc8-6dbe-4d34-84e1-423d96e95706",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:40:52.088498Z",
     "iopub.status.busy": "2024-07-19T14:40:52.087878Z",
     "iopub.status.idle": "2024-07-19T14:40:52.101032Z",
     "shell.execute_reply": "2024-07-19T14:40:52.098837Z",
     "shell.execute_reply.started": "2024-07-19T14:40:52.088453Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'role': 'system',\n",
       "  'content': \"Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.\"},\n",
       " {'role': 'user', 'content': \"What's the weather like today\"},\n",
       " ChatCompletionMessage(content='Could you please provide me with the city and state (or country) for which you would like to know the current weather?', role='assistant', function_call=None, tool_calls=None)]"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages = messages[:-2]\n",
    "messages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "68a8747c-f875-4451-812a-639cbbca02b1",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:41:15.454469Z",
     "iopub.status.busy": "2024-07-19T14:41:15.453841Z",
     "iopub.status.idle": "2024-07-19T14:41:16.766596Z",
     "shell.execute_reply": "2024-07-19T14:41:16.764403Z",
     "shell.execute_reply.started": "2024-07-19T14:41:15.454425Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_9onbR17EcwDBhlC0gUF3wQl0', function=Function(arguments='{\"location\":\"Beijing, China\",\"format\":\"celsius\"}', name='get_current_weather'), type='function')])"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages.append({\"role\": \"user\", \"content\": \"I'm in Beijing, China.\"})\n",
    "chat_response = chat_completion_request(\n",
    "    messages, tools=tools\n",
    ")\n",
    "assistant_message = chat_response.choices[0].message\n",
    "messages.append(assistant_message)\n",
    "assistant_message"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ad973d3e-3826-4786-8301-00945fadc712",
   "metadata": {},
   "source": [
     "### Building messages (`get_n_day_weather_forecast`)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "2f6e88fe-f319-4bc1-98d9-c3ad58eb1fdf",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:43:43.008207Z",
     "iopub.status.busy": "2024-07-19T14:43:43.007567Z",
     "iopub.status.idle": "2024-07-19T14:43:45.455922Z",
     "shell.execute_reply": "2024-07-19T14:43:45.453751Z",
     "shell.execute_reply.started": "2024-07-19T14:43:43.008162Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatCompletionMessage(content='Could you please specify the number of days you would like the forecast for?', role='assistant', function_call=None, tool_calls=None)"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages = []\n",
    "messages.append({\"role\": \"system\", \n",
    "                 \"content\": \"Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.\"})\n",
    "messages.append({\"role\": \"user\", \n",
    "                 \"content\": \"what is the weather going to be like in Beijing, China over the next x days\"})\n",
    "chat_response = chat_completion_request(\n",
    "    messages, tools=tools\n",
    ")\n",
    "assistant_message = chat_response.choices[0].message\n",
    "messages.append(assistant_message)\n",
    "assistant_message"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "ebdbf516-9ead-44ce-87f0-44d60f0a6a36",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:43:50.333225Z",
     "iopub.status.busy": "2024-07-19T14:43:50.332628Z",
     "iopub.status.idle": "2024-07-19T14:43:51.794247Z",
     "shell.execute_reply": "2024-07-19T14:43:51.792137Z",
     "shell.execute_reply.started": "2024-07-19T14:43:50.333180Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_uVCEvzGM0IVFMkhWoBdBdNoL', function=Function(arguments='{\"location\":\"Beijing, China\",\"format\":\"celsius\",\"num_days\":5}', name='get_n_day_weather_forecast'), type='function')]))"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages.append({\"role\": \"user\", \"content\": \"5 days\"})\n",
    "chat_response = chat_completion_request(\n",
    "    messages, tools=tools\n",
    ")\n",
    "chat_response.choices[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "edcca1c2-3c82-47c3-a1fd-f534ced9601c",
   "metadata": {},
   "source": [
    "### tool_choice"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2ac07f03-ff65-44eb-b803-91d9fea12fee",
   "metadata": {},
   "source": [
     "- `tool_choice=\"none\"` forbids tool calls; `\"auto\"` (the default when tools are supplied) lets the model decide;\n",
     "- `tool_choice={\"type\": \"function\", \"function\": {\"name\": \"get_n_day_weather_forecast\"}}` forces a specific function;\n",
     "- Recent models such as gpt-4o (and gpt-3.5-turbo-1106 onwards) can also call multiple functions in one turn (parallel function calling)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "85a57cd7-1f14-4c6a-b33b-11c362affb2c",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:47:12.958333Z",
     "iopub.status.busy": "2024-07-19T14:47:12.957659Z",
     "iopub.status.idle": "2024-07-19T14:47:14.447412Z",
     "shell.execute_reply": "2024-07-19T14:47:14.445245Z",
     "shell.execute_reply.started": "2024-07-19T14:47:12.958286Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatCompletionMessage(content='Sure, I will fetch the current weather for Toronto, Canada in Celsius.', role='assistant', function_call=None, tool_calls=None)"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages = []\n",
    "messages.append({\"role\": \"system\", \"content\": \"Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.\"})\n",
     "messages.append({\"role\": \"user\", \"content\": \"Give me the current weather (use Celsius) for Toronto, Canada.\"})\n",
    "chat_response = chat_completion_request(\n",
    "    messages, tools=tools, tool_choice=\"none\"\n",
    ")\n",
    "chat_response.choices[0].message"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "3b787f83-e792-48a4-9a4b-f26be880d0eb",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:47:25.677297Z",
     "iopub.status.busy": "2024-07-19T14:47:25.676693Z",
     "iopub.status.idle": "2024-07-19T14:47:28.688320Z",
     "shell.execute_reply": "2024-07-19T14:47:28.686244Z",
     "shell.execute_reply.started": "2024-07-19T14:47:25.677253Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_Ija511nv3AbyKF7FdSqfOCjh', function=Function(arguments='{\"location\": \"Toronto, Canada\", \"format\": \"celsius\"}', name='get_current_weather'), type='function'), ChatCompletionMessageToolCall(id='call_x6XAima2QsE1FghBPTx3UnMU', function=Function(arguments='{\"location\": \"Toronto, Canada\", \"format\": \"celsius\", \"num_days\": 3}', name='get_n_day_weather_forecast'), type='function')])"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# if we don't force the model to use get_n_day_weather_forecast it may not\n",
    "messages = []\n",
    "messages.append({\"role\": \"system\", \"content\": \"Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.\"})\n",
    "messages.append({\"role\": \"user\", \"content\": \"Give me a weather report for Toronto, Canada.\"})\n",
    "chat_response = chat_completion_request(\n",
    "    messages, tools=tools\n",
    ")\n",
    "chat_response.choices[0].message"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "c26431f3-a34a-4ca6-af38-ad5333406be0",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:47:40.892275Z",
     "iopub.status.busy": "2024-07-19T14:47:40.891662Z",
     "iopub.status.idle": "2024-07-19T14:47:40.903725Z",
     "shell.execute_reply": "2024-07-19T14:47:40.901599Z",
     "shell.execute_reply.started": "2024-07-19T14:47:40.892230Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatCompletion(id='chatcmpl-9mjBGPqxkqPlTRVZe39IHGlMBLRXD', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_Ija511nv3AbyKF7FdSqfOCjh', function=Function(arguments='{\"location\": \"Toronto, Canada\", \"format\": \"celsius\"}', name='get_current_weather'), type='function'), ChatCompletionMessageToolCall(id='call_x6XAima2QsE1FghBPTx3UnMU', function=Function(arguments='{\"location\": \"Toronto, Canada\", \"format\": \"celsius\", \"num_days\": 3}', name='get_n_day_weather_forecast'), type='function')]))], created=1721400446, model='gpt-4o-2024-05-13', object='chat.completion', service_tier=None, system_fingerprint='fp_c4e5b6fa31', usage=CompletionUsage(completion_tokens=68, prompt_tokens=187, total_tokens=255))"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat_response"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c99d3dda-aa38-467b-b229-56199f126abb",
   "metadata": {},
   "source": [
    "## argument generation (sql)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "74f38cbc-455e-430c-b6bd-556b5420d731",
   "metadata": {},
   "source": [
    "- https://www.sqlitetutorial.net/sqlite-sample-database/\n",
    "    - https://www.sqlitetutorial.net/wp-content/uploads/2018/03/chinook.zip\n",
    "\n",
     "Steps to invoke a function call using the Chat Completions API:\n",
     "\n",
     "Step 1: Prompt the model with content that may lead it to select a tool. The available tools (function names, descriptions, and parameter schemas) are defined in the `tools` list passed with the API call. If the model selects a tool, the function name and arguments are included in the response.\n",
     "\n",
     "Step 2: Check programmatically whether the model wanted to call a function. If so, proceed to step 3.\n",
     "\n",
     "Step 3: Extract the function name and arguments from the response, and call the function with those arguments. **Append the result to messages** as a `tool` role message referencing the `tool_call_id`.\n",
     "\n",
     "Step 4: Invoke the Chat Completions API again with the updated message list to get the final response."
   ]
  },
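   {
    "cell_type": "markdown",
    "id": "cc30d4e5-2f60-4b81-ac93-0000000000c1",
    "metadata": {},
    "source": [
     "Steps 2 and 3 can be sketched as a small reusable helper (the name `handle_tool_calls` is ours, not from the cookbook): it executes every tool call the assistant requested and appends each result to `messages` as a `tool` role message, ready for the follow-up API call of step 4."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "cc30d4e5-2f60-4b81-ac93-0000000000c2",
    "metadata": {},
    "outputs": [],
    "source": [
     "import json\n",
     "\n",
     "def handle_tool_calls(assistant_message, messages, registry):\n",
     "    \"\"\"Execute requested tool calls and append results to messages (steps 2-3).\"\"\"\n",
     "    messages.append(assistant_message)    # keep the assistant turn in history\n",
     "    if not assistant_message.tool_calls:  # step 2: the model answered directly\n",
     "        return False\n",
     "    for tool_call in assistant_message.tool_calls:\n",
     "        func = registry[tool_call.function.name]         # step 3: look up,\n",
     "        args = json.loads(tool_call.function.arguments)  # parse the arguments,\n",
     "        result = func(**args)                            # and execute\n",
     "        messages.append({\n",
     "            \"role\": \"tool\",\n",
     "            \"tool_call_id\": tool_call.id,\n",
     "            \"content\": json.dumps(result),\n",
     "        })\n",
     "    return True  # step 4: call the Chat Completions API again with `messages`"
    ]
   },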
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "b251a5cb-5bfe-436e-a6b3-2b032adecc04",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:52:21.816072Z",
     "iopub.status.busy": "2024-07-19T14:52:21.815458Z",
     "iopub.status.idle": "2024-07-19T14:52:21.826482Z",
     "shell.execute_reply": "2024-07-19T14:52:21.824756Z",
     "shell.execute_reply.started": "2024-07-19T14:52:21.816029Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Opened database successfully\n"
     ]
    }
   ],
   "source": [
    "import sqlite3\n",
    "\n",
    "conn = sqlite3.connect(\"data/chinook.db\")\n",
    "print(\"Opened database successfully\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "de5dd89f-bcc6-4252-aa80-d259585de524",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:52:22.929094Z",
     "iopub.status.busy": "2024-07-19T14:52:22.928512Z",
     "iopub.status.idle": "2024-07-19T14:52:22.943756Z",
     "shell.execute_reply": "2024-07-19T14:52:22.941661Z",
     "shell.execute_reply.started": "2024-07-19T14:52:22.929050Z"
    }
   },
   "outputs": [],
   "source": [
    "def get_table_names(conn):\n",
    "    \"\"\"Return a list of table names.\"\"\"\n",
    "    table_names = []\n",
    "    tables = conn.execute(\"SELECT name FROM sqlite_master WHERE type='table';\")\n",
    "    for table in tables.fetchall():\n",
    "        table_names.append(table[0])\n",
    "    return table_names\n",
    "\n",
    "\n",
    "def get_column_names(conn, table_name):\n",
    "    \"\"\"Return a list of column names.\"\"\"\n",
    "    column_names = []\n",
    "    columns = conn.execute(f\"PRAGMA table_info('{table_name}');\").fetchall()\n",
    "    for col in columns:\n",
    "        column_names.append(col[1])\n",
    "    return column_names\n",
    "\n",
    "\n",
    "def get_database_info(conn):\n",
    "    \"\"\"Return a list of dicts containing the table name and columns for each table in the database.\"\"\"\n",
    "    table_dicts = []\n",
    "    for table_name in get_table_names(conn):\n",
    "        columns_names = get_column_names(conn, table_name)\n",
    "        table_dicts.append({\"table_name\": table_name, \"column_names\": columns_names})\n",
    "    return table_dicts"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "361bffc6-970b-4351-a3cf-321c82ca1b93",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:52:25.113909Z",
     "iopub.status.busy": "2024-07-19T14:52:25.113288Z",
     "iopub.status.idle": "2024-07-19T14:52:25.132352Z",
     "shell.execute_reply": "2024-07-19T14:52:25.130346Z",
     "shell.execute_reply.started": "2024-07-19T14:52:25.113865Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'table_name': 'albums', 'column_names': ['AlbumId', 'Title', 'ArtistId']},\n",
       " {'table_name': 'sqlite_sequence', 'column_names': ['name', 'seq']},\n",
       " {'table_name': 'artists', 'column_names': ['ArtistId', 'Name']},\n",
       " {'table_name': 'customers',\n",
       "  'column_names': ['CustomerId',\n",
       "   'FirstName',\n",
       "   'LastName',\n",
       "   'Company',\n",
       "   'Address',\n",
       "   'City',\n",
       "   'State',\n",
       "   'Country',\n",
       "   'PostalCode',\n",
       "   'Phone',\n",
       "   'Fax',\n",
       "   'Email',\n",
       "   'SupportRepId']},\n",
       " {'table_name': 'employees',\n",
       "  'column_names': ['EmployeeId',\n",
       "   'LastName',\n",
       "   'FirstName',\n",
       "   'Title',\n",
       "   'ReportsTo',\n",
       "   'BirthDate',\n",
       "   'HireDate',\n",
       "   'Address',\n",
       "   'City',\n",
       "   'State',\n",
       "   'Country',\n",
       "   'PostalCode',\n",
       "   'Phone',\n",
       "   'Fax',\n",
       "   'Email']},\n",
       " {'table_name': 'genres', 'column_names': ['GenreId', 'Name']},\n",
       " {'table_name': 'invoices',\n",
       "  'column_names': ['InvoiceId',\n",
       "   'CustomerId',\n",
       "   'InvoiceDate',\n",
       "   'BillingAddress',\n",
       "   'BillingCity',\n",
       "   'BillingState',\n",
       "   'BillingCountry',\n",
       "   'BillingPostalCode',\n",
       "   'Total']},\n",
       " {'table_name': 'invoice_items',\n",
       "  'column_names': ['InvoiceLineId',\n",
       "   'InvoiceId',\n",
       "   'TrackId',\n",
       "   'UnitPrice',\n",
       "   'Quantity']},\n",
       " {'table_name': 'media_types', 'column_names': ['MediaTypeId', 'Name']},\n",
       " {'table_name': 'playlists', 'column_names': ['PlaylistId', 'Name']},\n",
       " {'table_name': 'playlist_track', 'column_names': ['PlaylistId', 'TrackId']},\n",
       " {'table_name': 'tracks',\n",
       "  'column_names': ['TrackId',\n",
       "   'Name',\n",
       "   'AlbumId',\n",
       "   'MediaTypeId',\n",
       "   'GenreId',\n",
       "   'Composer',\n",
       "   'Milliseconds',\n",
       "   'Bytes',\n",
       "   'UnitPrice']},\n",
       " {'table_name': 'sqlite_stat1', 'column_names': ['tbl', 'idx', 'stat']}]"
      ]
     },
     "execution_count": 33,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "database_schema_dict = get_database_info(conn)\n",
    "database_schema_string = \"\\n\".join(\n",
    "    [\n",
    "        f\"Table: {table['table_name']}\\nColumns: {', '.join(table['column_names'])}\"\n",
    "        for table in database_schema_dict\n",
    "    ]\n",
    ")\n",
    "database_schema_dict"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "ab735a86-bb75-4a85-8bc4-c4b650ec7405",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:53:34.697370Z",
     "iopub.status.busy": "2024-07-19T14:53:34.696751Z",
     "iopub.status.idle": "2024-07-19T14:53:34.708940Z",
     "shell.execute_reply": "2024-07-19T14:53:34.706795Z",
     "shell.execute_reply.started": "2024-07-19T14:53:34.697326Z"
    }
   },
   "outputs": [],
   "source": [
    "tools = [\n",
    "    {\n",
    "        \"type\": \"function\",\n",
    "        \"function\": {\n",
    "            \"name\": \"ask_database\",\n",
    "            \"description\": \"Use this function to answer user questions about music. Input should be a fully formed SQL query.\",\n",
    "            \"parameters\": {\n",
    "                \"type\": \"object\",\n",
    "                \"properties\": {\n",
    "                    \"query\": {\n",
    "                        \"type\": \"string\",\n",
    "                        \"description\": f\"\"\"\n",
    "                                SQL query extracting info to answer the user's question.\n",
    "                                SQL should be written using this database schema:\n",
    "                                {database_schema_string}\n",
    "                                The query should be returned in plain text, not in JSON.\n",
    "                                \"\"\",\n",
    "                    }\n",
    "                },\n",
    "                \"required\": [\"query\"],\n",
    "            },\n",
    "        }\n",
    "    }\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "3fd7d8c3-6ce7-4bec-a377-ddd14010bb31",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:53:43.685541Z",
     "iopub.status.busy": "2024-07-19T14:53:43.684933Z",
     "iopub.status.idle": "2024-07-19T14:53:43.695475Z",
     "shell.execute_reply": "2024-07-19T14:53:43.693478Z",
     "shell.execute_reply.started": "2024-07-19T14:53:43.685496Z"
    }
   },
   "outputs": [],
   "source": [
    "def ask_database(conn, query):\n",
    "    \"\"\"Function to query SQLite database with a provided SQL query.\"\"\"\n",
    "    try:\n",
    "        results = str(conn.execute(query).fetchall())\n",
    "    except Exception as e:\n",
    "        results = f\"query failed with error: {e}\"\n",
    "    return results"
   ]
  },
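  {
   "cell_type": "markdown",
   "id": "b7e2d4f8-1c3a-4a5e-8d90-6f2b9c0e1a34",
   "metadata": {},
   "source": [
    "Because `ask_database` returns errors as strings rather than raising, a bad model-generated query simply flows back to the model as an ordinary tool result it can react to. A quick sketch against a throwaway in-memory database (the `albums` row below is made up for illustration):\n",
    "\n",
    "```python\n",
    "import sqlite3\n",
    "\n",
    "def ask_database(conn, query):\n",
    "    \"\"\"Same helper as above: run a SQL query, returning the rows or the error text.\"\"\"\n",
    "    try:\n",
    "        results = str(conn.execute(query).fetchall())\n",
    "    except Exception as e:\n",
    "        results = f\"query failed with error: {e}\"\n",
    "    return results\n",
    "\n",
    "demo = sqlite3.connect(\":memory:\")\n",
    "demo.execute(\"CREATE TABLE albums (AlbumId INTEGER, Title TEXT)\")\n",
    "demo.execute(\"INSERT INTO albums VALUES (1, 'Greatest Hits')\")\n",
    "\n",
    "print(ask_database(demo, \"SELECT Title FROM albums\"))     # [('Greatest Hits',)]\n",
    "print(ask_database(demo, \"SELECT * FROM no_such_table\"))  # query failed with error: ...\n",
    "```"
   ]
  },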
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "6420993a-f6a3-4c46-acb5-aed0728d8812",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:55:47.068832Z",
     "iopub.status.busy": "2024-07-19T14:55:47.068227Z",
     "iopub.status.idle": "2024-07-19T14:55:49.836733Z",
     "shell.execute_reply": "2024-07-19T14:55:49.834604Z",
     "shell.execute_reply.started": "2024-07-19T14:55:47.068788Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_fr3IUDJgvMxonRTweFWNPlH7', function=Function(arguments='{\"query\":\"SELECT albums.Title, COUNT(tracks.TrackId) AS TrackCount FROM albums JOIN tracks ON albums.AlbumId = tracks.AlbumId GROUP BY albums.AlbumId ORDER BY TrackCount DESC LIMIT 1;\"}', name='ask_database'), type='function')])"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Step 1: Prompt with content that may result in a function call. In this case the model can identify that the information requested by the user is potentially available in the database schema passed to the model in the tools description.\n",
    "messages = [{\n",
    "    \"role\": \"user\", \n",
    "    \"content\": \"What is the name of the album with the most tracks?\"\n",
    "}]\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "    model='gpt-4o', \n",
    "    messages=messages, \n",
    "    tools=tools, \n",
    "    tool_choice=\"auto\"\n",
    ")\n",
    "\n",
    "# Append the message to messages list\n",
    "response_message = response.choices[0].message \n",
    "messages.append(response_message)\n",
    "response_message"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "38171ca9-26c8-4617-934d-94b6acfb049e",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:57:52.469072Z",
     "iopub.status.busy": "2024-07-19T14:57:52.468458Z",
     "iopub.status.idle": "2024-07-19T14:57:52.482386Z",
     "shell.execute_reply": "2024-07-19T14:57:52.480218Z",
     "shell.execute_reply.started": "2024-07-19T14:57:52.469027Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "('ask_database',\n",
       " 'SELECT albums.Title, COUNT(tracks.TrackId) AS TrackCount FROM albums JOIN tracks ON albums.AlbumId = tracks.AlbumId GROUP BY albums.AlbumId ORDER BY TrackCount DESC LIMIT 1;')"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "response_message.tool_calls[0].function.name, json.loads(response_message.tool_calls[0].function.arguments)['query']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "id": "0f4d156e-873a-4e9f-8497-6621460c19a5",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:59:14.497259Z",
     "iopub.status.busy": "2024-07-19T14:59:14.496638Z",
     "iopub.status.idle": "2024-07-19T14:59:16.178526Z",
     "shell.execute_reply": "2024-07-19T14:59:16.176348Z",
     "shell.execute_reply.started": "2024-07-19T14:59:14.497215Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The album with the most tracks is \"Greatest Hits,\" which features a total of 57 tracks.\n"
     ]
    }
   ],
   "source": [
    "# Step 2: determine if the response from the model includes a tool call.\n",
    "tool_calls = response_message.tool_calls\n",
    "if tool_calls:\n",
    "    # If so, the model has returned the name of the tool/function to call and its argument(s)\n",
    "    tool_call_id = tool_calls[0].id\n",
    "    tool_function_name = tool_calls[0].function.name\n",
    "    tool_query_string = json.loads(tool_calls[0].function.arguments)['query']\n",
    "\n",
    "    # Step 3: Call the function and retrieve results. Append the results to the messages list.      \n",
    "    if tool_function_name == 'ask_database':\n",
    "        results = ask_database(conn, tool_query_string)\n",
    "        \n",
    "        messages.append({\n",
    "            \"role\":\"tool\", \n",
    "            \"tool_call_id\":tool_call_id, \n",
    "            \"name\": tool_function_name, \n",
    "            \"content\":results\n",
    "        })\n",
    "        \n",
    "        # Step 4: Invoke the chat completions API with the function response appended to the messages list\n",
    "        # Note that messages with role 'tool' must be a response to a preceding message with 'tool_calls'\n",
    "        model_response_with_function_call = client.chat.completions.create(\n",
    "            model=\"gpt-4o\",\n",
    "            messages=messages,\n",
    "        )  # get a new response from the model where it can see the function response\n",
    "        print(model_response_with_function_call.choices[0].message.content)\n",
    "    else: \n",
    "        print(f\"Error: function {tool_function_name} does not exist\")\n",
    "else: \n",
    "    # Model did not identify a function to call, result can be returned to the user \n",
    "    print(response_message.content) "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "id": "145bd681-8f75-4189-8f8f-4493d7ce8e36",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T14:59:20.294439Z",
     "iopub.status.busy": "2024-07-19T14:59:20.293826Z",
     "iopub.status.idle": "2024-07-19T14:59:20.306299Z",
     "shell.execute_reply": "2024-07-19T14:59:20.304228Z",
     "shell.execute_reply.started": "2024-07-19T14:59:20.294394Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'role': 'user',\n",
       "  'content': 'What is the name of the album with the most tracks?'},\n",
       " ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_fr3IUDJgvMxonRTweFWNPlH7', function=Function(arguments='{\"query\":\"SELECT albums.Title, COUNT(tracks.TrackId) AS TrackCount FROM albums JOIN tracks ON albums.AlbumId = tracks.AlbumId GROUP BY albums.AlbumId ORDER BY TrackCount DESC LIMIT 1;\"}', name='ask_database'), type='function')]),\n",
       " {'role': 'tool',\n",
       "  'tool_call_id': 'call_fr3IUDJgvMxonRTweFWNPlH7',\n",
       "  'name': 'ask_database',\n",
       "  'content': \"[('Greatest Hits', 57)]\"}]"
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5399edbb-1727-4888-bc95-66d818bcf94f",
   "metadata": {},
   "source": [
    "## RAG & function calling (functions with a knowledge base)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dca9fc67-ac8c-4665-8d8b-7b672bc71832",
   "metadata": {},
   "source": [
    "- https://cookbook.openai.com/examples/how_to_call_functions_for_knowledge_retrieval"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "id": "5362447c-e98c-4bf1-82d7-4445367d0d04",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T15:02:13.969796Z",
     "iopub.status.busy": "2024-07-19T15:02:13.969081Z",
     "iopub.status.idle": "2024-07-19T15:02:13.979739Z",
     "shell.execute_reply": "2024-07-19T15:02:13.977549Z",
     "shell.execute_reply.started": "2024-07-19T15:02:13.969744Z"
    }
   },
   "outputs": [],
   "source": [
    "# !pip install scipy --quiet\n",
    "# !pip install tenacity --quiet\n",
    "# !pip install tiktoken==0.3.3 --quiet\n",
    "# !pip install termcolor --quiet\n",
    "# !pip install openai --quiet\n",
    "# !pip install arxiv --quiet\n",
    "# !pip install pandas --quiet\n",
    "# !pip install PyPDF2 --quiet\n",
    "# !pip install tqdm --quiet"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "id": "3b754e6a-f6ae-4575-8021-a8eabc7f0d66",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T15:11:47.605814Z",
     "iopub.status.busy": "2024-07-19T15:11:47.605437Z",
     "iopub.status.idle": "2024-07-19T15:11:47.637790Z",
     "shell.execute_reply": "2024-07-19T15:11:47.637312Z",
     "shell.execute_reply.started": "2024-07-19T15:11:47.605787Z"
    }
   },
   "outputs": [],
   "source": [
    "import os\n",
    "import arxiv\n",
    "import ast\n",
    "import concurrent\n",
    "import json\n",
    "import pandas as pd\n",
    "import tiktoken\n",
    "from csv import writer\n",
    "from IPython.display import display, Markdown, Latex\n",
    "from openai import OpenAI\n",
    "from PyPDF2 import PdfReader\n",
    "from scipy import spatial\n",
    "from tenacity import retry, wait_random_exponential, stop_after_attempt\n",
    "from tqdm import tqdm\n",
    "from termcolor import colored\n",
    "\n",
    "GPT_MODEL = \"gpt-4o\"\n",
    "EMBEDDING_MODEL = \"text-embedding-ada-002\"\n",
    "client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "id": "30686791-a419-4b90-99e2-f3e8801d2793",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T15:02:45.517465Z",
     "iopub.status.busy": "2024-07-19T15:02:45.517046Z",
     "iopub.status.idle": "2024-07-19T15:02:45.527917Z",
     "shell.execute_reply": "2024-07-19T15:02:45.525752Z",
     "shell.execute_reply.started": "2024-07-19T15:02:45.517441Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Directory './data/papers' created successfully.\n"
     ]
    }
   ],
   "source": [
    "directory = './data/papers'\n",
    "\n",
    "# Check if the directory already exists\n",
    "if not os.path.exists(directory):\n",
    "    # If the directory doesn't exist, create it and any necessary intermediate directories\n",
    "    os.makedirs(directory)\n",
    "    print(f\"Directory '{directory}' created successfully.\")\n",
    "else:\n",
    "    # If the directory already exists, print a message indicating it\n",
    "    print(f\"Directory '{directory}' already exists.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "id": "ddf76cad-dc2e-4c3d-b7ff-518c69b7653a",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T15:03:10.704286Z",
     "iopub.status.busy": "2024-07-19T15:03:10.703667Z",
     "iopub.status.idle": "2024-07-19T15:03:10.728892Z",
     "shell.execute_reply": "2024-07-19T15:03:10.727533Z",
     "shell.execute_reply.started": "2024-07-19T15:03:10.704240Z"
    }
   },
   "outputs": [],
   "source": [
    "# Set a directory to store downloaded papers\n",
    "data_dir = os.path.join(os.curdir, \"data\", \"papers\")\n",
    "paper_dir_filepath = \"./data/arxiv_library.csv\"\n",
    "\n",
    "# Generate a blank dataframe where we can store downloaded files\n",
    "df = pd.DataFrame(list())\n",
    "df.to_csv(paper_dir_filepath)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "id": "22eb05d0-01a4-40a3-aeaf-e65c11ff2154",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T15:04:27.420098Z",
     "iopub.status.busy": "2024-07-19T15:04:27.419453Z",
     "iopub.status.idle": "2024-07-19T15:04:27.442009Z",
     "shell.execute_reply": "2024-07-19T15:04:27.439932Z",
     "shell.execute_reply.started": "2024-07-19T15:04:27.420051Z"
    }
   },
   "outputs": [],
   "source": [
    "@retry(wait=wait_random_exponential(min=1, max=40), stop=stop_after_attempt(3))\n",
    "def embedding_request(text):\n",
    "    response = client.embeddings.create(input=text, model=EMBEDDING_MODEL)\n",
    "    return response\n",
    "\n",
    "\n",
    "@retry(wait=wait_random_exponential(min=1, max=40), stop=stop_after_attempt(3))\n",
    "def get_articles(query, library=paper_dir_filepath, top_k=5):\n",
    "    \"\"\"Get the top_k arXiv articles matching a user's query, sorted by relevance.\n",
    "    Also downloads the PDFs and records them in arxiv_library.csv so they can be retrieved later by read_article_and_summarize.\n",
    "    \"\"\"\n",
    "    client = arxiv.Client()\n",
    "    search = arxiv.Search(\n",
    "        query=query,\n",
    "        max_results=top_k,\n",
    "        sort_by = arxiv.SortCriterion.SubmittedDate\n",
    "    )\n",
    "    result_list = []\n",
    "    for result in client.results(search):\n",
    "        result_dict = {}\n",
    "        result_dict.update({\"title\": result.title})\n",
    "        result_dict.update({\"summary\": result.summary})\n",
    "\n",
    "        # result.links: the first is the abstract page, the second is the PDF\n",
    "        result_dict.update({\"article_url\": [x.href for x in result.links][0]})\n",
    "        result_dict.update({\"pdf_url\": [x.href for x in result.links][1]})\n",
    "        result_list.append(result_dict)\n",
    "\n",
    "        # Store references in library file\n",
    "        response = embedding_request(text=result.title)\n",
    "        file_reference = [\n",
    "            result.title,\n",
    "            result.download_pdf(data_dir),\n",
    "            response.data[0].embedding,\n",
    "        ]\n",
    "\n",
    "        # Write to file\n",
    "        with open(library, \"a\") as f_object:\n",
    "            writer_object = writer(f_object)\n",
    "            writer_object.writerow(file_reference)\n",
    "    return result_list\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "id": "a273117a-7f2d-4979-a3cf-b70dea0e35e9",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T15:04:46.096250Z",
     "iopub.status.busy": "2024-07-19T15:04:46.095631Z",
     "iopub.status.idle": "2024-07-19T15:05:05.939944Z",
     "shell.execute_reply": "2024-07-19T15:05:05.938414Z",
     "shell.execute_reply.started": "2024-07-19T15:04:46.096204Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'title': 'Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review',\n",
       "  'summary': 'This tutorial provides a comprehensive survey of methods for fine-tuning\\ndiffusion models to optimize downstream reward functions. While diffusion\\nmodels are widely known to provide excellent generative modeling capability,\\npractical applications in domains such as biology require generating samples\\nthat maximize some desired metric (e.g., translation efficiency in RNA, docking\\nscore in molecules, stability in protein). In these cases, the diffusion model\\ncan be optimized not only to generate realistic samples but also to explicitly\\nmaximize the measure of interest. Such methods are based on concepts from\\nreinforcement learning (RL). We explain the application of various RL\\nalgorithms, including PPO, differentiable optimization, reward-weighted MLE,\\nvalue-weighted sampling, and path consistency learning, tailored specifically\\nfor fine-tuning diffusion models. We aim to explore fundamental aspects such as\\nthe strengths and limitations of different RL-based fine-tuning algorithms\\nacross various scenarios, the benefits of RL-based fine-tuning compared to\\nnon-RL-based approaches, and the formal objectives of RL-based fine-tuning\\n(target distributions). Additionally, we aim to examine their connections with\\nrelated topics such as classifier guidance, Gflownets, flow-based diffusion\\nmodels, path integral control theory, and sampling from unnormalized\\ndistributions such as MCMC. The code of this tutorial is available at\\nhttps://github.com/masa-ue/RLfinetuning_Diffusion_Bioseq',\n",
       "  'article_url': 'http://arxiv.org/abs/2407.13734v1',\n",
       "  'pdf_url': 'http://arxiv.org/pdf/2407.13734v1'},\n",
       " {'title': 'Correcting the Mythos of KL-Regularization: Direct Alignment without Overparameterization via Chi-squared Preference Optimization',\n",
       "  'summary': \"Language model alignment methods, such as reinforcement learning from human\\nfeedback (RLHF), have led to impressive advances in language model\\ncapabilities, but existing techniques are limited by a widely observed\\nphenomenon known as overoptimization, where the quality of the language model\\nplateaus or degrades over the course of the alignment process. Overoptimization\\nis often attributed to overfitting to an inaccurate reward model, and while it\\ncan be mitigated through online data collection, this is infeasible in many\\nsettings. This raises a fundamental question: Do existing offline alignment\\nalgorithms make the most of the data they have, or can their sample-efficiency\\nbe improved further?\\n  We address this question with a new algorithm for offline alignment,\\n$\\\\chi^2$-Preference Optimization ($\\\\chi$PO). $\\\\chi$PO is a one-line change to\\nDirect Preference Optimization (DPO; Rafailov et al., 2023), which only\\ninvolves modifying the logarithmic link function in the DPO objective. Despite\\nthis minimal change, $\\\\chi$PO implicitly implements the principle of pessimism\\nin the face of uncertainty via regularization with the $\\\\chi^2$-divergence --\\nwhich quantifies uncertainty more effectively than KL-regularization -- and\\nprovably alleviates overoptimization, achieving sample-complexity guarantees\\nbased on single-policy concentrability -- the gold standard in offline\\nreinforcement learning. $\\\\chi$PO's simplicity and strong guarantees make it the\\nfirst practical and general-purpose offline alignment algorithm that is\\nprovably robust to overoptimization.\",\n",
       "  'article_url': 'http://arxiv.org/abs/2407.13399v1',\n",
       "  'pdf_url': 'http://arxiv.org/pdf/2407.13399v1'},\n",
       " {'title': 'Does Refusal Training in LLMs Generalize to the Past Tense?',\n",
       "  'summary': 'Refusal training is widely used to prevent LLMs from generating harmful,\\nundesirable, or illegal outputs. We reveal a curious generalization gap in the\\ncurrent refusal training approaches: simply reformulating a harmful request in\\nthe past tense (e.g., \"How to make a Molotov cocktail?\" to \"How did people make\\na Molotov cocktail?\") is often sufficient to jailbreak many state-of-the-art\\nLLMs. We systematically evaluate this method on Llama-3 8B, GPT-3.5 Turbo,\\nGemma-2 9B, Phi-3-Mini, GPT-4o, and R2D2 models using GPT-3.5 Turbo as a\\nreformulation model. For example, the success rate of this simple attack on\\nGPT-4o increases from 1% using direct requests to 88% using 20 past tense\\nreformulation attempts on harmful requests from JailbreakBench with GPT-4 as a\\njailbreak judge. Interestingly, we also find that reformulations in the future\\ntense are less effective, suggesting that refusal guardrails tend to consider\\npast historical questions more benign than hypothetical future questions.\\nMoreover, our experiments on fine-tuning GPT-3.5 Turbo show that defending\\nagainst past reformulations is feasible when past tense examples are explicitly\\nincluded in the fine-tuning data. Overall, our findings highlight that the\\nwidely used alignment techniques -- such as SFT, RLHF, and adversarial training\\n-- employed to align the studied models can be brittle and do not always\\ngeneralize as intended. We provide code and jailbreak artifacts at\\nhttps://github.com/tml-epfl/llm-past-tense.',\n",
       "  'article_url': 'http://arxiv.org/abs/2407.11969v1',\n",
       "  'pdf_url': 'http://arxiv.org/pdf/2407.11969v1'},\n",
       " {'title': 'New Desiderata for Direct Preference Optimization',\n",
       "  'summary': 'Large language models in the past have typically relied on some form of\\nreinforcement learning with human feedback (RLHF) to better align model\\nresponses with human preferences. However, because of oft-observed\\ninstabilities when implementing these RLHF pipelines, various\\nreparameterization techniques have recently been introduced to sidestep the\\nneed for separately learning an RL reward model. Instead, directly fine-tuning\\nfor human preferences is achieved via the minimization of a single closed-form\\ntraining objective, a process originally referred to as direct preference\\noptimization (DPO) and followed by several notable descendants. Although\\neffective in certain real-world settings, we introduce new evaluation criteria\\nthat serve to highlight unresolved shortcomings in the ability of existing DPO\\nmethods to interpolate between a pre-trained reference model and empirical\\nmeasures of human preferences, as well as unavoidable trade-offs in how low-\\nand high-quality responses are regularized and constraints are handled. Our\\ninsights then motivate an alternative DPO-like loss that provably mitigates\\nthese limitations. Empirical results serve to corroborate notable aspects of\\nour analyses.',\n",
       "  'article_url': 'http://arxiv.org/abs/2407.09072v1',\n",
       "  'pdf_url': 'http://arxiv.org/pdf/2407.09072v1'},\n",
       " {'title': \"Model Surgery: Modulating LLM's Behavior Via Simple Parameter Editing\",\n",
       "  'summary': \"Large Language Models (LLMs) have demonstrated great potential as generalist\\nassistants, showcasing powerful task understanding and problem-solving\\ncapabilities. To deploy LLMs as AI assistants, it is crucial that these models\\nexhibit desirable behavioral traits, such as non-toxicity and resilience\\nagainst jailbreak attempts. Current methods for detoxification or preventing\\njailbreaking usually involve Supervised Fine-Tuning (SFT) or Reinforcement\\nLearning from Human Feedback (RLHF), which requires finetuning billions of\\nparameters through gradient descent with substantial computation cost.\\nFurthermore, models modified through SFT and RLHF may deviate from the\\npretrained models, potentially leading to a degradation in foundational LLM\\ncapabilities. In this paper, we observe that surprisingly, directly editing a\\nsmall subset of parameters can effectively modulate specific behaviors of LLMs,\\nsuch as detoxification and resistance to jailbreaking. Specifically, for a\\nbehavior that we aim to avoid, we employ a linear classifier, which we term the\\nbehavior probe, to classify binary behavior labels within the hidden state\\nspace of the LLM. Using this probe, we introduce an algorithm to identify a\\ncritical subset of LLM parameters that significantly influence this targeted\\nbehavior. Then we directly edit these selected parameters by shifting them\\ntowards the behavior probe. Such a direct parameter editing method necessitates\\nonly inference-level computational resources. Experiments demonstrate that in\\nthe representative detoxification task, our approach achieves reductions of up\\nto 90.0\\\\% in toxicity on the RealToxicityPrompts dataset and 49.2\\\\% on ToxiGen,\\nwhile maintaining the LLM's general capabilities in areas such as common sense,\\nquestion answering, and mathematics. Our code is available at\\nhttps://github.com/lucywang720/model-surgery.\",\n",
       "  'article_url': 'http://arxiv.org/abs/2407.08770v1',\n",
       "  'pdf_url': 'http://arxiv.org/pdf/2407.08770v1'}]"
      ]
     },
     "execution_count": 51,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Test that the search is working\n",
    "result_output = get_articles(\"ppo rlhf\")\n",
    "result_output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "id": "fabb93a6-72f6-419d-a331-5861b547c038",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T15:09:29.217741Z",
     "iopub.status.busy": "2024-07-19T15:09:29.217039Z",
     "iopub.status.idle": "2024-07-19T15:09:29.231051Z",
     "shell.execute_reply": "2024-07-19T15:09:29.229455Z",
     "shell.execute_reply.started": "2024-07-19T15:09:29.217652Z"
    }
   },
   "outputs": [],
   "source": [
    "def strings_ranked_by_relatedness(\n",
    "    query: str,\n",
    "    df: pd.DataFrame,\n",
    "    relatedness_fn=lambda x, y: 1 - spatial.distance.cosine(x, y),\n",
    "    top_n: int = 100,\n",
    ") -> list[str]:\n",
    "    \"\"\"Return up to top_n filepaths, sorted from most related to the query to least related.\"\"\"\n",
    "    query_embedding_response = embedding_request(query)\n",
    "    query_embedding = query_embedding_response.data[0].embedding\n",
    "    strings_and_relatednesses = [\n",
    "        (row[\"filepath\"], relatedness_fn(query_embedding, row[\"embedding\"]))\n",
    "        for i, row in df.iterrows()\n",
    "    ]\n",
    "    strings_and_relatednesses.sort(key=lambda x: x[1], reverse=True)\n",
    "    strings, relatednesses = zip(*strings_and_relatednesses)\n",
    "    return strings[:top_n]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "id": "b68dc776-2dd2-4bb9-a84a-d228388cc2ec",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T15:12:30.965518Z",
     "iopub.status.busy": "2024-07-19T15:12:30.965248Z",
     "iopub.status.idle": "2024-07-19T15:12:30.975794Z",
     "shell.execute_reply": "2024-07-19T15:12:30.975232Z",
     "shell.execute_reply.started": "2024-07-19T15:12:30.965499Z"
    }
   },
   "outputs": [],
   "source": [
    "def read_pdf(filepath):\n",
    "    \"\"\"Takes a filepath to a PDF and returns a string of the PDF's contents\"\"\"\n",
    "    # creating a pdf reader object\n",
    "    reader = PdfReader(filepath)\n",
    "    pdf_text = \"\"\n",
    "    page_number = 0\n",
    "    for page in reader.pages:\n",
    "        page_number += 1\n",
    "        pdf_text += page.extract_text() + f\"\\nPage Number: {page_number}\"\n",
    "    return pdf_text\n",
    "\n",
    "\n",
    "# Split a text into smaller chunks of size n, preferably ending at the end of a sentence\n",
    "def create_chunks(text, n, tokenizer):\n",
    "    \"\"\"Returns successive n-sized chunks from provided text.\"\"\"\n",
    "    tokens = tokenizer.encode(text)\n",
    "    i = 0\n",
    "    while i < len(tokens):\n",
    "        # Find the nearest end of sentence within a range of 0.5 * n and 1.5 * n tokens\n",
    "        j = min(i + int(1.5 * n), len(tokens))\n",
    "        while j > i + int(0.5 * n):\n",
    "            # Decode the tokens and check for full stop or newline\n",
    "            chunk = tokenizer.decode(tokens[i:j])\n",
    "            if chunk.endswith(\".\") or chunk.endswith(\"\\n\"):\n",
    "                break\n",
    "            j -= 1\n",
    "        # If no end of sentence found, use n tokens as the chunk size\n",
    "        if j == i + int(0.5 * n):\n",
    "            j = min(i + n, len(tokens))\n",
    "        yield tokens[i:j]\n",
    "        i = j\n",
    "\n",
    "\n",
    "def extract_chunk(content, template_prompt):\n",
    "    \"\"\"This function applies a prompt to some input content. In this case it returns a summarized chunk of text\"\"\"\n",
    "    prompt = template_prompt + content\n",
    "    response = client.chat.completions.create(\n",
    "        model=GPT_MODEL, messages=[{\"role\": \"user\", \"content\": prompt}], temperature=0\n",
    "    )\n",
    "    return response.choices[0].message.content\n",
    "\n",
    "\n",
    "def summarize_text(query):\n",
    "    \"\"\"This function does the following:\n",
    "    - Reads in the arxiv_library.csv file in including the embeddings\n",
    "    - Finds the closest file to the user's query\n",
    "    - Scrapes the text out of the file and chunks it\n",
    "    - Summarizes each chunk in parallel\n",
    "    - Does one final summary and returns this to the user\"\"\"\n",
    "\n",
    "    # A prompt to dictate how the recursive summarizations should approach the input paper\n",
    "    summary_prompt = \"\"\"Summarize this text from an academic paper. Extract any key points with reasoning.\\n\\nContent:\"\"\"\n",
    "\n",
    "    # If the library is empty (no searches have been performed yet), we perform one and download the results\n",
    "    library_df = pd.read_csv(paper_dir_filepath).reset_index()\n",
    "    if len(library_df) == 0:\n",
    "        print(\"No papers searched yet, downloading first.\")\n",
    "        get_articles(query)\n",
    "        print(\"Papers downloaded, continuing\")\n",
    "        library_df = pd.read_csv(paper_dir_filepath).reset_index()\n",
    "    library_df.columns = [\"title\", \"filepath\", \"embedding\"]\n",
    "    library_df[\"embedding\"] = library_df[\"embedding\"].apply(ast.literal_eval)\n",
    "    strings = strings_ranked_by_relatedness(query, library_df, top_n=1)\n",
    "    print(f\"Chunking text from paper: {strings[0]}\")\n",
    "    pdf_text = read_pdf(strings[0])\n",
    "\n",
    "    # Initialise tokenizer\n",
    "    tokenizer = tiktoken.get_encoding(\"cl100k_base\")\n",
    "    results = \"\"\n",
    "\n",
    "    # Chunk up the document into 1500 token chunks\n",
    "    chunks = create_chunks(pdf_text, 1500, tokenizer)\n",
    "    text_chunks = [tokenizer.decode(chunk) for chunk in chunks]\n",
    "    print(\"Summarizing each chunk of text\")\n",
    "\n",
    "    # Parallel process the summaries\n",
    "    with concurrent.futures.ThreadPoolExecutor(\n",
    "        max_workers=len(text_chunks)\n",
    "    ) as executor:\n",
    "        futures = [\n",
    "            executor.submit(extract_chunk, chunk, summary_prompt)\n",
    "            for chunk in text_chunks\n",
    "        ]\n",
    "        with tqdm(total=len(text_chunks)) as pbar:\n",
    "            for _ in concurrent.futures.as_completed(futures):\n",
    "                pbar.update(1)\n",
    "        for future in futures:\n",
    "            data = future.result()\n",
    "            results += data\n",
    "\n",
    "    # Final summary\n",
    "    print(\"Summarizing into overall summary\")\n",
    "    response = client.chat.completions.create(\n",
    "        model=GPT_MODEL,\n",
    "        messages=[\n",
    "            {\n",
    "                \"role\": \"user\",\n",
    "                \"content\": f\"\"\"Write a summary collated from this collection of key points extracted from an academic paper.\n",
    "                        The summary should highlight the core argument, conclusions and evidence, and answer the user's query.\n",
    "                        User query: {query}\n",
    "                        The summary should be structured in bulleted lists following the headings Core Argument, Evidence, and Conclusions.\n",
    "                        Key points:\\n{results}\\nSummary:\\n\"\"\",\n",
    "            }\n",
    "        ],\n",
    "        temperature=0,\n",
    "    )\n",
    "    return response\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "id": "5d318a03-dcc3-4a71-9cbb-d909a6d54f8e",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-19T15:12:33.465995Z",
     "iopub.status.busy": "2024-07-19T15:12:33.465543Z",
     "iopub.status.idle": "2024-07-19T15:13:38.339151Z",
     "shell.execute_reply": "2024-07-19T15:13:38.336948Z",
     "shell.execute_reply.started": "2024-07-19T15:12:33.465964Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Chunking text from paper: ./data/papers/2407.13734v1.Understanding_Reinforcement_Learning_Based_Fine_Tuning_of_Diffusion_Models__A_Tutorial_and_Review.pdf\n",
      "Summarizing each chunk of text\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 16/16 [00:24<00:00,  1.54s/it]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Summarizing into overall summary\n"
     ]
    }
   ],
   "source": [
    "chat_test_response = summarize_text(\"PPO reinforcement learning sequence generation\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5e3c75c9-b73c-46a1-9edb-cab81eb5a880",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
