{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "abce8543-176a-4a58-953d-5148b2c3a1ac",
   "metadata": {
    "ExecutionIndicator": {
     "show": true
    },
    "execution": {
     "iopub.execute_input": "2025-02-22T01:43:38.742286Z",
     "iopub.status.busy": "2025-02-22T01:43:38.741935Z",
     "iopub.status.idle": "2025-02-22T01:43:39.104973Z",
     "shell.execute_reply": "2025-02-22T01:43:39.104508Z",
     "shell.execute_reply.started": "2025-02-22T01:43:38.742262Z"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "from jinja2 import Template"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "a0bb4014-db65-435e-81ab-b91adc474177",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-02-22T01:43:56.305195Z",
     "iopub.status.busy": "2025-02-22T01:43:56.304817Z",
     "iopub.status.idle": "2025-02-22T01:43:56.309940Z",
     "shell.execute_reply": "2025-02-22T01:43:56.309506Z",
     "shell.execute_reply.started": "2025-02-22T01:43:56.305175Z"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "sys = '''\n",
    "你是一个有帮助的助手，可以访问以下功能函数：\n",
    "{{api_list}}\n",
    "\n",
    "1、请根据用户的对话内容判断是否需要调用功能函数列表中函数以及选择恰当调用的时机。\n",
    "2、要使用这些功能，请回复：  {\"name\": \"function_name\", \"arguments\": {\"arg_1\": \"value_1\", \"arg_1\": \"value_1\", ...}}，name是调用的函数的名称，arguments是函数所需的参数。\n",
    "3、你必须处理的边缘情况： - 如果没有与用户请求匹配的功能，你将礼貌地回应说你无法帮助。\"\n",
    "'''\n",
    "template_sys=Template(sys)"
   ]
  },
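  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f2a9c10-1b2d-4e5f-8a6b-0c1d2e3f4a5b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 演示（假设性示例）：用一个虚构的api列表渲染上面的系统提示模板，\n",
    "# 便于检查最终发给模型的system prompt长什么样。get_weather为虚构api。\n",
    "import json\n",
    "\n",
    "sample_api_list = [{\n",
    "    \"name\": \"get_weather\",\n",
    "    \"description\": \"查询指定城市的天气\",\n",
    "    \"parameters\": {\"type\": \"object\", \"properties\": {\"city\": {\"type\": \"string\"}}, \"required\": [\"city\"]}\n",
    "}]\n",
    "print(template_sys.render(api_list=json.dumps(sample_api_list, ensure_ascii=False)))"
   ]
  },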
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "98306e65-61d0-471e-bb3d-31ccbeed79ae",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-02-22T01:44:00.701452Z",
     "iopub.status.busy": "2025-02-22T01:44:00.701127Z",
     "iopub.status.idle": "2025-02-22T01:44:00.770015Z",
     "shell.execute_reply": "2025-02-22T01:44:00.769516Z",
     "shell.execute_reply.started": "2025-02-22T01:44:00.701431Z"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "data = pd.read_json('./hecheng_api.json')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "ec8cd851-f6ca-4d92-a8d2-b9e8ef3b736c",
   "metadata": {
    "ExecutionIndicator": {
     "show": false
    },
    "execution": {
     "iopub.execute_input": "2025-02-22T01:44:16.882786Z",
     "iopub.status.busy": "2025-02-22T01:44:16.882431Z",
     "iopub.status.idle": "2025-02-22T01:44:16.887644Z",
     "shell.execute_reply": "2025-02-22T01:44:16.887083Z",
     "shell.execute_reply.started": "2025-02-22T01:44:16.882761Z"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "prompt_dialogue = \\\n",
    "\"\"\"下面是一组对话示例，对话过程中ASSISTANT使用到了给定的api:\n",
    "<对话示例>\n",
    "api列表:\n",
    "[\n",
    "{\n",
    "    \"name\": \"generate_password\",\n",
    "    \"description\": \"Generate a random password\",\n",
    "    \"parameters\": {\n",
    "        \"type\": \"object\",\n",
    "        \"properties\": {\n",
    "            \"length\": {\n",
    "                \"type\": \"integer\",\n",
    "                \"description\": \"The length of the password\"\n",
    "            },\n",
    "            \"include_symbols\": {\n",
    "                \"type\": \"boolean\",\n",
    "                \"description\": \"Whether to include symbols in the password\",\n",
    "                \"default\": true\n",
    "            }\n",
    "        },\n",
    "        \"required\": [\n",
    "            \"length\"\n",
    "        ]\n",
    "    }\n",
    "}\n",
    "]\n",
    "<对话示例>\n",
    "[\n",
    "    {\"from\": \"human\", \"value\": \"I need a new password. Can you generate one for me?\"},\n",
    "    {\"from\": \"gpt\", \"value\": \"Of course. How long would you like your password to be? And would you like it to include symbols?\"},\n",
    "    {\"from\": \"human\", \"value\": \" I would like it to be 12 characters long and yes, please include symbols.\"},\n",
    "    {\"from\": \"function_call\", \"value\": \"{\\\"name\\\": \\\"generate_password\\\", \\\"arguments\\\": {\\\"length\\\": 12, \\\"include_symbols\\\": true}}\"},\n",
    "    {\"from\": \"observation\", \"value\": \"{\\\"password\\\": \\\"4&7j#9@1Q6*\\\"}\"},\n",
    "    {\"from\": \"gpt\", \"value\": \"Here is your new password: 4&7j#9@1Q6*. Please make sure to save it in a secure location.\"}\n",
    "]\n",
    "</对话示例>\n",
    "<对话示例说明>\n",
    "针对如上示例的几点说明：\n",
    "（1）其中function_call的回复代表要开始使用api了。回复的信息为json的字符串表示，内容包含了要调用的api和传入参数：\"name\"为要调用的api名字，\"arguments\"里包含了该api传入参数的值\n",
    "（2）observation的回答内容是api调用返回的结果\n",
    "（3）observation之后的gpt的回答，是gpt理解observation之后的回答\n",
    " (4) gpt和function_call的回复在偶数行，human和observation的回复在奇数行\n",
    " (5) api列表中存储多个api定义\n",
    "</对话示例说明>\n",
    "\n",
    "在理解了如上内容后，请根据如下要求生成内容。\n",
    "要求：\n",
    "输出时严格遵守给出的对话样例格式,\n",
    "1. api列表：\n",
    "    {{ api }}\n",
    "2. 对话过程中gpt会随机使用到要求1中的一个api,且对话的轮次在3-5轮,确保gpt和function_call在偶数行human和observation在奇数行。\n",
    "3. 对话内容必须要包含observation的内容，并且这部分内容不能用占位符之类的含糊省略描述，可以想象生成合乎逻辑的api调用返回结果。\n",
    "4. 对话内容连贯自然。对话过程中human不能直接提到1中的api，是否调用api要靠gpt通过逻辑推理自行判断。\n",
    "5. 若human没有提供该api的required传入参数，gpt要主动询问USER这些参数的取值。如果有defalut取值的传入参数human未提供，gpt要告知human这些参数的dafault取值。\n",
    "6. 对话内容为中文。\n",
    "7. human有一定概率在一轮对话中一次性表达意图和提供所有api传入参数值，也有可能分多轮对话给出所有信息。在生成对话的时候最大可能保证human表述方式的多样性。\n",
    "8. 生成{{ num_diags }}组满足要求的对话，格式遵循<对话示例>中的对话部分，要包含human, gpt的内容。不同对话之间gpt的语言表达方式差异尽可能大。直接返回json, 不需要用'''json '''括起来，每组对话为json的一个list元素，不要返回其他内容。\n",
    "\"\"\""
   ]
  },
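  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7d8e9f00-1234-4abc-9def-56789abcdef0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 演示（假设性示例）：用一个虚构的api定义渲染prompt_dialogue，\n",
    "# 预览实际发送给模型的完整提示词的开头部分。\n",
    "sample_api = '[{\"name\": \"get_weather\", \"description\": \"查询城市天气\", \"parameters\": {\"type\": \"object\", \"properties\": {\"city\": {\"type\": \"string\"}}, \"required\": [\"city\"]}}]'\n",
    "preview = Template(prompt_dialogue).render(api=sample_api, num_diags=1)\n",
    "print(preview[:300])"
   ]
  },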
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1d2a4ae6-7a64-4f1b-8b13-6b7e0507a97c",
   "metadata": {
    "ExecutionIndicator": {
     "show": false
    },
    "execution": {
     "iopub.execute_input": "2025-02-19T07:51:20.735702Z",
     "iopub.status.busy": "2025-02-19T07:51:20.735435Z",
     "iopub.status.idle": "2025-02-19T07:56:53.007007Z",
     "shell.execute_reply": "2025-02-19T07:56:53.006011Z",
     "shell.execute_reply.started": "2025-02-19T07:51:20.735685Z"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "import openai\n",
    "from jinja2 import Template\n",
    "# from prompt import prompt_dialogue\n",
    "\n",
    "import json\n",
    "import httpx\n",
    "from typing import List, Dict, Callable, Generator, Union\n",
    "from functools import partial\n",
    "from tqdm import tqdm\n",
    "\n",
    "def configure_openai_client(url: str, api_key: str) -> None:\n",
    "    openai.api_key = api_key\n",
    "    openai.base_url = url\n",
    "    openai.proxy = \"\"\n",
    "\n",
    "\n",
    "class LLMClient:\n",
    "    def __init__(\n",
    "        self,\n",
    "        url: str,\n",
    "        api_key: str = \"EMPTY\",\n",
    "        chat_mode: bool = True,\n",
    "        buffer_size: int = 20,\n",
    "        timeout: int = 60\n",
    "    ):\n",
    "        configure_openai_client(url, api_key)\n",
    "        self.chat_mode = chat_mode\n",
    "        self.buffer_size = buffer_size\n",
    "        self.timeout = timeout\n",
    "        self.llm_function = openai.chat.completions.create\n",
    "\n",
    "    def _prepare_params(\n",
    "        self,\n",
    "        messages: List[Union[Dict, str]],\n",
    "        stream: bool,\n",
    "        **kwargs\n",
    "    ) -> Dict:\n",
    "        params = {\n",
    "            **kwargs,\n",
    "            \"stream\": stream,\n",
    "            \"timeout\": self.timeout\n",
    "        }\n",
    "        if self.chat_mode:\n",
    "            params[\"messages\"] = messages\n",
    "        else:\n",
    "            params[\"prompt\"] = messages\n",
    "        return params\n",
    "\n",
    "    def generate(\n",
    "        self,\n",
    "        messages: List[Union[Dict, str]],\n",
    "        stream: bool = False,\n",
    "        **kwargs\n",
    "    ) -> Union[str, Generator[str, None, None]]:\n",
    "        params = self._prepare_params(messages, stream, **kwargs)\n",
    "        \n",
    "        if not stream:\n",
    "            response = self.llm_function(**params)\n",
    "            return response.choices[0].message.content\n",
    "\n",
    "        return self._handle_streaming_output(params)\n",
    "\n",
    "    def _handle_streaming_output(self, params: Dict) -> Generator[str, None, None]:\n",
    "        buffer = \"\"\n",
    "        for chunk in self.llm_function(**params):\n",
    "            if len(chunk.choices) > 0 and chunk.choices[0].delta.content:\n",
    "                buffer += chunk.choices[0].delta.content\n",
    "                if len(buffer) >= self.buffer_size:\n",
    "                    yield buffer\n",
    "                    buffer = \"\"\n",
    "        if buffer:\n",
    "            yield buffer\n",
    "if __name__ == \"__main__\":\n",
    "    i = 0\n",
    "    df = pd.DataFrame()\n",
    "    for api_l in tqdm(list(data['conversations'])):\n",
    "\n",
    "        t = Template(prompt_dialogue)\n",
    "        prompt = t.render(api=api_l, num_diags=1)\n",
    "        model_url = \"\"\n",
    "        llm_client = LLMClient(model_url, \"\")\n",
    "    \n",
    "        response = llm_client.generate(\n",
    "            messages=[{\"role\": \"user\", \"content\": prompt}],\n",
    "            model='glm-4-air',\n",
    "            stream=False,\n",
    "            temperature = 0.7\n",
    "        )\n",
    "        try:\n",
    "            json.loads(response)\n",
    "        except ValueError:\n",
    "            continue\n",
    "        # print(response)\n",
    "        new_row = pd.DataFrame([{'conversations':response,'tools':api_l}])\n",
    "        df = pd.concat([df,new_row],ignore_index=True)\n",
    "    df.to_json('./str_hecheng_api.json',orient='records',force_ascii=False)"
   ]
  },
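  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0a1b2c3d-4e5f-4678-9abc-def012345678",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 离线演示（假设性示例，不发起任何请求）：指向一个假设的本地端点，\n",
    "# 展示chat_mode如何在\"messages\"与\"prompt\"两种参数形态之间切换。\n",
    "demo_client = LLMClient(\"http://localhost:8000/v1\", \"EMPTY\")\n",
    "chat_params = demo_client._prepare_params([{\"role\": \"user\", \"content\": \"你好\"}], stream=False, model=\"glm-4-air\")\n",
    "print(sorted(chat_params))  # ['messages', 'model', 'stream', 'timeout']\n",
    "\n",
    "demo_client.chat_mode = False\n",
    "comp_params = demo_client._prepare_params(\"你好\", stream=False, model=\"glm-4-air\")\n",
    "print('prompt' in comp_params)  # True"
   ]
  },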
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c9b8c2fb-2263-456b-b9c3-eedc156633bb",
   "metadata": {
    "ExecutionIndicator": {
     "show": true
    },
    "execution": {
     "iopub.status.busy": "2025-02-19T07:56:53.007446Z",
     "iopub.status.idle": "2025-02-19T07:56:53.007636Z",
     "shell.execute_reply": "2025-02-19T07:56:53.007550Z",
     "shell.execute_reply.started": "2025-02-19T07:56:53.007541Z"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "from tqdm import tqdm\n",
    "rows_to_drop = []\n",
    "for i in tqdm(range(0,500)):\n",
    "    try:\n",
    "        json.loads(df['conversations'][i])\n",
    "    except json.JSONDecodeError as e:\n",
    "        rows_to_drop.append(i)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "464680f6-f494-4f81-a528-277a298b834e",
   "metadata": {
    "execution": {
     "iopub.status.busy": "2025-02-19T07:56:53.008446Z",
     "iopub.status.idle": "2025-02-19T07:56:53.008735Z",
     "shell.execute_reply": "2025-02-19T07:56:53.008646Z",
     "shell.execute_reply.started": "2025-02-19T07:56:53.008637Z"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "df.drop(rows_to_drop, inplace=True)\n",
    "df.reset_index(drop=True, inplace=True)"
   ]
  }
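  ,
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9e8d7c6b-5a49-4382-b1c0-fedcba987654",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 事后校验（假设性示例）：检查一组对话是否满足提示词中的行序约束——\n",
    "# human/observation位于奇数行（1-based），gpt/function_call位于偶数行。\n",
    "def roles_ok(conv):\n",
    "    for idx, turn in enumerate(conv):\n",
    "        expected = ('human', 'observation') if idx % 2 == 0 else ('gpt', 'function_call')\n",
    "        if turn.get('from') not in expected:\n",
    "            return False\n",
    "    return True\n",
    "\n",
    "sample = [\n",
    "    {'from': 'human', 'value': '帮我生成一个密码'},\n",
    "    {'from': 'gpt', 'value': '您想要多长的密码？'}\n",
    "]\n",
    "print(roles_ok(sample))  # True"
   ]
  }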
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "xiayou",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
