{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Output in JSON format\n",
    "### Parameter: format = 'json' or ''\n",
    "\n",
    "Setting format='json' constrains the model to return its answer as valid JSON."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\n",
      "    \"台区编号\": \"16761186\",\n",
      "    \"台区名称\": \"电网_10千伏玉都城四线1016加哈尔巴格住宅楼\",\n",
      "    \"停电记录\": [\n",
      "        {\n",
      "            \"停电时间\": \"12月19日12时05分-12月19日12时06分\",\n",
      "            \"停电原因\": \"客户自行拉合表前开关\",\n",
      "            \"是否供电公司责任\": \"非供电公司责任\"\n",
      "        },\n",
      "        {\n",
      "            \"停电时间\": \"12月19日12时14分-12月19日12时15分\",\n",
      "            \"停电原因\": \"房屋内空气开关容量较小，热水器一经开启造成过负荷频繁跳闸\",\n",
      "            \"是否供电公司责任\": \"非供电公司责任\"\n",
      "        },\n",
      "        {\n",
      "            \"停电时间\": \"12月20日11时09分-12月20日11时10分\",\n",
      "            \"停电原因\": \"房屋内空气开关容量较小，热水器一经开启造成过负荷频繁跳闸\",\n",
      "            \"是否供电公司责任\": \"非供电公司责任\"\n",
      "        }\n",
      "    ]\n",
      "}\n"
     ]
    }
   ],
   "source": [
    "import ollama\n",
    "client = ollama.Client(host='http://192.168.20.43:11434')\n",
    "\n",
    "def llm_chat(sys, query):\n",
    "    system_prompt = sys\n",
    "    prompt = query\n",
    "    response = client.chat(\n",
    "        model='qwen2.5:14b', \n",
    "        format='json',\n",
    "        options={\n",
    "            \"num_ctx\": 32768,  # context window size in tokens\n",
    "            \"temperature\": 0,  # 0 makes the output deterministic\n",
    "            # \"top_p\": 0.9,  # nucleus sampling threshold\n",
    "            # Other options Ollama supports:\n",
    "            # \"num_predict\": 150,  # maximum number of tokens to generate\n",
    "            # \"stop\": [\"\\n\", \"END\"],  # sequences that stop generation\n",
    "            # \"seed\": 42,  # fix the random seed for reproducibility\n",
    "            # \"presence_penalty\": 0.0,  # penalize tokens already present\n",
    "            # \"frequency_penalty\": 0.0,  # penalize frequently used tokens\n",
    "        },\n",
    "        messages=[{'role': 'system', 'content': system_prompt},\n",
    "                  {'role': 'user', 'content': prompt}],\n",
    "    )\n",
    "    return response['message']['content']\n",
    "\n",
    "# Test\n",
    "sys = \"\"\"\n",
    "## 任务\n",
    "你的任务是从以下文本中提取停电相关信息，并按照指定的JSON格式输出。请确保提取的信息完整且格式规范。提取要求如下：\n",
    "\n",
    "1. 提取停电记录，包括停电时间、停电原因和是否供电公司责任\n",
    "2. 停电时间格式统一为\"月日时分\"格式，且需完整记录停电开始和结束时间\n",
    "3. \"是否供电公司责任\"字段统一使用\"供电公司责任\"或\"非供电公司责任\"描述\n",
    "4. 确保提取完整的台区编号\n",
    "5. 确保提取完整的台区名称，台区名称是电力系统中用于唯一标识配电台区的标准化名称，通常包含 电压等级、变电站 / 线路信息、设备属性（公变 / 专变）、具体位置 / 用户 / 设备名称 等关键要素，用于区分不同供电单元。通常与台区编号在一起。\n",
    "6. 台区名称必须显示全称，不得进行任何简化或省略。例如将\"丰达农场公变\"完整显示为\"35千伏乌拉斯台农场变10千伏乌东线（1013）丰达农场公变\"。\n",
    "7. 不得遗漏任何停电记录\n",
    "\n",
    "## 仅输出JSON格式，Like：\n",
    "{\n",
    "    \"台区编号\": \"XXXX\",\n",
    "    \"台区名称\": \"XXXX\",\n",
    "    \"停电记录\": [\n",
    "        {\n",
    "            \"停电时间\": \"MM月DD日HH时mm分-MM月DD日HH时mm分\",\n",
    "            \"停电原因\": \"XXXX\",\n",
    "            \"是否供电公司责任\": \"供电公司责任/非供电公司责任\"\n",
    "        },\n",
    "        ...\n",
    "    ]\n",
    "}\n",
    "\n",
    "\"\"\"\n",
    "query = \"\"\"\n",
    "已处理完毕。12月20日，经【巴音郭楞蒙古自治州且末县】城区供电所工作人员艾力西尔、用电检查人员买尔丹到达现场联系客户后核实，客户反映情况确实存在，但非供电企业责任。现场实际情况是：客户（用户编号：2380991695，用户名称：麦菲娜·亚力坤，用电类别：城镇居民生活用电；台区编号：16761186；台区名称：电网_10千伏玉都城四线1016加哈尔巴格住宅楼）所在小区两个月内因供电企业原因停电0次，非供电企业责任3次，具体停电情况：第一次停电时间为12月19日12时05分-12时06分，客户自行拉合表前开关；第二次停电时间为12月19日12时14分-12时15分；第三次停电时间为12月20日11时09分-11时10分，均因客户家中增加负荷（热水器），房屋内空气开关（客户侧产权）开关容量较小，热水器一经开启造成过负荷频繁跳闸。整改措施：对小区定期开展用电检查工作，对发现设备隐患的问题及时整改，避免此类问题再次发生，已采取工作人员向客户解释清楚并建议客户及时更换大容量空气开关的措施。已于12月25日将处理结果告知客户（19209965304），客户表示满意。（台区经理：帕拉提·加马力 、所在小区：加哈尔巴格小区、附件：现场回访单）\n",
    "\"\"\"\n",
    "response = llm_chat(sys, query)\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Tool calling\n",
    "### Parameter: tools = []\n",
    "\n",
    "Pass a list of tool definitions so the model can decide to call them (function calling)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import ollama\n",
    "client = ollama.Client(host='http://192.168.20.43:11434')\n",
    "\n",
    "def llm_chat(sys, query):\n",
    "    system_prompt = sys\n",
    "    prompt = query\n",
    "    \n",
    "    # Define a tool the model can call to get the current date and time\n",
    "    datetime_tool = {\n",
    "        \"type\": \"function\",\n",
    "        \"function\": {\n",
    "            \"name\": \"get_current_datetime\",\n",
    "            \"description\": \"获取当前日期和时间.\",\n",
    "            \"parameters\": {}\n",
    "        }\n",
    "    }\n",
    "    \n",
    "    response = client.chat(\n",
    "        model='qwen2.5:14b', \n",
    "        tools=[datetime_tool],  # register the tools the model may call\n",
    "        # stream=False,  # streaming is off by default; tool calls arrive in one response\n",
    "        options={\n",
    "            \"num_ctx\": 32768,  # context window size in tokens\n",
    "            \"temperature\": 0,  # 0 makes tool selection deterministic\n",
    "            # \"top_p\": 0.9,  # nucleus sampling threshold\n",
    "        },\n",
    "        messages=[{'role': 'system', 'content': system_prompt},\n",
    "                  {'role': 'user', 'content': prompt}],\n",
    "    )\n",
    "    # return response  # return the full response\n",
    "    return response['message']['tool_calls']    # return only the tool calls\n",
    "\n",
    "# Example call\n",
    "response = llm_chat(\"You are a helpful assistant.\", \"现在几号了？\")\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Streaming responses\n",
    "### Parameter: stream = True\n",
    "\n",
    "Setting stream=True makes the call return a Python generator; each item yielded is one chunk of the streamed response."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "当然可以，这里给您推荐一种非常受欢迎的小吃名字：“麻辣诱惑”。这个名字既带有地方特色（比如四川、重庆等地的麻辣风味），又具有很强的吸引力和创意。当然，根据您想要表达的具体风味或者地域特点，还可以进行相应的调整或创造新的名称哦！如果您有特定的食物类型或者是想结合某种独特的口味，请告诉我更多的细节，我可以提供更加贴切的名字建议。"
     ]
    }
   ],
   "source": [
    "import ollama\n",
    "client = ollama.Client(host='http://192.168.20.43:11434')\n",
    "\n",
    "stream = client.chat(\n",
    "    model='qwen2.5:14b', \n",
    "    stream=True,\n",
    "    messages=[\n",
    "        {\n",
    "            'role': 'user',\n",
    "            'content': '请给我一种小吃的名字',\n",
    "        },\n",
    "    ],\n",
    ")\n",
    "\n",
    "for chunk in stream:\n",
    "    print(chunk['message']['content'], end='', flush=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Controlling randomness\n",
    "### Parameter: options={\"seed\": 42}\n",
    "\n",
    "Fixing the seed makes sampling reproducible: rerunning this cell gives identical output, unlike the unseeded cell above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "当然可以，这里给你推荐一个非常受欢迎的小吃名字：“煎饼果子”。这是一种在中国北方地区特别流行的传统街头美食，深受人们喜爱。如果你喜欢尝试不同的风味和口味，也可以考虑“蚵仔煎”（源自台湾）、“章鱼小丸子”（源自日本）或者“土耳其烤肉卷饼”等来自世界各地的美味小吃。你喜欢哪种类型的呢？"
     ]
    }
   ],
   "source": [
    "import ollama\n",
    "\n",
    "client = ollama.Client(host='http://192.168.20.43:11434')\n",
    "\n",
    "stream = client.chat(\n",
    "    model='qwen2.5:14b', \n",
    "    stream=True,\n",
    "    options={\"seed\": 100},  # fix the random seed for reproducible output\n",
    "    messages=[\n",
    "        {\n",
    "            'role': 'user',\n",
    "            'content': '请给我一种小吃的名字',\n",
    "        },\n",
    "    ]\n",
    ")\n",
    "\n",
    "for chunk in stream:\n",
    "    print(chunk['message']['content'], end='', flush=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Controlling text diversity\n",
    "### Parameter: options={\"num_keep\": 1}\n",
    "\n",
    "Note: in Ollama, num_keep is not a sampling parameter. It sets how many tokens from the start of the prompt are retained when the context window overflows; with short prompts it usually has no visible effect, so differences in the outputs below largely reflect sampling randomness.\n",
    "\n",
    "The parameters that actually control sampling diversity are:\n",
    "- top_k: only the k most probable tokens are considered at each step.\n",
    "- top_p: only tokens whose cumulative probability reaches p are considered.\n",
    "- temperature: higher values flatten the token distribution, increasing diversity."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 97,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "num_keep = 1\n",
      "在北京，有很多特色小吃都非常受欢迎，其中“炸酱面”是非常具有代表性和广受好评的一种。炸酱面不仅是一道美味的主食，而且历史悠久、风味独特。\n",
      "\n",
      "炸酱面的主要材料包括黄豆酱和肉末，通常还会搭配一些新鲜蔬菜（如黄瓜丝、心里美萝卜丝等）以及煮熟的面条。制作时先将猪肉或牛肉末与黄豆酱炒制成炸酱，然后配上各种蔬菜和煮好的面条即可食用。其味道浓郁醇厚，香气扑鼻。\n",
      "\n",
      "此外，北京小吃中还有很多其他美味的选择，比如老北京冰糖葫芦、艾窝窝（一种甜点）、驴打滚儿等也都是不错的选择。如果你有机会去北京的话，不妨尝试一下这些地道的小吃吧！\n",
      "\n",
      "num_keep = 50\n",
      "在北京，有着丰富多样的传统小吃，其中一种非常受欢迎的小吃是“炸酱面”。炸酱面不仅是京城的特色美食之一，还承载着浓厚的地方文化和历史底蕴。它的特点是面条爽滑有劲道，而炸酱则是用黄豆酱、甜面酱等经过小火慢炖至浓稠，加上肉末和其他配料制成，口感香醇浓郁。\n",
      "\n",
      "除了炸酱面外，北京小吃还包括有名的豆汁儿、卤煮火烧、炒肝、艾窝窝等等。每一种都有其独特的味道和制作工艺，反映了老北京人的生活习俗和饮食文化特点。如果你在北京旅游或居住的话，不妨去尝试一下这些地道的美食吧！\n",
      "\n"
     ]
    }
   ],
   "source": [
    "import ollama\n",
    "\n",
    "client = ollama.Client(host='http://192.168.20.43:11434')\n",
    "\n",
    "def chat_with_num_keep(num_keep):\n",
    "    response = client.chat(\n",
    "        model='qwen2.5:14b', \n",
    "        stream=True,\n",
    "        options={\"num_keep\": num_keep},  # prompt tokens retained on context overflow\n",
    "        messages=[\n",
    "            {\n",
    "                'role': 'user',\n",
    "                'content': '推荐一款北京小吃',\n",
    "            },\n",
    "        ]\n",
    "    )\n",
    "    \n",
    "    for chunk in response:\n",
    "        print(chunk['message']['content'], end='', flush=True)\n",
    "    print(\"\\n\")\n",
    "\n",
    "# Compare outputs with different num_keep values\n",
    "print(\"num_keep = 1\")\n",
    "chat_with_num_keep(1)\n",
    "\n",
    "print(\"num_keep = 50\")\n",
    "chat_with_num_keep(50)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Limiting the number of output tokens\n",
    "### Parameter: options={\"num_predict\": 1}\n",
    "\n",
    "num_predict caps how many tokens the model may generate; output is cut off mid-sentence once the limit is reached, as the cell below shows."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "num_predict = 1\n",
      "猫\n",
      "\n",
      "num_predict = 20\n",
      "猫确实是哺乳动物。哺乳动物的特征包括生产乳汁来喂养幼崽，而\n",
      "\n"
     ]
    }
   ],
   "source": [
    "import ollama\n",
    "\n",
    "# Make sure this URL is correct and reachable from your network\n",
    "client = ollama.Client(host='http://192.168.20.43:11434')\n",
    "\n",
    "def chat_with_num_predict(num_predict):\n",
    "    response = client.chat(\n",
    "        model='qwen2.5:14b', \n",
    "        stream=True,\n",
    "        options={\"num_predict\": num_predict},  # cap the number of generated tokens\n",
    "        messages=[\n",
    "            {\n",
    "                'role': 'user',\n",
    "                'content': '判断猫是否是哺乳动物。',\n",
    "            },\n",
    "        ]\n",
    "    )\n",
    "    \n",
    "    for chunk in response:\n",
    "        print(chunk['message']['content'], end='', flush=True)\n",
    "    print(\"\\n\")\n",
    "\n",
    "# Compare outputs with different num_predict values\n",
    "print(\"num_predict = 1\")\n",
    "chat_with_num_predict(1)\n",
    "\n",
    "print(\"num_predict = 20\")\n",
    "chat_with_num_predict(20)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Penalizing output tokens\n",
    "### Parameters:\n",
    "    options = {\n",
    "        \"repeat_penalty\": repeat_penalty,        # repetition penalty\n",
    "        \"presence_penalty\": presence_penalty,    # presence penalty\n",
    "        \"frequency_penalty\": frequency_penalty,  # frequency penalty\n",
    "        \"penalize_newline\": penalize_newline     # penalize newline tokens\n",
    "    }\n",
    "\n",
    "- repeat_penalty: scales down the probability of recently generated tokens; 1.0 means no penalty, values above 1.0 discourage repetition.\n",
    "- presence_penalty: penalizes any token that has already appeared at least once.\n",
    "- frequency_penalty: penalizes tokens in proportion to how often they have appeared.\n",
    "- penalize_newline: if True, newline tokens are penalized as well, discouraging extra line breaks.\n",
    "- repeat_last_n: how far back (in tokens) the repeat penalty looks; 64 by default, -1 means the whole context.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "chat_with_repeat_penalty = 1.5\n",
      "当然可以帮您列出五十位以内的、由唐代诗人李白创作的诗歌题目：\n",
      "\n",
      "1. 《静夜思》\n",
      "2. 将进酒 \n",
      "3. 赠汪伦  \n",
      "4. 独坐敬亭山   \n",
      "5. 秋浦歌十七首·其十四    \n",
      "6. 宣州谢脁楼饯别校书叔云     \n",
      "7. 渡荆门送别      \n",
      "8. 望庐山瀑布水二首\n",
      "9. 夜泊牛渚怀古        \n",
      "10. 黄鹤楼送孟浩然之广陵       \n",
      "11. 梦游天姥吟留別          \n",
      "12. 秋登宣城谢脁北楼  \n",
      "13. 宿五松山下荀媪家     \n",
      "14. 闻王昌龄左迁龙标遥有此寄\n",
      "15. 赠钱征君少阳         \n",
      "16. 下终南山过斛斯山人宿置酒 \n",
      "17. 留别西河刘少府       \n",
      "18. 听蜀僧濬弹琴        \n",
      "19. 陪侍御叔华登楼歌      \n",
      "20. 蜀道难\n",
      "21. 战士篇（又名从军行）   \n",
      "22. 清平调词三首之一  \n",
      "23. 杨叛儿       \n",
      "24. 春思 \n",
      "25. 长干行二首·其一      \n",
      "26. 送友人入蜀        \n",
      "27. 山中问答\n",
      "28. 拟古十二首   \n",
      "29. 行路难三首之一  \n",
      "30. 赠孟浩然       \n",
      "31. 峨眉山月歌 \n",
      "32. 醉后赠王宣州参谋与幕府诸公    \n",
      "33. 金陵酒肆留别\n",
      "34. 江行寄远   \n",
      "35. 大庭库      \n",
      "36. 对雪献从兄虞城宰  \n",
      "37. 下陵阳沿高溪送黄师我赴淮上        \n",
      "38. 题取昌山瀑布峰（又名望庐山东林寺碑） \n",
      "39. 东鲁门泛舟二首·其一\n",
      "40. 过汪氏别业二首   \n",
      "41. 赠僧朝美  \n",
      "42. 拟古十二首之九    \n",
      "43. 鲍防赠李南京（又名陪金陵府相中堂夜宴）\n",
      "44. 答湖州迦叶司马问白是何人\n",
      "45. 奉饯高承制赴歙州   \n",
      "46. 书情题蔡舍人雄阙下\n",
      "47. 醪糟火后添百瓮，山鸟不知呼可园 \n",
      "48. 同王昌龄送族弟襄归桂阳（一作韩冬郎）\n",
      "49. 宣城见杜鹃花  \n",
      "50. 秋夜宿龙门寺\n",
      "\n",
      "请注意，“李白”作为一位历史上的诗人，并非所有流传下来的诗篇都能完全确定是其亲笔所写，这些题目中有些可能是后世模仿或者代笔之作。以上列出的作品大部分被认为是可信的代表作之一。\n",
      "\n",
      "希望这能帮到您！如果需要更多具体信息或详细内容，请告知我进一步的需求吧。\n",
      "\n"
     ]
    }
   ],
   "source": [
    "import ollama\n",
    "\n",
    "# Make sure this URL is correct and reachable from your network\n",
    "client = ollama.Client(host='http://192.168.20.43:11434')\n",
    "\n",
    "def chat_with_repeat_penalty(repeat_penalty):\n",
    "    response = client.chat(\n",
    "        model='qwen2.5:14b', \n",
    "        stream=True,\n",
    "        options={\n",
    "            \"repeat_penalty\": repeat_penalty  # 1.0 = no penalty; >1.0 discourages repetition, values near 0 produce gibberish\n",
    "            },  \n",
    "        messages=[\n",
    "            {\n",
    "                'role': 'user',\n",
    "                'content': '请给我50首李白写的诗。一定要是50首，只要名字，但不要省略',\n",
    "            },\n",
    "        ]\n",
    "    )\n",
    "    \n",
    "    for chunk in response:\n",
    "        print(chunk['message']['content'], end='', flush=True)\n",
    "    print(\"\\n\")\n",
    "\n",
    "# Compare outputs with different repeat_penalty values\n",
    "# print(\"chat_with_repeat_penalty = 0\")\n",
    "# chat_with_repeat_penalty(0.0)\n",
    "\n",
    "# print(\"chat_with_repeat_penalty = 0.7\")\n",
    "# chat_with_repeat_penalty(0.7)\n",
    "\n",
    "print(\"chat_with_repeat_penalty = 1.3\")  # 1.0 is neutral; 1.3 pushes the model toward new wording\n",
    "chat_with_repeat_penalty(1.3)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Stop tokens\n",
    "### Parameter: \"stop\": stop_tokens\n",
    "\n",
    "Generation stops as soon as any of the specified stop sequences appears in the output.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 123,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "我们的退货政策如下：在您收到商品后的7天内（以签收日期为准），如果商品存在质量问题或与描述不符的情况，您可以申请退货退款；请您确保退回的商品及其配件、赠品等保持完整，并不影响二次销售，在此前提下我们将为您办理退货退款手续。如因个人原因需要退货，请您承担寄回的运费，我们会在收到退回商品并确认无误后尽快处理您的退款请求。需要注意的是部分商品不支持7天无理由退换货服务，具体请以页面提示为准。如有其他疑问可以随时联系我们客服进行咨询。\n",
      "\n"
     ]
    }
   ],
   "source": [
    "import ollama\n",
    "\n",
    "client = ollama.Client(host='http://192.168.20.43:11434')\n",
    "\n",
    "def generate_customer_service_reply(system_prompt, user_query, stop_tokens):\n",
    "    messages = [\n",
    "        {'role': 'system', 'content': system_prompt},\n",
    "        {'role': 'user', 'content': user_query},\n",
    "    ]\n",
    "    \n",
    "    response = client.chat(\n",
    "        model='qwen2.5:14b', \n",
    "        stream=True,\n",
    "        options={\n",
    "            \"stop\": stop_tokens,  # sequences that end generation\n",
    "            \"temperature\": 0.7,  # temperature controls randomness\n",
    "        },\n",
    "        messages=messages\n",
    "    )\n",
    "    \n",
    "    for chunk in response:\n",
    "        print(chunk['message']['content'], end='', flush=True)\n",
    "    print(\"\\n\")\n",
    "\n",
    "# Example\n",
    "system_prompt = \"你是一家电商的客户服务代表，回答客户的常见问题。\"\n",
    "user_query = \"我想了解你们的退货政策。\"\n",
    "stop_tokens = [\"感谢您的咨询。\", \"欢迎随时联系\", \"如有任何疑问\", \"如有其它疑问\", \"请随时联系我们\"]  # closing phrases that should end the reply\n",
    "# stop_tokens = [\"\\n\", \"。\"]\n",
    "generate_customer_service_reply(system_prompt, user_query, stop_tokens)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
