{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "3beaf591",
   "metadata": {},
   "outputs": [],
   "source": [
    "zhipuai_api_key = \"7e630b66a305a0495f7e81b373986c57.r3MD7stgIOketzM2\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e145aa7e",
   "metadata": {},
   "source": [
    "pip install --upgrade zhipuai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "addeb59b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from zhipuai import ZhipuAI\n",
    "\n",
    "client = ZhipuAI(api_key=zhipuai_api_key)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "83e6ead1",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt = \"广州的天气怎么样？\"\n",
    "response = client.chat.completions.create(\n",
    "    model=\"glm-4\",\n",
    "    messages=[\n",
    "        {\"role\":\"user\",\"content\":\"你好\"},\n",
    "        {\"role\":\"assistant\",\"content\":\"我是人工智能助手\"},\n",
    "        {\"role\":\"user\",\"content\":\"{介绍工具}结合上面的工具，回答问题{prompt}\"}\n",
    "        {\"role\":\"assistant\",\"metadata\":\"weather\",\"content\":\"需要调用weather工具，输入的参数='广州'\"}\n",
    "        {\"role\":\"observation\",\"content\":\"{'text':'晴',tempature:'26'}\"}\n",
    "        {\"role\":\"assistant\",\"content\":\"广州明天天气晴朗，气温26度，非常适合户外活动\"}\n",
    "        \n",
    "    ]\n",
    ")\n",
    "\n",
    "response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "ee20fa9c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Completion(model='glm-4', created=1705581119, choices=[CompletionChoice(index=0, finish_reason='stop', message=CompletionMessage(content='以色列之所以给人以“喜欢战争”的印象，实际上是一个复杂的问题，涉及到历史、地缘政治、安全和国防等多个方面。以下是一些可能的原因：\\n\\n1. 安全威胁：以色列自成立以来就面临着来自周边国家和非国家行为者的安全威胁。由于其地理位置和地区政治环境，以色列认为保持强大的军事力量和采取先发制人的策略是维护国家安全的必要手段。\\n\\n2. 历史经验：以色列历史上的多次中东战争，包括1948年独立战争、1967年的六日战争、1973年的赎罪日战争等，使得以色列社会对军事冲突有深刻的认识和准备。\\n\\n3. 政治和军事战略：以色列采取的是“国防军”模式，即全民兵役制度，这使得国家能够在短时间内动员大量军事力量。此外，以色列政府倾向于采取积极主动的防御策略，以防止潜在敌人的攻击。\\n\\n4. 地缘政治因素：以色列在中东地区是美国的重要盟友，其安全政策和军事行动常常与美国的中东政策相联系，这也可能导致以色列卷入地区冲突。\\n\\n5. 国内政治：以色列国内政治中存在多个派别，不同的政治立场和利益集团可能会影响国家的安全政策和军事行动。\\n\\n6. 资源和领土问题：以色列与周边国家之间存在水资源分配、领土争议等问题，这些问题有时会通过军事手段来解决或加剧。\\n\\n需要注意的是，尽管以色列可能因为上述原因而参与战争或冲突，但这并不意味着以色列“喜欢”战争。任何国家的人民都不希望生活在战争的阴影下，以色列也不例外。大多数以色列人和其他国家的人民一样，渴望和平与稳定。\\n\\n在讨论这类敏感问题时，应该保持客观和全面，避免简化和刻板印象。国际关系和地区冲突是复杂的，涉及多方面的因素和利益。', role='assistant', tool_calls=None))], request_id='8311641773807997845', id='8311641773807997845', usage=CompletionUsage(prompt_tokens=2238, completion_tokens=360, total_tokens=2598))"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "prompt = \"以色列为什么喜欢战争？\"\n",
    "response = client.chat.completions.create(\n",
    "    model=\"glm-4\",\n",
    "    messages=[\n",
    "        {\"role\":\"user\",\"content\":\"你好\"},\n",
    "        {\"role\":\"assistant\",\"content\":\"我是人工智能助手\"},\n",
    "        {\"role\":\"user\",\"content\":prompt}\n",
    "    ]\n",
    ")\n",
    "\n",
    "response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "da3e690a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'以色列之所以给人以“喜欢战争”的印象，实际上是一个复杂的问题，涉及到历史、地缘政治、安全和国防等多个方面。以下是一些可能的原因：\\n\\n1. 安全威胁：以色列自成立以来就面临着来自周边国家和非国家行为者的安全威胁。由于其地理位置和地区政治环境，以色列认为保持强大的军事力量和采取先发制人的策略是维护国家安全的必要手段。\\n\\n2. 历史经验：以色列历史上的多次中东战争，包括1948年独立战争、1967年的六日战争、1973年的赎罪日战争等，使得以色列社会对军事冲突有深刻的认识和准备。\\n\\n3. 政治和军事战略：以色列采取的是“国防军”模式，即全民兵役制度，这使得国家能够在短时间内动员大量军事力量。此外，以色列政府倾向于采取积极主动的防御策略，以防止潜在敌人的攻击。\\n\\n4. 地缘政治因素：以色列在中东地区是美国的重要盟友，其安全政策和军事行动常常与美国的中东政策相联系，这也可能导致以色列卷入地区冲突。\\n\\n5. 国内政治：以色列国内政治中存在多个派别，不同的政治立场和利益集团可能会影响国家的安全政策和军事行动。\\n\\n6. 资源和领土问题：以色列与周边国家之间存在水资源分配、领土争议等问题，这些问题有时会通过军事手段来解决或加剧。\\n\\n需要注意的是，尽管以色列可能因为上述原因而参与战争或冲突，但这并不意味着以色列“喜欢”战争。任何国家的人民都不希望生活在战争的阴影下，以色列也不例外。大多数以色列人和其他国家的人民一样，渴望和平与稳定。\\n\\n在讨论这类敏感问题时，应该保持客观和全面，避免简化和刻板印象。国际关系和地区冲突是复杂的，涉及多方面的因素和利益。'"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "response.choices[0].message.content"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "4d76c88e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "GLM-130B（General Language Modeling）是一种大规模的双语预训练语言模型，它的原理基于Transformer架构，这种架构广泛用于自然语言处理任务，因为它能够有效地捕捉文本中的长距离依赖关系。\n",
      "\n",
      "### GLM-130B的原理：\n",
      "1. **Transformer架构**：GLM-130B采用了Transformer模型，该模型基于自注意力机制，能够处理变长序列数据，并在各个层次捕捉文本中的复杂关系。\n",
      "   \n",
      "2. **自注意力机制**：模型通过自注意力机制，可以同时考虑输入序列中的所有位置，为每个位置的词分配不同的注意力权重，从而更好地理解上下文。\n",
      "\n",
      "3. **预训练任务**：GLM-130B在预训练阶段使用了多种任务，如掩码语言模型（Masked Language Modeling, MLM）和下一句预测（Next Sentence Prediction, NSP），以提高模型对语言的理解能力。\n",
      "\n",
      "4. **双语能力**：GLM-130B特别强调中英双语能力，这意味着它可以同时处理中文和英文数据，为跨语言任务提供支持。\n",
      "\n",
      "5. **参数高效利用**：尽管参数量巨大，GLM-130B通过有效的训练策略和模型设计，使得参数能够高效地学习和表征语言。\n",
      "\n",
      "### 训练参数量：\n",
      "GLM-130B模型的参数量达到了1300亿（130B），这是一个非常庞大的参数规模，使得模型能够捕捉到极其复杂的语言特征和模式。\n",
      "\n",
      "这样的参数规模使得GLM-130B能够处理大规模的数据集，并在多种语言任务上表现出色。同时，为了能够运行在相对实惠的硬件上，GLM-130B还进行了INT4量化，以减少对计算资源的需求，同时尽量保持性能不受损失。\n",
      "\n",
      "总的来说，GLM-130B是一个具有强大表达能力的模型，旨在提供高质量的双语语言理解服务。"
     ]
    }
   ],
   "source": [
    "prompt = \"glm-4原理是什么？使用了多少的参数进行训练？\"\n",
    "response = client.chat.completions.create(\n",
    "    model=\"glm-4\",\n",
    "    messages=[\n",
    "        {\"role\":\"user\",\"content\":\"你好\"},\n",
    "        {\"role\":\"assistant\",\"content\":\"我是人工智能助手\"},\n",
    "        {\"role\":\"user\",\"content\":prompt}\n",
    "    ],\n",
    "    stream=True\n",
    ")\n",
    "\n",
    "for chunk in response:\n",
    "    print(chunk.choices[0].delta.content,end=\"\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "40389a46",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.llms.base import LLM\n",
    "from zhipuai import ZhipuAI\n",
    "zhipuai_api_key = \"689523068d4786094df96f16e512c082.yL5hdh4F1T3GCprW\"\n",
    "\n",
    "\n",
    "class ChatGLM4(LLM):\n",
    "    max_token:int=8192\n",
    "    do_sample:bool = True\n",
    "    temperature:float = 0.7\n",
    "    top_p = 0.0\n",
    "    tokenizer:object = None\n",
    "    model:object = None\n",
    "    history = []\n",
    "    client:object = None\n",
    "    \n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.client = ZhipuAI(api_key=zhipuai_api_key)\n",
    "    \n",
    "    @property\n",
    "    def _llm_type(self):\n",
    "        return \"ChatGLM3\"\n",
    "    \n",
    "        \n",
    "    def _call(self,prompt,history=[],stop=[\"<|user|>\"]):\n",
    "        if history is None:\n",
    "            history=[]\n",
    "        history.append({\"role\":\"user\",\"content\":prompt})\n",
    "        response = self.client.chat.completions.create(\n",
    "            model=\"glm-4\",\n",
    "            messages=history\n",
    "        )\n",
    "\n",
    "        result = response.choices[0].message.content\n",
    "        return result\n",
    "        \n",
    "    def stream(self,prompt,history=[]):\n",
    "        if history is None:\n",
    "            history=[]\n",
    "        history.append({\"role\":\"user\",\"content\":prompt})\n",
    "        response = self.client.chat.completions.create(\n",
    "            model=\"glm-4\",\n",
    "            messages=history,\n",
    "            stream=True\n",
    "        )\n",
    "        for chunk in response:\n",
    "            yield chunk.choices[0].delta.content\n",
    "            \n",
    "    def _extract_tool(self):\n",
    "        #执行工具，触发2种动作，第一种中间调用工具得到答案的动作，第二种，根据最终的调用工作的结果，回答问题\n",
    "        if len(self.history[-1][\"metadata\"]) > 0:\n",
    "            metadata = self.history[-1][\"metadata\"]\n",
    "            content = self.history[-1][\"content\"]\n",
    "            \n",
    "            if \"tool_call\" in content:\n",
    "                for tool in self.tool_names:\n",
    "                    if tool in metadata:\n",
    "                        input_para = content.split(\"=\")[-1].split(\"'\")[0]\n",
    "                        action_json={\n",
    "                            \"action\":tool,\n",
    "                            \"action_input\":input_ara\n",
    "                        }\n",
    "                        self.has_search = True\n",
    "                        \n",
    "                        result = f\"\"\"\n",
    "                        Action:\n",
    "                        ```\n",
    "                        {json.dumps(action_json,ensure_ascii=False)}\n",
    "                        ```\"\"\"\n",
    "        final_answer_json = {\n",
    "            \"action\":\"Final Answer\",\n",
    "            \"action_input\":self.history[-1][\"content\"]    \n",
    "        }\n",
    "        self.has_search = False\n",
    "        return f\"\"\"\n",
    "        Action:\n",
    "        ```\n",
    "        {json.dumps(final_answer_json,ensure_ascii=False)}\n",
    "        ```\"\"\"\n",
    "    \n",
    "    def _tool_history(self,prompt):\n",
    "        #根据调用的工具和内容，生成处理工具的历史聊天内容，让AI能够理解中间所发生的事情。\n",
    "        #包含构造的系统消息，以及query，查询的问题\n",
    "        ans = []\n",
    "        print(prompt)\n",
    "        tool_prompts = prompt.split(\"You have access to the following tools:\\n\\n\")[1].split('\\n\\nUse a json blob')[0].split('\\n')\n",
    "        print(tool_prompts)\n",
    "        tool_names = [ tool.split(\":\")[0] for tool in tool_prompts]\n",
    "        self.tool_names = tool_names\n",
    "        tools_json = []\n",
    "        for i,tool in enumerate(tool_names):\n",
    "            tool_config = tool_config_from_file(tool)\n",
    "            if tool_config:\n",
    "                tools_json.append(tool_config)\n",
    "            else:\n",
    "                ValueError(f\"{tool}工具的配置未找到！提示词的描述是{tool_prompts[i]}\")\n",
    "        \n",
    "        ans.append({\n",
    "            \"role\":\"system\",\n",
    "#             \"content\":\"尽可能回答下面的问题，你可以使用下面这些工具\"\n",
    "            \"content\":\"Answer the following questions as best as you can. Yon have access to the following tools:\",\n",
    "            \"tools\":tools_json\n",
    "        })\n",
    "        query = f\"\"\"{prompt.split(\"Human:\")[-1].strip()}\"\"\"\n",
    "        return ans,query\n",
    "    \n",
    "    \n",
    "    def _extract_observation(self,prompt):\n",
    "        print(\"_extract_observation_prompt\",prompt)\n",
    "        return_json = prompt.split(\"Observation:\")[-1].split(\"\\nThought:\")[0]\n",
    "        self.history.append({\n",
    "            \"role\":\"observation\",\n",
    "            \"content\":return_json\n",
    "        })\n",
    "        return None\n",
    "                        \n",
    "                        \n",
    "                        \n",
    "\n",
    "                        \n",
    "                    \n",
    "                        \n"
   ]
  },
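  {
   "cell_type": "markdown",
   "id": "a7f3c9d2",
   "metadata": {},
   "source": [
    "A minimal, standalone sketch of the string-splitting that `_tool_history` relies on, run against a hypothetical agent prompt shaped like LangChain's structured-chat template (the sample text below is an assumption for illustration, not a real prompt):\n",
    "\n",
    "```python\n",
    "# hypothetical prompt, mimicking the structured-chat template layout\n",
    "sample_prompt = (\n",
    "    \"Respond to the human as helpfully as possible. \"\n",
    "    \"You have access to the following tools:\\n\\n\"\n",
    "    \"weather: 根据位置获取天气数据\\n\"\n",
    "    \"arxiv: search papers on arxiv\\n\\n\"\n",
    "    \"Use a json blob to specify a tool.\\n\"\n",
    "    \"Human: 今天广州天气怎么样？\"\n",
    ")\n",
    "\n",
    "# same splits as _tool_history\n",
    "tool_prompts = (sample_prompt\n",
    "    .split(\"You have access to the following tools:\\n\\n\")[1]\n",
    "    .split(\"\\n\\nUse a json blob\")[0].split(\"\\n\"))\n",
    "tool_names = [tool.split(\":\")[0] for tool in tool_prompts]\n",
    "query = sample_prompt.split(\"Human:\")[-1].strip()\n",
    "print(tool_names)  # ['weather', 'arxiv']\n",
    "print(query)       # 今天广州天气怎么样？\n",
    "```"
   ]
  },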
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "a00020f1",
   "metadata": {},
   "outputs": [],
   "source": [
    "llm = ChatGLM4()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7f451b6c",
   "metadata": {},
   "source": [
    "1. 让大模型根据提示词拆分任务\n",
    "2. 按照可以使用的工具来进行拆分\n",
    "3. 定义有工具可以使用"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "f0fac133",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "广州\n",
      "广州\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'description': '晴', 'temperature': '21'}"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.tools import BaseTool\n",
    "import requests\n",
    "from typing import Any\n",
    "\n",
    "class Weather(BaseTool):\n",
    "    name = \"weather\"\n",
    "    description = \"根据位置获取天气数据\"\n",
    "    \n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        \n",
    "#     async def _arun(self,*args,**kwargs):\n",
    "#         pass\n",
    "        \n",
    "    def get_weather(self,location):\n",
    "        api_key = \"SKcA5FGgmLvN7faJi\"\n",
    "        url = f\"https://api.seniverse.com/v3/weather/now.json?key={api_key}&location={location}&language=zh-Hans&unit=c\"\n",
    "        response = requests.get(url)\n",
    "        print(location)\n",
    "        if response.status_code == 200:\n",
    "            data = response.json()\n",
    "            #print(data)\n",
    "            weather = {\n",
    "                'description':data['results'][0][\"now\"][\"text\"],\n",
    "                'temperature':data['results'][0][\"now\"][\"temperature\"]\n",
    "            }\n",
    "            return weather\n",
    "        else:\n",
    "            raise Exception(f\"失败接收天气信息：{response.status_code}\")\n",
    "    def _run(self,para):\n",
    "        print(para)\n",
    "        return self.get_weather(para)\n",
    "            \n",
    "weather_tool = Weather()\n",
    "weather_tool.run({\"para\":\"广州\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "1e8f460e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "file_path\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'name': 'weather',\n",
       " 'description': '根据地名搜索天气信息',\n",
       " 'parameters': {'type': 'object',\n",
       "  'properties': {'city': {'type': 'string', 'description': '城市的名称'}},\n",
       "  'required': ['city']}}"
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "#读取tool目录的工具配置文件\n",
    "import os\n",
    "import yaml\n",
    "def tool_config_from_file(tool_name,directory=\"Tool/\"):\n",
    "    #讲查询到的工具的配置文件，转为json格式\n",
    "    for filename in os.listdir(directory):\n",
    "        if filename.endswith('.yaml') and tool_name in filename:\n",
    "            file_path = os.path.join(directory,filename)\n",
    "            print(\"file_path\")\n",
    "            with open(file_path,encoding=\"utf-8\") as f:\n",
    "                return yaml.safe_load(f)\n",
    "    #print(\"---\")\n",
    "    return None\n",
    "    \n",
    "tool_config_from_file(\"weather\")"
   ]
  },
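  {
   "cell_type": "markdown",
   "id": "d4e5f6a7",
   "metadata": {},
   "source": [
    "For reference, a `Tool/weather.yaml` that parses to the dict shown above would look roughly like this (reconstructed from that output; treat it as a sketch and adapt the path and fields to your own setup):\n",
    "\n",
    "```yaml\n",
    "name: weather\n",
    "description: 根据地名搜索天气信息\n",
    "parameters:\n",
    "  type: object\n",
    "  properties:\n",
    "    city:\n",
    "      type: string\n",
    "      description: 城市的名称\n",
    "  required:\n",
    "    - city\n",
    "```"
   ]
  },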
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "1ee9d9ac",
   "metadata": {},
   "outputs": [],
   "source": [
    "#处理工具，分割提取工具字符串，生成相应的动作，返回结果\n",
    "\n",
    "from langchain.agents import load_tools,initialize_agent,AgentType"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "5d5b36c5",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[ArxivQueryRun()]"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tool = load_tools([\"arxiv\"],llm=llm)\n",
    "tool"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "4d37b9c4",
   "metadata": {},
   "outputs": [],
   "source": [
    "agent = initialize_agent(\n",
    "    [weather_tool],llm,\n",
    "    agent = AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,\n",
    "    verbose=True,\n",
    "    handle_parsing_errors=True\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "id": "4b2fade9",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3mAction:\n",
      "```json\n",
      "{\n",
      "  \"action\": \"weather\",\n",
      "  \"action_input\": {\"para\": \"广州\"}\n",
      "}\n",
      "```\u001b[0m广州\n",
      "广州\n",
      "\n",
      "Observation: \u001b[36;1m\u001b[1;3m{'description': '晴', 'temperature': '21'}\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3mThe human has asked for the current weather in Guangzhou. The tool has provided the weather information, which is sunny with a temperature of 21 degrees Celsius. I can now provide a final answer.\n",
      "\n",
      "Action:\n",
      "```json\n",
      "{\n",
      "  \"action\": \"Final Answer\",\n",
      "  \"action_input\": \"今天广州的天气是晴天，温度为21摄氏度。\"\n",
      "}\n",
      "```\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'今天广州的天气是晴天，温度为21摄氏度。'"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "agent.run(\"今天广州天气怎么样？\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "id": "cff448e8",
   "metadata": {},
   "outputs": [],
   "source": [
    "agent = initialize_agent(\n",
    "    [tool[0]],llm,\n",
    "    agent = AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,\n",
    "    verbose=True,\n",
    "    handle_parsing_errors=True\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "id": "bc9e5122",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3m```json\n",
      "{\n",
      "  \"action\": \"arxiv\",\n",
      "  \"action_input\": {\"query\": \"ChatGLM model structure\"}\n",
      "}\n",
      "```\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3mPublished: 2023-09-28\n",
      "Title: LLM-Pruner: On the Structural Pruning of Large Language Models\n",
      "Authors: Xinyin Ma, Gongfan Fang, Xinchao Wang\n",
      "Summary: Large language models (LLMs) have shown remarkable capabilities in language\n",
      "understanding and generation. However, such impressive capability typically\n",
      "comes with a substantial model size, which presents significant challenges in\n",
      "both the deployment, inference, and training stages. With LLM being a\n",
      "general-purpose task solver, we explore its compression in a task-agnostic\n",
      "manner, which aims to preserve the multi-task solving and language generation\n",
      "ability of the original LLM. One challenge to achieving this is the enormous\n",
      "size of the training corpus of LLM, which makes both data transfer and model\n",
      "post-training over-burdensome. Thus, we tackle the compression of LLMs within\n",
      "the bound of two constraints: being task-agnostic and minimizing the reliance\n",
      "on the original training dataset. Our method, named LLM-Pruner, adopts\n",
      "structural pruning that selectively removes non-critical coupled structures\n",
      "based on gradient information, maximally preserving the majority of the LLM's\n",
      "functionality. To this end, the performance of pruned models can be efficiently\n",
      "recovered through tuning techniques, LoRA, in merely 3 hours, requiring only\n",
      "50K data. We validate the LLM-Pruner on three LLMs, including LLaMA, Vicuna,\n",
      "and ChatGLM, and demonstrate that the compressed models still exhibit\n",
      "satisfactory capabilities in zero-shot classification and generation. The code\n",
      "is available at: https://github.com/horseee/LLM-Pruner\n",
      "\n",
      "Published: 2023-06-09\n",
      "Title: Customizing General-Purpose Foundation Models for Medical Report Generation\n",
      "Authors: Bang Yang, Asif Raza, Yuexian Zou, Tong Zhang\n",
      "Summary: Medical caption prediction which can be regarded as a task of medical report\n",
      "generation (MRG), requires the automatic generation of coherent and accurate\n",
      "captions for the given medical images. However, the scarcity of labelled\n",
      "medical image-report pairs presents great challenges in the development of deep\n",
      "and large-scale neural networks capable of harnessing the potential artificial\n",
      "general intelligence power like large language models (LLMs). In this work, we\n",
      "propose customizing off-the-shelf general-purpose large-scale pre-trained\n",
      "models, i.e., foundation models (FMs), in computer vision and natural language\n",
      "processing with a specific focus on medical report generation. Specifically,\n",
      "following BLIP-2, a state-of-the-art vision-language pre-training approach, we\n",
      "introduce our encoder-decoder-based MRG model. This model utilizes a\n",
      "lightweight query Transformer to connect two FMs: the giant vision Transformer\n",
      "EVA-ViT-g and a bilingual LLM trained to align with human intentions (referred\n",
      "to as ChatGLM-6B). Furthermore, we conduct ablative experiments on the\n",
      "trainable components of the model to identify the crucial factors for effective\n",
      "transfer learning. Our findings demonstrate that unfreezing EVA-ViT-g to learn\n",
      "medical image representations, followed by parameter-efficient training of\n",
      "ChatGLM-6B to capture the writing styles of medical reports, is essential for\n",
      "achieving optimal results. Our best attempt (PCLmed Team) achieved the 4th and\n",
      "the 2nd, respectively, out of 13 participating teams, based on the BERTScore\n",
      "and ROUGE-1 metrics, in the ImageCLEFmedical Caption 2023 Caption Prediction\n",
      "Task competition.\n",
      "\n",
      "Published: 2024-01-02\n",
      "Title: A Novel Evaluation Framework for Assessing Resilience Against Prompt Injection Attacks in Large Language Models\n",
      "Authors: Daniel Wankit Yip, Aysan Esmradi, Chun Fai Chan\n",
      "Summary: Prompt injection attacks exploit vulnerabilities in large language models\n",
      "(LLMs) to manipulate the model into unintended actions or generate malicious\n",
      "content. As LLM integrated applications gain wider adoption, they face growing\n",
      "susceptibility to such attacks. This study introduces a novel evaluation\n",
      "framework for quantifying the resilience of applications. The framework\n",
      "incorporates innovative techniques des\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3mAction:\n",
      "```json\n",
      "{\n",
      "  \"action\": \"Final Answer\",\n",
      "  \"action_input\": \"ChatGLM的模型结构是一个大型的语言模型，具体的技术细节在公开的文献中并没有详细的描述。但是，根据相关的研究，例如LLM-Pruner的研究，我们可以知道ChatGLM等大型语言模型采用了结构剪枝的方法来减少模型的大小，同时尽量保持模型的功能。此外，还有研究通过定制化通用基础模型，如ChatGLM-6B，用于特定领域如医学报告生成。这些模型通常由编码器-解码器结构组成，并且可能包含视觉Transformer和语言模型等组件。\"\n",
      "}\n",
      "```\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'ChatGLM的模型结构是一个大型的语言模型，具体的技术细节在公开的文献中并没有详细的描述。但是，根据相关的研究，例如LLM-Pruner的研究，我们可以知道ChatGLM等大型语言模型采用了结构剪枝的方法来减少模型的大小，同时尽量保持模型的功能。此外，还有研究通过定制化通用基础模型，如ChatGLM-6B，用于特定领域如医学报告生成。这些模型通常由编码器-解码器结构组成，并且可能包含视觉Transformer和语言模型等组件。'"
      ]
     },
     "execution_count": 47,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "agent.run(\"chatglm的模型的结构是什么？\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "82952c98",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pytorch",
   "language": "python",
   "name": "pytorch"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
