{
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "# 1. Initialize the ChatGLM model"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "from typing import Any, List\n",
    "from transformers import AutoTokenizer, AutoModel\n",
    "\n",
    "\n",
    "class chatGLM():\n",
    "    def __init__(self, model_name, quantization_bit=4) -> None:\n",
    "        self.tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)\n",
    "        # Quantize before moving to the GPU (as in the ChatGLM-6B README), so the\n",
    "        # full-precision weights never need to fit in GPU memory\n",
    "        model = AutoModel.from_pretrained(model_name, trust_remote_code=True)\n",
    "        self.model = model.quantize(quantization_bit).half().cuda().eval()\n",
    "\n",
    "    def __call__(self, prompt) -> Any:\n",
    "        # This demo uses the blocking chat() API rather than the streaming stream_chat()\n",
    "        response, _ = self.model.chat(self.tokenizer, prompt)\n",
    "        return response"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "llm = chatGLM(model_name=\"THUDM/chatglm-6b\")\n",
    "prompt = \"你好\"  # \"Hello\"\n",
    "response = llm(prompt)\n",
    "print(\"response: %s\" % response)"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "# 2 Use a prompt template to format and generate a new prompt"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "from langchain import PromptTemplate\n",
    "\n",
    "template = \"请给我解释一下{query}的意思\"  # \"Please explain the meaning of {query}\"\n",
    "promptTem = PromptTemplate(input_variables=[\"query\"], template=template)\n",
    "prompt = promptTem.format(query=\"天道酬勤\")  # the idiom \"hard work pays off\"\n",
    "print(prompt)"
   ],
   "metadata": {
    "collapsed": false
   }
  },
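  {
   "cell_type": "markdown",
   "source": [
    "`PromptTemplate` as used here is essentially keyword substitution over the template string. A minimal stand-in built on `str.format` behaves the same way for this notebook's usage (the class name `MiniPromptTemplate` is made up for this sketch and is not part of LangChain):"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "# MiniPromptTemplate is a made-up stand-in for PromptTemplate;\n",
    "# it reproduces the format() behaviour used in this notebook via str.format.\n",
    "class MiniPromptTemplate:\n",
    "    def __init__(self, input_variables, template):\n",
    "        self.input_variables = input_variables\n",
    "        self.template = template\n",
    "\n",
    "    def format(self, **kwargs):\n",
    "        # Reject calls that omit a declared variable, as PromptTemplate does\n",
    "        missing = set(self.input_variables) - set(kwargs)\n",
    "        if missing:\n",
    "            raise KeyError('missing template variables: %s' % sorted(missing))\n",
    "        return self.template.format(**kwargs)\n",
    "\n",
    "\n",
    "tem = MiniPromptTemplate(input_variables=['query'], template='请给我解释一下{query}的意思')\n",
    "print(tem.format(query='天道酬勤'))  # -> 请给我解释一下天道酬勤的意思"
   ],
   "metadata": {
    "collapsed": false
   }
  },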
  {
   "cell_type": "markdown",
   "source": [
    "# 3 Use a chain to link the llm and prompt components\n",
    "chains --------- the chatGLM object does not meet the llm-object requirements of the LLMChain class, so we imitate one.\n",
    "In essence this just connects the prompt-construction step to the model-answering step."
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "from typing import Any\n",
    "\n",
    "\n",
    "class DemoChain():\n",
    "    \"\"\"A minimal chain: format the prompt, then call the llm with it.\"\"\"\n",
    "\n",
    "    def __init__(self, llm, prompt) -> None:\n",
    "        self.llm = llm\n",
    "        self.prompt = prompt\n",
    "\n",
    "    def run(self, query, context=None) -> Any:\n",
    "        if context is not None:\n",
    "            prompt = self.prompt.format(query=query, context=context)\n",
    "        else:\n",
    "            prompt = self.prompt.format(query=query)\n",
    "        print(\"query=%s  -> prompt=%s\" % (query, prompt))\n",
    "        print(\"*\" * 60)\n",
    "        response = self.llm(prompt)\n",
    "        return response\n"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "chain = DemoChain(llm=llm, prompt=promptTem)\n",
    "print(\"-\" * 80)\n",
    "print(chain.run(query=\"天道酬勤\"))\n",
    "print(\"-\" * 80)"
   ],
   "metadata": {
    "collapsed": false
   }
  },
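  {
   "cell_type": "markdown",
   "source": [
    "The chain pattern can be exercised without a GPU by swapping in a stub model. `EchoLLM` and `MiniChain` below are made-up names for this sketch; the stub just echoes its prompt, which makes the chain's two steps (format, then call) visible:"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "# EchoLLM and MiniChain are made-up names for this sketch; the stub LLM\n",
    "# echoes its prompt so the chain's two steps are easy to see.\n",
    "class EchoLLM:\n",
    "    def __call__(self, prompt):\n",
    "        return 'ECHO: ' + prompt\n",
    "\n",
    "\n",
    "class MiniChain:\n",
    "    def __init__(self, llm, template):\n",
    "        self.llm = llm\n",
    "        self.template = template\n",
    "\n",
    "    def run(self, **kwargs):\n",
    "        prompt = self.template.format(**kwargs)  # step 1: build the prompt\n",
    "        return self.llm(prompt)                  # step 2: ask the model\n",
    "\n",
    "\n",
    "demo = MiniChain(llm=EchoLLM(), template='请给我解释一下{query}的意思')\n",
    "print(demo.run(query='天道酬勤'))  # -> ECHO: 请给我解释一下天道酬勤的意思"
   ],
   "metadata": {
    "collapsed": false
   }
  },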
  {
   "cell_type": "markdown",
   "source": [
    "# 4 Example: embeddings and a vector store\n",
    "So far we only have a direct question-answer flow with no knowledge-base retrieval; a vector store lets us retrieve the relevant passages first and then answer."
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "from langchain.embeddings.huggingface import HuggingFaceEmbeddings\n",
    "import numpy as np\n",
    "\n",
    "# model_name here points at a local copy of the GanymedeNil/text2vec-large-chinese model\n",
    "embeddings = HuggingFaceEmbeddings(model_name=\"GanymedeNil_text2vec-large-chinese\",\n",
    "                                   model_kwargs={'device': \"cuda\"})\n",
    "query_result = embeddings.embed_query(\"天道酬勤\")\n",
    "print(\"embedding query.shape=\", np.array(query_result).shape)\n",
    "\n",
    "# A tiny Chinese knowledge base: three sentences separated by \\n, which the splitter below uses\n",
    "texts = \"\"\" '天道酬勤'并不是鼓励人们不劳而获，而是提醒人们要遵循自然规律，通过不断的努力和付出来追求自己的目标。\\n这种努力不仅仅是指身体上的劳动，\n",
    "也包括精神上的努力和思考，以及学习和适应变化的能力。\\n只要一个人具备这些能力，他就有可能会获得成功。\"\"\"\n"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "Build a splitter that turns the raw text into retrieval-ready Document objects"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "from typing import Any, List\n",
    "\n",
    "from langchain.text_splitter import CharacterTextSplitter\n",
    "from langchain.docstore.document import Document\n",
    "\n",
    "\n",
    "class TextSplitter(CharacterTextSplitter):\n",
    "    def __init__(self, separator: str = \"\\n\\n\", **kwargs: Any):\n",
    "        super().__init__(separator, **kwargs)\n",
    "\n",
    "    def split_text(self, text: str) -> List[Document]:\n",
    "        # Split on single newlines and wrap each piece in a Document with source metadata\n",
    "        texts = text.split(\"\\n\")\n",
    "        texts = [Document(page_content=text, metadata={\"from\": \"filename or book.txt\"}) for text in texts]\n",
    "        return texts\n",
    "\n",
    "\n",
    "text_splitter = TextSplitter()\n",
    "texts = text_splitter.split_text(texts)\n",
    "texts1 = [text.page_content for text in texts]  # plain strings, used below for embed_documents"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "Vectorize the knowledge base, then retrieve the passages most relevant to the query"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "from langchain.vectorstores import FAISS\n",
    "\n",
    "vs_path = \"demo-vs\"\n",
    "\n",
    "# embed_documents is shown for illustration only; FAISS.from_documents embeds the texts itself\n",
    "docs = embeddings.embed_documents(texts1)\n",
    "vector_store = FAISS.from_documents(texts, embeddings)\n",
    "vector_store.save_local(vs_path)\n",
    "\n",
    "vector_store = FAISS.load_local(vs_path, embeddings)\n",
    "# Scores here are L2 distances, so smaller means more similar\n",
    "related_docs_with_score = vector_store.similarity_search_with_score(query=\"天道酬勤\", k=2)"
   ],
   "metadata": {
    "collapsed": false
   }
  },
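  {
   "cell_type": "markdown",
   "source": [
    "What `similarity_search_with_score` does can be sketched from scratch: rank the stored vectors by L2 distance to the query vector and keep the k closest (smaller score means more similar for a flat L2 index). The 3-dimensional vectors and names below are made up for illustration; real text2vec-large-chinese embeddings are 1024-dimensional:"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "# Made-up 3-d 'embeddings' standing in for real 1024-d text2vec vectors\n",
    "index = {\n",
    "    'doc-a': [1.0, 0.0, 0.0],\n",
    "    'doc-b': [0.9, 0.1, 0.0],\n",
    "    'doc-c': [0.0, 1.0, 0.0],\n",
    "}\n",
    "\n",
    "def l2(a, b):\n",
    "    # Euclidean distance: the score a flat L2 index returns\n",
    "    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))\n",
    "\n",
    "def similarity_search_with_score(query_vec, k=2):\n",
    "    # Rank every stored vector by distance to the query and keep the k closest\n",
    "    scored = sorted((l2(query_vec, vec), name) for name, vec in index.items())\n",
    "    return scored[:k]\n",
    "\n",
    "hits = similarity_search_with_score([1.0, 0.0, 0.0], k=2)\n",
    "print(hits)  # doc-a is an exact match (distance 0.0), doc-b is next closest"
   ],
   "metadata": {
    "collapsed": false
   }
  },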
  {
   "cell_type": "markdown",
   "source": [
    "# 5 Build a prompt from the retrieved knowledge"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "context = \"\"\n",
    "for pack in related_docs_with_score:\n",
    "    doc, score = pack\n",
    "    content = doc.page_content\n",
    "    print(\"retrieved knowledge=%s, from=%s, score=%.3f\" % (content, doc.metadata.get(\"from\"), score))\n",
    "    context += content\n",
    "\n",
    "# Build a new context-aware template and query the model again\n",
    "template = \"已知{context}, 请给我解释一下{query}的意思?\"  # \"Given {context}, please explain the meaning of {query}?\"\n",
    "promptTem = PromptTemplate(input_variables=[\"context\", \"query\"], template=template)\n",
    "chain = DemoChain(llm=llm, prompt=promptTem)\n",
    "print(\"-\" * 80)\n",
    "print(chain.run(query=\"天道酬勤\", context=context))\n",
    "print(\"-\" * 80)"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "Sample log from a full run:\n",
    "\n",
    "```\n",
    "response: Hello! Is there anything I can help you with?\n",
    "--------------------------------------------------------------------------------\n",
    "query=天道酬勤  -> prompt=请给我解释一下天道酬勤的意思\n",
    "--------------------------------------------------------------------------------\n",
    "embedding query.shape= (1024,)\n",
    "retrieved knowledge=天道酬勤”并不是鼓励人们不劳而获，而是提醒人们要遵循自然规律，通过不断的努力和付出来追求自己的目标。, from=filename or book.txt, score=373.131\n",
    "retrieved knowledge=这种努力不仅仅是指身体上的劳动，, from=filename or book.txt, score=740.042\n",
    "--------------------------------------------------------------------------------\n",
    "query=天道酬勤  -> prompt=已知天道酬勤”并不是鼓励人们不劳而获，而是提醒人们要遵循自然规律，通过不断的努力和付出来追求自己的目标。这种努力不仅仅是指身体上的劳动，, 请给我解释一下天道酬勤的意思?\n",
    "“天道酬勤” means following the laws of nature and pursuing one's goals through sustained effort; that effort is not only physical labour but also mental and intellectual, together with the ability to learn and adapt. It encourages people not to give up and to keep working towards their goals even when that takes great effort and time; in the end the effort is repaid with a fuller, more meaningful life.\n",
    "\n",
    "The point of the saying is to remind people to persevere in pursuing their goals rather than being lazy or giving up, and to recognise and follow natural laws in order to succeed.\n",
    "```"
   ],
   "metadata": {
    "collapsed": false
   }
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
