{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.llms import Tongyi\n",
    "\n",
    "llm = Tongyi()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 阻塞模式"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'在经济学中，失业和通货膨胀之间的关系是通过一种叫做菲利普斯曲线（Phillips Curve）的理论来描述的。菲利普斯曲线最初由新西兰经济学家A.W.H. 菲利普斯在1958年提出，它表明了失业率和通货膨胀率之间存在反向关系：当失业率下降时，通货膨胀率通常会上升；反之，当失业率上升时，通货膨胀率则可能下降。\\n\\n这个理论的基本逻辑是，当经济接近充分就业时，劳动力市场紧张，工人有更大的议价能力，工资上涨压力增大，企业为了维持利润，可能会提高产品价格，从而导致通货膨胀。相反，如果失业率高，劳动力市场宽松，工资增长缓慢，通货膨胀压力也会减轻。\\n\\n然而，后来的经济学家发现，这种简单的菲利普斯曲线关系在长期并不稳定，特别是在美国1970年代的“滞胀”时期，即失业率上升的同时，通货膨胀率也在上升，这挑战了原有的理论。因此，现代经济学中，菲利普斯曲线通常被理解为一个短期关系，并且受到预期通货膨胀、供给冲击、货币政策规则等因素的影响。\\n\\n此外，新古典综合派和理性预期学派的经济学家提出了长期的“垂直菲利普斯曲线”理论，认为在长期中，失业率与通货膨胀率之间不存在稳定的替代关系，因为人们会形成对未来的理性预期，中央银行无法通过持续降低失业率来刺激经济增长而不引发通货膨胀。'"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm.invoke(\"有什么关于失业和通货膨胀的相关性的理论\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 流式模式"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "失业\n",
      "和\n",
      "通\n",
      "货膨胀之间的关系是\n",
      "经济学中的一个重要概念，通常被描述\n",
      "为菲利普斯曲线（Phill\n",
      "ips Curve）的理论。该理论\n",
      "由新西兰经济学家A.W.H.菲\n",
      "利普斯在1958\n",
      "年提出，后来被进一步发展和完善\n",
      "。\n",
      "\n",
      "菲利普斯曲线最初表明\n",
      "，失业率与通货膨胀率\n",
      "之间存在反向关系：当失业\n",
      "率下降时，通货膨胀率\n",
      "通常会上升；反之，当失业\n",
      "率上升时，通货膨胀率\n",
      "会下降。这是因为在一个经济体系中\n",
      "，如果需求增加，企业为了吸引\n",
      "工人可能会提高工资，这将导致\n",
      "成本上升，进而引发通货膨胀\n",
      "。同时，由于就业增加，消费者\n",
      "购买力增强，也会推高物价\n",
      "。\n",
      "\n",
      "然而，这个理论在20\n",
      "世纪70年代的“滞胀\n",
      "”时期受到了挑战，即失业率\n",
      "上升的同时通货膨胀率也在上升\n",
      "，这超出了菲利普斯\n",
      "曲线的预测范围。经济学家开始提出\n",
      "“长期菲利普斯曲线”\n",
      "或“预期调整的菲利普\n",
      "斯曲线”，认为如果人们预期到\n",
      "通货膨胀，他们会在合同谈判\n",
      "中要求更高的工资，形成一种螺旋\n",
      "式的通货膨胀压力，即使失业\n",
      "率没有下降，通货膨胀也可能\n",
      "发生。\n",
      "\n",
      "此外，现代宏观经济学中的\n",
      "新凯恩斯主义模型也考虑\n",
      "了失业和通货膨胀的关系，\n",
      "但更加强调了预期、名义\n",
      "刚性和政策规则等因素的影响。这些\n",
      "理论认为，中央银行可以通过调整货币政策\n",
      "来影响通货膨胀和失业的\n",
      "组合，但可能无法在长期内\n",
      "实现零失业和零通货膨胀\n",
      "的“金发女孩状态”（\n",
      "既不热也不冷的经济状态\n",
      "）。\n",
      "\n",
      "总的来说，失业和通货\n",
      "膨胀的关系是复杂的，受到多种因素\n",
      "的影响，包括经济结构、市场预期\n",
      "、政策决策等。\n",
      "\n"
     ]
    }
   ],
   "source": [
    "for chunk in llm.stream(\n",
    "    \"有什么关于失业和通货膨胀的相关性的理论\"\n",
    "):\n",
    "    print(chunk, end=\"\\n\", flush=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['李白，字太白，号青莲居士，是唐朝时期著名的浪漫主义诗人，被誉为“诗仙”。他的诗歌才情横溢，想象力丰富，语言豪放，形式多样，深受后人喜爱。\\n\\n李白的诗歌主题广泛，既有描绘壮丽山河的自然诗，也有抒发个人情感的抒情诗，还有反映社会现实的作品。他的诗作如《静夜思》、《望庐山瀑布》、《将进酒》等，都是流传千古的经典之作，展现了他豪放不羁的性格和对自由生活的向往。\\n\\n李白的一生，游历四方，嗜酒如命，性格独立，不拘小节，他的诗风与他的人生经历紧密相连，形成了独特的“李白风格”，对后世文学产生了深远影响。',\n",
       " '苏轼，字子瞻，号东坡居士，是北宋时期著名的文学家、书画家，被后人尊称为“唐宋八大家”之一。他的诗词才情横溢，风格豪放，与父苏洵、弟苏辙并称“三苏”，在中国文学史上有着重要地位。\\n\\n苏轼的诗歌内容广泛，既有对人生哲理的深刻探讨，也有对自然景色的生动描绘，更有对社会现实的直接反映。他的词则开创了豪放派的新风格，情感真挚，意境开阔，如《水调歌头·明月几时有》中的\"但愿人长久，千里共婵娟\"，已成为千古传颂的名句。\\n\\n除了文学，苏轼在书法上也颇有造诣，他的行书被称为“苏体”，影响深远。同时，他在烹饪、书画鉴赏等领域都有独到的见解和贡献，是一位多才多艺的文化巨匠。']"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm.batch(\n",
    "    [\n",
    "        \"简单介绍一个唐代诗人\",\n",
    "        \"简单介绍一个宋代诗人\"\n",
    "    ]\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 自定义LLM"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import Any, Dict, Iterator, List, Mapping, Optional\n",
    "\n",
    "from langchain_core.callbacks.manager import CallbackManagerForLLMRun\n",
    "from langchain_core.language_models.llms import LLM\n",
    "from langchain_core.outputs import GenerationChunk\n",
    "\n",
    "\n",
    "class CustomLLM(LLM):\n",
    "    \"\"\"A custom chat model that echoes the first `n` characters of the input.\n",
    "\n",
    "    When contributing an implementation to LangChain, carefully document\n",
    "    the model including the initialization parameters, include\n",
    "    an example of how to initialize the model and include any relevant\n",
    "    links to the underlying models documentation or API.\n",
    "\n",
    "    Example:\n",
    "\n",
    "        .. code-block:: python\n",
    "\n",
    "            model = CustomChatModel(n=2)\n",
    "            result = model.invoke([HumanMessage(content=\"hello\")])\n",
    "            result = model.batch([[HumanMessage(content=\"hello\")],\n",
    "                                 [HumanMessage(content=\"world\")]])\n",
    "    \"\"\"\n",
    "\n",
    "    n: int\n",
    "    \"\"\"The number of characters from the last message of the prompt to be echoed.\"\"\"\n",
    "\n",
    "    def _call(\n",
    "        self,\n",
    "        prompt: str,\n",
    "        stop: Optional[List[str]] = None,\n",
    "        run_manager: Optional[CallbackManagerForLLMRun] = None,\n",
    "        **kwargs: Any,\n",
    "    ) -> str:\n",
    "        \"\"\"Run the LLM on the given input.\n",
    "\n",
    "        Override this method to implement the LLM logic.\n",
    "\n",
    "        Args:\n",
    "            prompt: The prompt to generate from.\n",
    "            stop: Stop words to use when generating. Model output is cut off at the\n",
    "                first occurrence of any of the stop substrings.\n",
    "                If stop tokens are not supported consider raising NotImplementedError.\n",
    "            run_manager: Callback manager for the run.\n",
    "            **kwargs: Arbitrary additional keyword arguments. These are usually passed\n",
    "                to the model provider API call.\n",
    "\n",
    "        Returns:\n",
    "            The model output as a string. Actual completions SHOULD NOT include the prompt.\n",
    "        \"\"\"\n",
    "        if stop is not None:\n",
    "            raise ValueError(\"stop kwargs are not permitted.\")\n",
    "        return prompt[: self.n]\n",
    "\n",
    "    def _stream(\n",
    "        self,\n",
    "        prompt: str,\n",
    "        stop: Optional[List[str]] = None,\n",
    "        run_manager: Optional[CallbackManagerForLLMRun] = None,\n",
    "        **kwargs: Any,\n",
    "    ) -> Iterator[GenerationChunk]:\n",
    "        \"\"\"Stream the LLM on the given prompt.\n",
    "\n",
    "        This method should be overridden by subclasses that support streaming.\n",
    "\n",
    "        If not implemented, the default behavior of calls to stream will be to\n",
    "        fallback to the non-streaming version of the model and return\n",
    "        the output as a single chunk.\n",
    "\n",
    "        Args:\n",
    "            prompt: The prompt to generate from.\n",
    "            stop: Stop words to use when generating. Model output is cut off at the\n",
    "                first occurrence of any of these substrings.\n",
    "            run_manager: Callback manager for the run.\n",
    "            **kwargs: Arbitrary additional keyword arguments. These are usually passed\n",
    "                to the model provider API call.\n",
    "\n",
    "        Returns:\n",
    "            An iterator of GenerationChunks.\n",
    "        \"\"\"\n",
    "        for char in prompt[: self.n]:\n",
    "            chunk = GenerationChunk(text=char)\n",
    "            if run_manager:\n",
    "                run_manager.on_llm_new_token(chunk.text, chunk=chunk)\n",
    "\n",
    "            yield chunk\n",
    "\n",
    "    @property\n",
    "    def _identifying_params(self) -> Dict[str, Any]:\n",
    "        \"\"\"Return a dictionary of identifying parameters.\"\"\"\n",
    "        return {\n",
    "            # The model name allows users to specify custom token counting\n",
    "            # rules in LLM monitoring applications (e.g., in LangSmith users\n",
    "            # can provide per token pricing for their model and monitor\n",
    "            # costs for the given LLM.)\n",
    "            \"model_name\": \"CustomChatModel\",\n",
    "        }\n",
    "\n",
    "    @property\n",
    "    def _llm_type(self) -> str:\n",
    "        \"\"\"Get the type of language model used by this chat model. Used for logging purposes only.\"\"\"\n",
    "        return \"custom\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001B[1mCustomLLM\u001B[0m\n",
      "Params: {'model_name': 'CustomChatModel'}\n"
     ]
    }
   ],
   "source": [
    "llm = CustomLLM(n=5)\n",
    "print(llm)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'This '"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm.invoke(\"This is a foobar thing\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "h|e|l|l|o|"
     ]
    }
   ],
   "source": [
    "for token in llm.stream(\"hello\"):\n",
    "    print(token, end=\"|\", flush=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['woof ', 'meow ']"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm.batch([\"woof woof woof\", \"meow meow meow\"])"
   ]
  },
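  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because `CustomLLM` subclasses `LLM`, it implements the standard `Runnable` interface and composes with LCEL. A minimal sketch (the prompt text is illustrative; with `n=5` the chain echoes back the first five characters of the formatted prompt):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.prompts import PromptTemplate\n",
    "\n",
    "# Compose the custom LLM into an LCEL chain; the formatted prompt\n",
    "# \"Say: hello\" is echoed back truncated to n=5 characters.\n",
    "prompt = PromptTemplate.from_template(\"Say: {text}\")\n",
    "chain = prompt | llm\n",
    "chain.invoke({\"text\": \"hello\"})  # -> 'Say: '"
   ]
  },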
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 缓存"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.globals import set_llm_cache\n",
    "from langchain.cache import InMemoryCache"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 137 ms, sys: 77.9 ms, total: 215 ms\n",
      "Wall time: 1.26 s\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'一朵花为什么很好笑？因为它很有梗。'"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "%%time\n",
    "\n",
    "set_llm_cache(InMemoryCache())\n",
    "\n",
    "# The first time, it is not yet in cache, so it should take longer\n",
    "llm.invoke(\"说一个笑话\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 752 µs, sys: 317 µs, total: 1.07 ms\n",
      "Wall time: 1.03 ms\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'一朵花为什么很好笑？因为它很有梗。'"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "%%time\n",
    "\n",
    "# The second time it is, so it goes faster\n",
    "llm.invoke(\"说一个笑话\")"
   ]
  },
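  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`InMemoryCache` lives only in the current process. As an alternative sketch, `SQLiteCache` persists cached responses to disk so they survive kernel restarts (the database path below is illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.globals import set_llm_cache\n",
    "from langchain_community.cache import SQLiteCache\n",
    "\n",
    "# Store cached completions in a local SQLite file; identical\n",
    "# prompts are then served from disk instead of the API.\n",
    "set_llm_cache(SQLiteCache(database_path=\".langchain.db\"))"
   ]
  },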
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 记录消耗tokens"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Tokens Used: 316\n",
      "\tPrompt Tokens: 18\n",
      "\tCompletion Tokens: 298\n",
      "Successful Requests: 1\n",
      "Total Cost (CYN): ¥0.00632\n"
     ]
    }
   ],
   "source": [
    "from callbacks.manager import get_generic_llms_callback\n",
    "\n",
    "with get_generic_llms_callback() as cb:\n",
    "    llm.invoke(\"有什么关于失业和通货膨胀的相关性的理论\")\n",
    "    print(cb)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "在|经济学|中|，失业和通货|膨胀之间的关系是通过几种主要理论|来解释的：\n",
      "\n",
      "1. |菲利普斯曲线（Phillips| Curve）：这是最经典的理论，|由A.W.菲利普斯|在1958年提出。|该理论认为，失业率和通|货膨胀率之间存在负相关关系|，即当失业率降低时，|通货膨胀率会上升，反之|亦然。这是因为当经济接近充分|就业时，劳动力市场紧张，工资|和物价上涨压力增大，导致通胀|。\n",
      "\n",
      "2. 新古典综合理论（|Neoclassical Synthesis）：|这一理论扩展了菲利普斯|曲线，认为在长期，失业率|会稳定在一个自然水平，这个水平|下的通货膨胀率与失业率|无关。但短期中，政策制定|者可以通过调整货币政策或财政政策在|失业和通胀之间做出权衡。\n",
      "\n",
      "|3. 滞胀理论（|Stagflation Theory）：20|世纪70年代的滞胀现象|（高失业和高通胀并存|）挑战了传统的菲利普斯|曲线。这表明，某些情况下，|如供给冲击（如石油危机），|失业和通胀可能同时上升，打破了|菲利普斯曲线的简单负|相关关系。\n",
      "\n",
      "4. 长|期菲利普斯曲线（Long|-Run Phillips Curve）：新凯|恩斯主义经济学提出了长期菲利|普斯曲线，它认为，虽然|在短期内，政策可以影响失业和|通胀的关系，但在长期，通货|膨胀预期将调整，使得失业率|回到自然失业率，无论通胀如何|变化。\n",
      "\n",
      "这些理论都为理解和分析|失业与通货膨胀的关系提供了框架|，但实际经济情况可能会因各种|因素而复杂化。||Tokens Used: 375\n",
      "\tPrompt Tokens: 18\n",
      "\tCompletion Tokens: 357\n",
      "Successful Requests: 1\n",
      "Total Cost (CYN): ¥0.0075\n"
     ]
    }
   ],
   "source": [
    "from tongyi.llm import CustomTongyi\n",
    "\n",
    "# dashscope_api_key作为参数传入\n",
    "# 或者配置环境变量`DASHSCOPE_API_KEY`\n",
    "llm = CustomTongyi()\n",
    "\n",
    "with get_generic_llms_callback() as cb:\n",
    "    for chunk in llm.stream(\n",
    "    \"有什么关于失业和通货膨胀的相关性的理论\"):\n",
    "        print(chunk, end=\"|\", flush=True)\n",
    "    print(cb)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "langchain",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
