{
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "# Using LangSmith\n",
    "You can use [LangSmith](https://www.langchain.com/langsmith) to help track token usage in your LLM application. See the [LangSmith quick start guide](https://docs.smith.langchain.com/).\n",
    "\n",
    "# Using AIMessage.usage_metadata\n",
    "Many model providers return token usage information as part of the chat generation response. When available, this information is included in the AIMessage objects produced by the corresponding model.\n",
    "LangChain AIMessage objects expose a usage_metadata attribute. When populated, this attribute is a UsageMetadata dictionary with standard keys (e.g., \"input_tokens\" and \"output_tokens\").\n",
    "\n",
    "## OpenAI\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "97d33fb8a619155e"
  },
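  {
   "cell_type": "markdown",
   "source": [
    "Because usage_metadata uses standardized keys, you can derive a rough cost estimate from it directly. The sketch below assumes hypothetical per-1K-token prices (INPUT_PRICE and OUTPUT_PRICE are illustrative placeholders, not real qwen-max rates):"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "a1b2c3d4e5f60001"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Hypothetical prices per 1K tokens -- replace with your provider's actual rates.\n",
    "INPUT_PRICE = 0.002\n",
    "OUTPUT_PRICE = 0.006\n",
    "\n",
    "def estimate_cost(usage: dict) -> float:\n",
    "    \"\"\"Estimate cost in USD from a standardized usage_metadata dict.\"\"\"\n",
    "    return (usage[\"input_tokens\"] * INPUT_PRICE + usage[\"output_tokens\"] * OUTPUT_PRICE) / 1000\n",
    "\n",
    "estimate_cost({\"input_tokens\": 19, \"output_tokens\": 9, \"total_tokens\": 28})"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "a1b2c3d4e5f60002",
   "execution_count": null
  },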
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "content='Hello! How can I assist you today?' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 19, 'total_tokens': 28, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None} id='run-ab68aaa1-d86b-456a-9580-a2ec77881b45-0' usage_metadata={'input_tokens': 19, 'output_tokens': 9, 'total_tokens': 28, 'input_token_details': {}, 'output_token_details': {}}\n"
     ]
    },
    {
     "data": {
      "text/plain": "{'input_tokens': 19,\n 'output_tokens': 9,\n 'total_tokens': 28,\n 'input_token_details': {},\n 'output_token_details': {}}"
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import os\n",
    "\n",
    "from dotenv import load_dotenv\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "load_dotenv()\n",
    "\n",
    "llm = ChatOpenAI(\n",
    "    # If the DASHSCOPE_API_KEY environment variable is not set, replace the next line with your Bailian API key: api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\",\n",
    "    temperature=0,\n",
    ")\n",
    "\n",
    "openai_response = llm.invoke(\"hello\")\n",
    "print(openai_response)\n",
    "openai_response.usage_metadata"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-11T04:00:42.276103Z",
     "start_time": "2024-11-11T04:00:38.836669Z"
    }
   },
   "id": "5321861803a30a85",
   "execution_count": 1
  },
  {
   "cell_type": "markdown",
   "source": [
    "## Using AIMessage.response_metadata\n",
    "Metadata from the model response is also included in the [AIMessage response_metadata](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.response_metadata) attribute. This data is generally not standardized. Note that different providers adopt different conventions for representing token counts:"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "3fa3272358ca535e"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "OpenAI: {'completion_tokens': 9, 'prompt_tokens': 19, 'total_tokens': 28, 'completion_tokens_details': None, 'prompt_tokens_details': None}\n"
     ]
    }
   ],
   "source": [
    "print(f'OpenAI: {openai_response.response_metadata[\"token_usage\"]}\\n')\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-06T10:05:22.987736Z",
     "start_time": "2024-11-06T10:05:22.983436Z"
    }
   },
   "id": "c78235f3a189bd4a",
   "execution_count": 3
  },
  {
   "cell_type": "markdown",
   "source": [
    "## Streaming\n",
    "Some providers support token-count metadata in a streaming context.\n",
    "- OpenAI\n",
    "For example, OpenAI returns a message chunk at the end of the stream that contains token usage information. This behavior is supported in langchain-openai >= 0.1.9 and can be enabled by setting **stream_usage=True**. The attribute can also be set when ChatOpenAI is instantiated.\n",
    "Note: by default, the last message chunk in a stream contains a \"finish_reason\" in the message's response_metadata attribute. If we include token usage in streaming mode, an additional chunk carrying the usage metadata is appended to the end of the stream, so \"finish_reason\" appears on the second-to-last message chunk.\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "fda3eb8c77a22eff"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "content='' additional_kwargs={} response_metadata={} id='run-620f559a-8c2a-4c5b-8c2e-b02872e6bc2b'\n",
      "content='Hello!' additional_kwargs={} response_metadata={} id='run-620f559a-8c2a-4c5b-8c2e-b02872e6bc2b'\n",
      "content=' How' additional_kwargs={} response_metadata={} id='run-620f559a-8c2a-4c5b-8c2e-b02872e6bc2b'\n",
      "content=' can' additional_kwargs={} response_metadata={} id='run-620f559a-8c2a-4c5b-8c2e-b02872e6bc2b'\n",
      "content=' I assist you today' additional_kwargs={} response_metadata={} id='run-620f559a-8c2a-4c5b-8c2e-b02872e6bc2b'\n",
      "content='?' additional_kwargs={} response_metadata={} id='run-620f559a-8c2a-4c5b-8c2e-b02872e6bc2b'\n",
      "content='' additional_kwargs={} response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-max'} id='run-620f559a-8c2a-4c5b-8c2e-b02872e6bc2b'\n",
      "content='' additional_kwargs={} response_metadata={} id='run-620f559a-8c2a-4c5b-8c2e-b02872e6bc2b' usage_metadata={'input_tokens': 19, 'output_tokens': 9, 'total_tokens': 28, 'input_token_details': {}, 'output_token_details': {}}\n"
     ]
    }
   ],
   "source": [
    "aggregate = None\n",
    "for chunk in llm.stream(\"hello\", stream_usage=True):\n",
    "    print(chunk)\n",
    "    aggregate = chunk if aggregate is None else aggregate + chunk"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-06T10:05:24.191978Z",
     "start_time": "2024-11-06T10:05:22.990474Z"
    }
   },
   "id": "2d7984c5c0d68904",
   "execution_count": 4
  },
  {
   "cell_type": "markdown",
   "source": [
    "Note that the usage metadata is included in the sum of the individual message chunks:"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "cfc89e910f14e986"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "content='Hello! How can I assist you today?' additional_kwargs={} response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-max'} id='run-620f559a-8c2a-4c5b-8c2e-b02872e6bc2b' usage_metadata={'input_tokens': 19, 'output_tokens': 9, 'total_tokens': 28, 'input_token_details': {}, 'output_token_details': {}}\n",
      "{'input_tokens': 19, 'output_tokens': 9, 'total_tokens': 28, 'input_token_details': {}, 'output_token_details': {}}\n"
     ]
    }
   ],
   "source": [
    "print(aggregate)\n",
    "print(aggregate.usage_metadata)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-06T10:05:24.200960Z",
     "start_time": "2024-11-06T10:05:24.196205Z"
    }
   },
   "id": "2e33a254b66ad03b",
   "execution_count": 5
  },
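  {
   "cell_type": "markdown",
   "source": [
    "If you collect usage_metadata dicts from several separate responses, they can be summed key by key. A minimal helper (pure Python; ignores the nested *_token_details fields for brevity):"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "b2c3d4e5f6a70001"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "def add_usage(a: dict, b: dict) -> dict:\n",
    "    \"\"\"Sum two usage_metadata dicts over the standard top-level keys.\"\"\"\n",
    "    keys = (\"input_tokens\", \"output_tokens\", \"total_tokens\")\n",
    "    return {k: a.get(k, 0) + b.get(k, 0) for k in keys}\n",
    "\n",
    "first = {\"input_tokens\": 19, \"output_tokens\": 9, \"total_tokens\": 28}\n",
    "second = {\"input_tokens\": 206, \"output_tokens\": 26, \"total_tokens\": 232}\n",
    "add_usage(first, second)"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "b2c3d4e5f6a70002",
   "execution_count": null
  },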
  {
   "cell_type": "markdown",
   "source": [
    "Token usage tracking can also be enabled by passing stream_usage=True when the chat model is instantiated.\n",
    "See the example below, where we return output structured to a desired schema, yet can still observe the token usage streamed from intermediate steps."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "3c5a2d48a4e2c803"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "event: on_chain_start\n",
      "event: on_chat_model_start\n",
      "event: on_chat_model_stream\n",
      "event: on_parser_start\n",
      "event: on_chat_model_stream\n",
      "event: on_chat_model_stream\n",
      "event: on_chat_model_stream\n",
      "event: on_chat_model_stream\n",
      "event: on_chat_model_stream\n",
      "event: on_chat_model_stream\n",
      "event: on_parser_stream\n",
      "event: on_chain_stream\n",
      "event: on_chat_model_stream\n",
      "event: on_parser_stream\n",
      "event: on_chain_stream\n",
      "event: on_chat_model_stream\n",
      "event: on_parser_stream\n",
      "event: on_chain_stream\n",
      "event: on_chat_model_stream\n",
      "event: on_chat_model_stream\n",
      "event: on_chat_model_stream\n",
      "event: on_chat_model_end\n",
      "Token usage: {'input_tokens': 206, 'output_tokens': 26, 'total_tokens': 232, 'input_token_details': {}, 'output_token_details': {}}\n",
      "\n",
      "event: on_parser_end\n",
      "event: on_chain_end\n",
      "setup='Why was the math book sad?' punchline='Because it had too many problems.'\n"
     ]
    }
   ],
   "source": [
    "import tiktoken\n",
    "from pydantic import BaseModel, Field\n",
    "\n",
    "\n",
    "class Joke(BaseModel):\n",
    "    \"\"\"A joke to tell the user.\"\"\"\n",
    "    setup: str = Field(description=\"The setup of the joke\")\n",
    "    punchline: str = Field(description=\"The punchline of the joke\")\n",
    "\n",
    "llm = ChatOpenAI(\n",
    "    # If the DASHSCOPE_API_KEY environment variable is not set, replace the next line with your Bailian API key: api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\",\n",
    "    temperature=0,\n",
    "    # Enable token usage statistics\n",
    "    stream_usage=True,\n",
    ")\n",
    "\n",
    "# Return structured output\n",
    "structured_llm = llm.with_structured_output(Joke)\n",
    "\n",
    "async for event in structured_llm.astream_events(\"Tell me a joke\", version=\"v2\"):\n",
    "    print(f\"event: {event['event']}\")\n",
    "    if event[\"event\"] == \"on_chat_model_end\":\n",
    "        print(f'Token usage: {event[\"data\"][\"output\"].usage_metadata}\\n')\n",
    "    elif event[\"event\"] == \"on_chain_end\":\n",
    "        print(event[\"data\"][\"output\"])\n",
    "    else:\n",
    "        pass\n",
    "\n",
    "# tiktoken does not include qwen-max in its model registry, so fall back to a\n",
    "# generic encoding for a rough, approximate client-side token count.\n",
    "enc = tiktoken.get_encoding(\"cl100k_base\")\n",
    "\n",
    "template = \"{input}\"\n",
    "prompt_str = template.replace(\"{input}\", \"Tell me a joke\")\n",
    "prompt_tokens = len(enc.encode(prompt_str))\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-06T10:05:26.656520Z",
     "start_time": "2024-11-06T10:05:24.203127Z"
    }
   },
   "id": "d217c618d42176ef",
   "execution_count": 6
  },
  {
   "cell_type": "markdown",
   "source": [
    "# Using callbacks\n",
    "There are also API-specific callback context managers that allow you to track token usage across multiple calls. They are currently implemented only for the OpenAI API and the Bedrock Anthropic API, and are available in langchain-community.\n",
    "[Simple example](./How%20to%20track%20token%20usage%20for%20LLMs.ipynb); below is a chain or agent with multiple steps, and the callback will track all of them."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "674d4b03cfb87b66"
  },
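  {
   "cell_type": "markdown",
   "source": [
    "Conceptually, such a callback simply accumulates counters across every LLM call made inside the with block. A pure-Python sketch of that bookkeeping (illustrative only, not the actual get_openai_callback implementation):"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "c3d4e5f6a7b80001"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "class UsageTally:\n",
    "    \"\"\"Illustrative accumulator for token counts across multiple calls.\"\"\"\n",
    "\n",
    "    def __init__(self):\n",
    "        self.prompt_tokens = 0\n",
    "        self.completion_tokens = 0\n",
    "\n",
    "    @property\n",
    "    def total_tokens(self):\n",
    "        return self.prompt_tokens + self.completion_tokens\n",
    "\n",
    "    def record(self, usage: dict):\n",
    "        self.prompt_tokens += usage[\"input_tokens\"]\n",
    "        self.completion_tokens += usage[\"output_tokens\"]\n",
    "\n",
    "tally = UsageTally()\n",
    "tally.record({\"input_tokens\": 19, \"output_tokens\": 9})    # first call\n",
    "tally.record({\"input_tokens\": 206, \"output_tokens\": 26})  # second call\n",
    "tally.total_tokens"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "c3d4e5f6a7b80002",
   "execution_count": null
  },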
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['sleep', 'wolfram-alpha', 'google-search', 'google-search-results-json', 'searx-search-results-json', 'bing-search', 'metaphor-search', 'ddg-search', 'google-lens', 'google-serper', 'google-scholar', 'google-finance', 'google-trends', 'google-jobs', 'google-serper-results-json', 'searchapi', 'searchapi-results-json', 'serpapi', 'dalle-image-generator', 'twilio', 'searx-search', 'merriam-webster', 'wikipedia', 'arxiv', 'golden-query', 'pubmed', 'human', 'awslambda', 'stackexchange', 'sceneXplain', 'graphql', 'openweathermap-api', 'dataforseo-api-search', 'dataforseo-api-search-json', 'eleven_labs_text2speech', 'google_cloud_texttospeech', 'read_file', 'reddit_search', 'news-api', 'tmdb-api', 'podcast-api', 'memorize', 'llm-math', 'open-meteo-api', 'requests', 'requests_get', 'requests_post', 'requests_patch', 'requests_put', 'requests_delete', 'terminal']\n"
     ]
    }
   ],
   "source": [
    "from langchain_core.tools import tool\n",
    "from langchain.agents import AgentExecutor, create_tool_calling_agent, load_tools, get_all_tool_names\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "@tool\n",
    "def get_joke(input: str):\n",
    "    \"\"\"Get a joke.\"\"\"\n",
    "    return \"Why did the chicken cross the road?\"\n",
    "\n",
    "prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"You're a helpful assistant\"),\n",
    "        (\"human\", \"{input}\"),\n",
    "        (\"placeholder\", \"{agent_scratchpad}\"),\n",
    "    ]\n",
    ")\n",
    "print(get_all_tool_names())\n",
    "tools = load_tools([\"human\"])\n",
    "agent = create_tool_calling_agent(llm, tools, prompt)\n",
    "agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-06T10:05:28.001506Z",
     "start_time": "2024-11-06T10:05:26.659169Z"
    }
   },
   "id": "ac0dd5090b204be6",
   "execution_count": 7
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "from langchain_community.callbacks import get_openai_callback\n",
    "\n",
    "with get_openai_callback() as cb:\n",
    "    response = agent_executor.invoke(\n",
    "        {\n",
    "            \"input\": \"What are today's top news stories?\"\n",
    "        }\n",
    "    )\n",
    "    print(f\"Total Tokens: {cb.total_tokens}\")\n",
    "    print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
    "    print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
    "    print(f\"Total Cost (USD): ${cb.total_cost}\")"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-06T10:05:28.066244Z",
     "start_time": "2024-11-06T10:05:28.003293Z"
    }
   },
   "id": "6f1762ea48bbb751",
   "execution_count": 8
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
