{
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "# How to track token usage for LLMs\n",
    "1. Using LangSmith\n",
    "2. Using callbacks\n",
    "\n",
    "There are some API-specific callback context managers that allow you to track token usage across multiple calls. You'll need to check whether such an integration is available for your particular model.\n",
    "\n",
    "If such an integration is not available for your model, you can create a custom callback manager by adapting the implementation of the [OpenAI callback manager](https://python.langchain.com/api_reference/community/callbacks/langchain_community.callbacks.openai_info.OpenAICallbackHandler.html).\n",
    "\n",
    "\n",
    "## Using Callbacks\n",
    "> The callback handler does not currently support streaming token counts for legacy language models (e.g., langchain_openai.OpenAI). For support in a streaming context, refer to the corresponding guide for chat models.\n",
    "\n",
    "### Single call"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "aaf5603a976471c"
  },
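  {
   "cell_type": "markdown",
   "source": [
    "The adaptation mentioned above can be as small as a handler that reads provider-reported usage in `on_llm_end`. Below is a minimal, hedged sketch; the class and attribute names are illustrative, not part of LangChain's API.\n",
    "\n",
    "```python\n",
    "from langchain_core.callbacks import BaseCallbackHandler\n",
    "\n",
    "class UsageTrackingHandler(BaseCallbackHandler):\n",
    "    # Illustrative sketch: accumulates token usage across calls,\n",
    "    # mirroring the idea behind OpenAICallbackHandler.\n",
    "    def __init__(self):\n",
    "        self.prompt_tokens = 0\n",
    "        self.completion_tokens = 0\n",
    "        self.total_tokens = 0\n",
    "\n",
    "    def on_llm_end(self, response, **kwargs):\n",
    "        # llm_output carries provider-reported usage when available\n",
    "        usage = (response.llm_output or {}).get('token_usage', {})\n",
    "        self.prompt_tokens += usage.get('prompt_tokens', 0)\n",
    "        self.completion_tokens += usage.get('completion_tokens', 0)\n",
    "        self.total_tokens += usage.get('total_tokens', 0)\n",
    "```\n",
    "\n",
    "Attach an instance per call, e.g. `llm.invoke('...', config={'callbacks': [handler]})`, then read the counters off the handler."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "a1c9e2b7f4d10001"
  },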
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "content=\"Sure, here's a light joke for you: \\n\\nWhy don't scientists trust atoms?\\n\\nBecause they make up everything!\" additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 24, 'prompt_tokens': 22, 'total_tokens': 46, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None} id='run-b11da7eb-9f79-4381-afb7-ee79c9a65c83-0' usage_metadata={'input_tokens': 22, 'output_tokens': 24, 'total_tokens': 46, 'input_token_details': {}, 'output_token_details': {}}\n",
      "---\n",
      "\n",
      "Total Tokens: 46\n",
      "Prompt Tokens: 22\n",
      "Completion Tokens: 24\n",
      "Total Cost (USD): $0.0\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.callbacks import get_openai_callback\n",
    "import os\n",
    "\n",
    "from dotenv import load_dotenv\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "load_dotenv()\n",
    "\n",
    "llm = ChatOpenAI(\n",
    "    # If the environment variable is not set, replace the line below with your Bailian (Model Studio) API key: api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\",\n",
    "    temperature=0,\n",
    ")\n",
    "\n",
    "with get_openai_callback() as cb:\n",
    "    result = llm.invoke(\"Tell me a joke\")\n",
    "    print(result)\n",
    "    print(\"---\")\n",
    "print()\n",
    "\n",
    "print(f\"Total Tokens: {cb.total_tokens}\")\n",
    "print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
    "print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
    "print(f\"Total Cost (USD): ${cb.total_cost}\")"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-06T02:59:16.141387Z",
     "start_time": "2024-11-06T02:59:11.079035Z"
    }
   },
   "id": "c30fbac73fbcffa5",
   "execution_count": 1
  },
  {
   "cell_type": "markdown",
   "source": [
    "### Multiple calls\n",
    "Anything inside the context manager will be tracked. Below is an example of using it to track multiple calls in sequence. This also works for agents that may take multiple steps."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "78263db83f0e2daa"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "content=\"Sure, here's a light joke about birds for you:\\n\\nWhy don't birds use Facebook?\\n\\nBecause they already have Twitter!\" additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 25, 'prompt_tokens': 24, 'total_tokens': 49, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None} id='run-a20da2cd-4a19-4e4d-8dd1-001052a9f3e0-0' usage_metadata={'input_tokens': 24, 'output_tokens': 25, 'total_tokens': 49, 'input_token_details': {}, 'output_token_details': {}}\n",
      "--\n",
      "content=\"Sure, here's a fishy joke for you:\\n\\nWhy don't fish play basketball?\\n\\nBecause they're afraid of the net!\" additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 26, 'prompt_tokens': 24, 'total_tokens': 50, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None} id='run-76dc1b5c-f2da-4e13-bc60-0a936c59cf91-0' usage_metadata={'input_tokens': 24, 'output_tokens': 26, 'total_tokens': 50, 'input_token_details': {}, 'output_token_details': {}}\n",
      "\n",
      "---\n",
      "Total Tokens: 99\n",
      "Prompt Tokens: 48\n",
      "Completion Tokens: 51\n",
      "Total Cost (USD): $0.0\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.callbacks import get_openai_callback\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "\n",
    "llm = ChatOpenAI(\n",
    "    # If the environment variable is not set, replace the line below with your Bailian (Model Studio) API key: api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\",\n",
    "    temperature=0,\n",
    ")\n",
    "\n",
    "template = PromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
    "chain = template | llm\n",
    "\n",
    "with get_openai_callback() as cb:\n",
    "    response = chain.invoke({\"topic\": \"birds\"})\n",
    "    print(response)\n",
    "    response = chain.invoke({\"topic\": \"fish\"})\n",
    "    print(\"--\")\n",
    "    print(response)\n",
    "\n",
    "\n",
    "print()\n",
    "print(\"---\")\n",
    "print(f\"Total Tokens: {cb.total_tokens}\")\n",
    "print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
    "print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
    "print(f\"Total Cost (USD): ${cb.total_cost}\")"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-06T03:02:18.015824Z",
     "start_time": "2024-11-06T03:02:11.947199Z"
    }
   },
   "id": "6b9b0891b65185b0",
   "execution_count": 2
  },
  {
   "cell_type": "markdown",
   "source": [
    "## Streaming\n",
    "`get_openai_callback` does not currently support streaming token counts for legacy language models (e.g., langchain_openai.OpenAI). If you want to count tokens correctly in a streaming context, there are several options:\n",
    "- use a chat model as described in [this guide](https://python.langchain.com/docs/how_to/chat_token_usage_tracking/);\n",
    "- implement a [custom callback handler](https://python.langchain.com/docs/how_to/custom_callbacks/) that uses the appropriate tokenizer to count tokens;\n",
    "- use a monitoring platform such as LangSmith."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "e838be65ca25c42d"
  },
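  {
   "cell_type": "markdown",
   "source": [
    "As a sketch of the second option, a custom handler can re-tokenize each streamed chunk client-side. This assumes the `tiktoken` package is installed; the `cl100k_base` encoding is an assumption and only approximates tokenization for non-OpenAI models such as qwen-max.\n",
    "\n",
    "```python\n",
    "import tiktoken\n",
    "from langchain_core.callbacks import BaseCallbackHandler\n",
    "\n",
    "class StreamingTokenCounter(BaseCallbackHandler):\n",
    "    # Illustrative handler: counts completion tokens during streaming\n",
    "    # by encoding each chunk with a local tokenizer.\n",
    "    def __init__(self, encoding_name='cl100k_base'):\n",
    "        self.encoding = tiktoken.get_encoding(encoding_name)\n",
    "        self.completion_tokens = 0\n",
    "\n",
    "    def on_llm_new_token(self, token, **kwargs):\n",
    "        self.completion_tokens += len(self.encoding.encode(token))\n",
    "```\n",
    "\n",
    "Pass an instance via `config={'callbacks': [counter]}` when calling `llm.stream(...)` and read `counter.completion_tokens` after the stream ends."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "f4a7d2e91c080002"
  },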
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "content='' additional_kwargs={} response_metadata={} id='run-303ddfde-c324-4b93-826c-176240b86d10'content='Sure,' additional_kwargs={} response_metadata={} id='run-303ddfde-c324-4b93-826c-176240b86d10'content=' here' additional_kwargs={} response_metadata={} id='run-303ddfde-c324-4b93-826c-176240b86d10'content=\"'s\" additional_kwargs={} response_metadata={} id='run-303ddfde-c324-4b93-826c-176240b86d10'content=' a light joke for' additional_kwargs={} response_metadata={} id='run-303ddfde-c324-4b93-826c-176240b86d10'content=' you:\\n\\nWhy don' additional_kwargs={} response_metadata={} id='run-303ddfde-c324-4b93-826c-176240b86d10'content=\"'t scientists trust atoms\" additional_kwargs={} response_metadata={} id='run-303ddfde-c324-4b93-826c-176240b86d10'content='?\\n\\nBecause they make' additional_kwargs={} response_metadata={} id='run-303ddfde-c324-4b93-826c-176240b86d10'content=' up everything!' additional_kwargs={} response_metadata={} id='run-303ddfde-c324-4b93-826c-176240b86d10'content='' additional_kwargs={} response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-max'} id='run-303ddfde-c324-4b93-826c-176240b86d10'content='' additional_kwargs={} response_metadata={} id='run-303ddfde-c324-4b93-826c-176240b86d10' usage_metadata={'input_tokens': 22, 'output_tokens': 23, 'total_tokens': 45, 'input_token_details': {}, 'output_token_details': {}}"
     ]
    }
   ],
   "source": [
    "from langchain_community.callbacks import get_openai_callback\n",
    "import os\n",
    "\n",
    "from dotenv import load_dotenv\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "load_dotenv()\n",
    "\n",
    "llm = ChatOpenAI(\n",
    "    # If the environment variable is not set, replace the line below with your Bailian (Model Studio) API key: api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\",\n",
    "    temperature=0,\n",
    "    stream_usage=True\n",
    ")\n",
    "\n",
    "with get_openai_callback() as cb:\n",
    "    for chunk in llm.stream(\"Tell me a joke\"):\n",
    "        print(chunk, end=\"\", flush=True)\n",
    "    print()\n",
    "    print(\"---\")\n",
    "    \n",
    "print()\n",
    "\n",
    "print(f\"Total Tokens: {cb.total_tokens}\")\n",
    "print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
    "print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
    "print(f\"Total Cost (USD): ${cb.total_cost}\")"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-06T09:53:32.473362Z",
     "start_time": "2024-11-06T09:53:27.258135Z"
    }
   },
   "id": "cd6327569c957824",
   "execution_count": 1
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
