---
sidebar_position: 5
---

# How to track token usage

:::info Prerequisites

This guide assumes familiarity with the following concepts:

- [Chat models](/docs/concepts/#chat-models)

:::

This guide goes over how to track token usage for specific calls.

## Using `AIMessage.response_metadata`

A number of model providers return token usage information as part of the chat generation response. When available, this is included in the `AIMessage.response_metadata` field.
Here's an example with OpenAI:

import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/models/chat/token_usage_tracking.ts";

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/openai
```

<CodeBlock language="typescript">{Example}</CodeBlock>

And here's an example with Anthropic:

import AnthropicExample from "@examples/models/chat/token_usage_tracking_anthropic.ts";

```bash npm2yarn
npm install @langchain/anthropic
```

<CodeBlock language="typescript">{AnthropicExample}</CodeBlock>

## Using callbacks

You can also use the `handleLLMEnd` callback to get the full output from the LLM, including token usage for supported models.
Here's an example of how you could do that:

import CallbackExample from "@examples/models/chat/token_usage_tracking_callback.ts";

<CodeBlock language="typescript">{CallbackExample}</CodeBlock>

## Next steps

You've now seen a few examples of how to track chat model token usage for supported providers.

Next, check out the other how-to guides on chat models in this section, like [how to get a model to return structured output](/docs/how_to/structured_output) or [how to add caching to your chat models](/docs/how_to/chat_model_caching).
