---
title: Chat(MODULE_NAME)
---

How to use this Python chat model template:

- [ ] Substitute (MODULE_NAME) with the name of the module, e.g. Anthropic or OpenAI (`command + F` is your friend here)
- [ ] Update links to point to the correct module
- [ ] Under the details and features tables, update the ✅/❌ to reflect the actual capabilities of the chat model
- [ ] Update the PyPI/registry package name if needed
- [ ] Update the API key environment variable name if needed

The template starts below this line...

This guide provides a quick overview for getting started with the (MODULE_NAME) [chat model](/oss/langchain/models). For a detailed listing of all Chat(MODULE_NAME) features, parameters, and configurations, head to the [Chat(MODULE_NAME) API reference](https://python.langchain.com/api_reference/(MODULE_NAME)/chat_models/langchain_(MODULE_NAME).chat_models.Chat(MODULE_NAME).html).

## Overview

### Details

| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/(MODULE_NAME)) | Downloads | Version |
| :--- | :--- | :---: | :---: |  :---: | :---: | :---: |
| [Chat(MODULE_NAME)](https://python.langchain.com/api_reference/(MODULE_NAME)/chat_models/langchain_(MODULE_NAME).chat_models.Chat(MODULE_NAME).html) | [langchain-(MODULE_NAME)](https://python.langchain.com/api_reference/(MODULE_NAME)/index.html) | ✅/❌ | beta/❌ | ✅/❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-(MODULE_NAME)?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-(MODULE_NAME)?style=flat-square&label=%20) |

### Features

| [Tool calling](/oss/langchain/tools) | [Structured output](/oss/langchain/structured-output/) | JSON mode | [Image input](/oss/how-to/multimodal_inputs/) | [Audio input](/oss/langchain/messages#multimodal) | [Video input](/oss/langchain/messages#multimodal) | [Token-level streaming](/oss/langchain/streaming/) | Native async | [Token usage](/oss/how-to/chat_token_usage_tracking/) | [Logprobs](/oss/how-to/logprobs/) |
| :---: | :---: | :---: | :---: |  :---: | :---: | :---: | :---: | :---: | :---: |
| ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ |

---

## Setup

To access (MODULE_NAME) models, you'll need to create an account with (MODULE_NAME), get an API key, and install the `langchain-(MODULE_NAME)` integration package.

### Credentials

```python Set API key icon="key"
import getpass
import os

if "(MODULE_NAME)_API_KEY" not in os.environ:
    os.environ["(MODULE_NAME)_API_KEY"] = getpass.getpass("Enter your (MODULE_NAME) API key: ")
```

To enable automated <Tooltip tip="Log each step of a model's execution to debug and improve it">tracing</Tooltip> of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:

```python Enable tracing icon="flask"
os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
os.environ["LANGSMITH_TRACING"] = "true"
```

### Installation

The LangChain (MODULE_NAME) integration lives in the `langchain-(MODULE_NAME)` package:

<CodeGroup>
    ```bash pip
    pip install -U langchain-(MODULE_NAME)
    ```
    ```bash uv
    uv add langchain-(MODULE_NAME)
    ```
</CodeGroup>

---

## Instantiation

Now we can instantiate our model object and generate responses:

```python Initialize chat model icon="robot"
from langchain_(MODULE_NAME) import Chat(MODULE_NAME)

model = Chat(MODULE_NAME)(
    model="model-name",
    temperature=0,
    timeout=None,
    max_tokens=1024,
    max_retries=2,
    # Other params - see API reference for full list
)
```

---

## Invocation

<CodeGroup>
    ```python Dictionary format icon="book"
    messages = [
        {"role": "system", "content": "You are a poetry expert"},
        {"role": "user", "content": "Write a haiku about spring"},
    ]
    response = model.invoke(messages)
    print(response)
    ```
    ```python Message objects icon="message"
    from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

    messages = [
        SystemMessage("You are a poetry expert"),
        HumanMessage("Write a haiku about spring"),
        AIMessage("Cherry blossoms bloom...")
    ]
    response = model.invoke(messages)
    ```
</CodeGroup>

```text Response object icon="terminal"
TODO - replace with response.model_dump_json(indent=2) or similar
```

```python Text content icon="i-cursor"
print(response.text)

# TODO - replace with the output of response.text
```

```python Content blocks icon="shapes"
print(response.content_blocks)

# TODO - replace with the output of response.content_blocks
```

<Tip>
    Full guides are available on [chat model invocation types](/oss/langchain/models#invocation), [message types](/oss/langchain/messages#message-types), and [content blocks](/oss/langchain/messages#standard-content-blocks).
</Tip>

## TODO: Any functionality specific to this model

Delete if not relevant.

Look at existing model docs for examples, e.g.:

- [ChatAnthropic](/oss/integrations/chat/anthropic)
- [ChatOpenAI](/oss/integrations/chat/openai)
- [ChatGoogleGenerativeAI](/oss/integrations/chat/google_generative_ai)

Examples:
- <Icon icon="wrench" size={16}/> Tool calling
- <Icon icon="brain" size={16}/> Reasoning output / extended reasoning
- <Icon icon="bullhorn" size={16}/> Verbosity
- <Icon icon="quote-right" size={16}/> Citations
- <Icon icon="terminal" size={16}/> Built-in tools
    - <Icon icon="clipboard-question" size={16}/> Web search
    - <Icon icon="code" size={16}/> Code execution
    - <Icon icon="rss" size={16}/> Remote MCP
- <Icon icon="photo-film" size={16}/> Multimodal input / output
- <Icon icon="database" size={16}/> Caching
- <Icon icon="link" size={16}/> Chaining
- <Icon icon="wifi" size={16}/> Streaming usage metadata
- <Icon icon="vial" size={16}/> Fine-tuning
- <Icon icon="microchip" size={16}/> Flex processing
- <Icon icon="timeline" size={16}/> Custom base URL or proxy behavior

---

## API reference

For detailed documentation of all Chat(MODULE_NAME) features and configurations, head to the [Chat(MODULE_NAME) API reference](https://python.langchain.com/api_reference/(MODULE_NAME)/chat_models/langchain_(MODULE_NAME).chat_models.Chat(MODULE_NAME).html).
