---
title: Using standard tests
sidebarTitle: Standard tests
---

<Icon icon="flask" size={32} />
**Standard tests ensure your integration works as expected.**

Whether you are creating a custom component for your own use or publishing it as a LangChain integration, adding standard tests helps ensure it works as expected. This guide shows you how to add standard tests for each integration type.

## Setup

First, install the required dependencies:

:::python
<CardGroup cols={2}>
    <Card title="langchain-core" icon="cube" href="https://github.com/langchain-ai/langchain/tree/master/libs/core#readme" arrow>
        Defines the interfaces we want to import to define our custom components
    </Card>

    <Card title="langchain-tests" icon="flask" href="https://github.com/langchain-ai/langchain/tree/master/libs/standard-tests#readme" arrow>
        Provides the standard tests and `pytest` plugins necessary to run them
    </Card>
</CardGroup>
:::
:::js
<CardGroup cols={2}>
    <Card title="langchain-core" icon="cube" href="https://github.com/langchain-ai/langchainjs/tree/main/langchain-core#readme" arrow>
        Defines the interfaces we want to import to define our custom components
    </Card>

    <Card title="langchain-tests" icon="flask" href="https://github.com/langchain-ai/langchainjs/tree/main/libs/langchain-standard-tests#readme" arrow>
        Provides the standard tests and plugins necessary to run them
    </Card>
</CardGroup>
:::

:::python
<Warning>
    Because tests added in new versions of `langchain-tests` can break your CI/CD pipelines, we recommend pinning to a specific version of [`langchain-tests`](https://pypi.org/project/langchain-tests/#history) to avoid unexpected changes.
</Warning>
:::

:::python
<CodeGroup>
    ```bash pip
    pip install -U langchain-core
    pip install -U langchain-tests
    ```
    ```bash uv
    uv add langchain-core
    uv add langchain-tests
    ```
</CodeGroup>
:::

:::js
<CodeGroup>
    ```bash npm
    npm install @langchain/core
    npm install @langchain/standard-tests
    ```

    ```bash pnpm
    pnpm add @langchain/core
    pnpm add @langchain/standard-tests
    ```

    ```bash yarn
    yarn add @langchain/core
    yarn add @langchain/standard-tests
    ```

    ```bash bun
    bun add @langchain/core
    bun add @langchain/standard-tests
    ```
</CodeGroup>
:::

The `langchain-tests` package provides two namespaces:

<AccordionGroup>
    <Accordion title="Unit tests" icon="gear">
        :::python
        **Location**: `langchain_tests.unit_tests`
        :::
        :::js
        **Location**: `src.unit_tests`
        :::

        Designed to test the component in isolation and without access to external services
    </Accordion>

    <Accordion title="Integration tests" icon="network-wired">
        :::python
        **Location**: `langchain_tests.integration_tests`
        :::
        :::js
        **Location**: `src.integration_tests`
        :::

        Designed to test the component with access to external services (in particular, the external service that the component is designed to interact with)
    </Accordion>
</AccordionGroup>

:::python
Both types of tests are implemented as [`pytest`](https://docs.pytest.org/en/stable/) class-based test suites.
:::

## Implementing standard tests

Depending on your integration type, you will need to implement unit tests, integration tests, or both.

By subclassing the standard test suite for your integration type, you inherit the full collection of standard tests for that type. For a test run to be successful, a given test should pass only if the model supports the capability being tested; tests for unsupported capabilities should be skipped.

Because different integrations offer unique sets of features, most standard tests provided by LangChain are **opt-in by default** to prevent false positives. Consequently, you will need to override properties to indicate which features your integration supports; see the example below for an illustration.

:::python
```python tests/integration_tests/test_standard.py
# Indicate that a chat model supports image inputs

class TestChatParrotLinkStandard(ChatModelIntegrationTests):
    # ... other required properties

    @property
    def supports_image_inputs(self) -> bool:
        return True  # (The default is False)
```
:::
:::js
```typescript tests/chat_models.standard.int.test.ts
// Indicate that a chat model supports parallel tool calls

class ChatParrotLinkStandardIntegrationTests extends ChatModelIntegrationTests<
    ChatParrotLinkCallOptions,
    AIMessageChunk
> {
    constructor() {
        super({
            // ... other required properties
            supportsParallelToolCalls: true,  // (The default is false)
            // ...
        });
    }
}
```
:::
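The opt-in gating behind these capability flags can be illustrated with a self-contained sketch in plain Python using the standard library's `unittest` module. All class and property names here are hypothetical, not part of `langchain-tests`: a base suite declares a capability property that defaults to off, the inherited test skips when the capability is off, and a subclass opts in by overriding the property.

```python
import unittest


class ExampleModelSuite(unittest.TestCase):
    """Toy base suite mimicking how standard tests gate on capability flags."""

    @property
    def supports_image_inputs(self) -> bool:
        # Opt-in by default: subclasses override this to enable the test.
        return False

    def test_image_inputs(self):
        if not self.supports_image_inputs:
            self.skipTest("image inputs not supported")
        # A real suite would exercise the model with an image message here.
        self.assertTrue(self.supports_image_inputs)


class ParrotLinkSuite(ExampleModelSuite):
    # Opting in: the inherited test_image_inputs now runs instead of skipping.
    @property
    def supports_image_inputs(self) -> bool:
        return True
```

Running the base suite reports the test as skipped, while the subclass runs it normally. The real standard test suites follow the same principle, with `pytest` handling collection and skipping.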

<Note>
    You should typically organize tests in these subdirectories relative to the root of your package:
    - `tests/unit_tests` for unit tests
    - `tests/integration_tests` for integration tests
</Note>

To see the complete list of configurable capabilities and their defaults, visit the API reference for your integration type. Reference pages include code examples for implementing standard tests.

:::python
<Columns cols={2}>
    <Card title="Unit tests" href="https://python.langchain.com/api_reference/standard_tests/unit_tests.html" icon="gear" arrow />
    <Card title="Integration tests" href="https://python.langchain.com/api_reference/standard_tests/integration_tests.html" icon="network-wired" arrow />
</Columns>
:::
:::js
<Columns cols={2}>
    <Card title="Unit tests" href="https://v03.api.js.langchain.com/classes/_langchain_standard_tests.ChatModelUnitTests.html" icon="gear" arrow />
    <Card title="Integration tests" href="https://v03.api.js.langchain.com/classes/_langchain_standard_tests.ChatModelIntegrationTests.html" icon="network-wired" arrow />
</Columns>
:::

Here are some example implementations of standard tests from popular integrations:

<Tabs>
    <Tab title="Unit tests">
        :::python
        <Columns cols={3}>
            <Card title="ChatOpenAI" href="https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/tests/unit_tests/chat_models/test_base_standard.py" arrow>Unit tests</Card>
            <Card title="ChatAnthropic" href="https://github.com/langchain-ai/langchain/blob/master/libs/partners/anthropic/tests/unit_tests/test_standard.py" arrow>Unit tests</Card>
            <Card title="ChatGenAI" href="https://github.com/langchain-ai/langchain-google/blob/main/libs/genai/tests/unit_tests/test_standard.py" arrow>Unit tests</Card>
        </Columns>
        :::
    </Tab>
    <Tab title="Integration tests">
        :::python
        <Columns cols={3}>
            <Card title="ChatOpenAI" href="https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/tests/integration_tests/chat_models/test_base_standard.py" arrow>Integration tests</Card>
            <Card title="ChatAnthropic" href="https://github.com/langchain-ai/langchain/blob/master/libs/partners/anthropic/tests/integration_tests/test_standard.py" arrow>Integration tests</Card>
            <Card title="ChatGenAI" href="https://github.com/langchain-ai/langchain-google/blob/main/libs/genai/tests/integration_tests/test_standard.py" arrow>Integration tests</Card>
        </Columns>
        :::
    </Tab>
</Tabs>

---

## Running tests

:::python
If you bootstrapped your integration from a template, the provided `Makefile` includes targets for running unit and integration tests:

```bash
make test
make integration_test
```

Otherwise, if you follow the recommended directory structure, you can run tests with:

```bash
# Run all tests
uv run --group test pytest tests/unit_tests/
uv run --group test --group test_integration pytest -n auto tests/integration_tests/

# Some unit tests require specific flags and environment variables:
TIKTOKEN_CACHE_DIR=tiktoken_cache uv run --group test pytest --disable-socket --allow-unix-socket tests/unit_tests/

# Run a specific test file
uv run --group test pytest tests/integration_tests/test_chat_models.py

# Run a specific test function in a file
uv run --group test pytest tests/integration_tests/test_chat_models.py::test_chat_completions

# Run a specific test function within a class
uv run --group test pytest tests/integration_tests/test_chat_models.py::TestChatParrotLinkIntegration::test_chat_completions
```
:::
:::js
TODO
:::

## Troubleshooting

:::python
For a full list of the standard test suites that are available, as well as information on which tests are included and how to troubleshoot common issues, see the [Standard Tests API Reference](https://python.langchain.com/api_reference/standard_tests/index.html).

You can find troubleshooting guides under the individual test suites listed in that API Reference. For example, [here is the guide for `ChatModelIntegrationTests.test_usage_metadata`](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.html#langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.test_usage_metadata).
:::
:::js
For a full list of the standard test suites that are available, as well as information on which tests are included and how to troubleshoot common issues, see the [Standard Tests API Reference](https://v03.api.js.langchain.com/modules/_langchain_standard_tests.html).
:::
