---
title: Optimize Testing with Feature-Based Testing
sidebar:
  label: Feature-Based Testing
description: Learn how to optimize your testing strategy by breaking down monolithic test projects into feature-scoped tests that run only when needed.
filter: 'type:Guides'
---

Feature-based testing is a strategy that co-locates tests with the features they verify. This approach helps you test only what's changed, reducing unnecessary test execution and improving feedback loops.

## The Problem: Monolithic Test Projects

Many projects have a single, large test project, such as an e2e project, that depends on the entire application. While this setup ensures tests run when any dependency changes, it also means **all tests run even when only one subset of the app changes**.

{% aside type="tip" title="Beyond E2E Tests" %}
While this guide uses end-to-end (e2e) tests as examples, the same strategy applies to any type of testing—integration tests, component tests, or even large unit test suites.
The principles of feature-based testing work wherever you have monolithic test projects that could benefit from being split by feature.
{% /aside %}

Consider a typical setup where all e2e tests live in a single project at the top of the graph:
{% graph height="400px" %}

```json
{
  "projects": [
    {
      "name": "fancy-app-e2e",
      "type": "app",
      "data": {
        "tags": []
      }
    },
    {
      "name": "fancy-app",
      "type": "app",
      "data": {
        "tags": []
      }
    },
    {
      "name": "shared-ui",
      "type": "lib",
      "data": {
        "tags": []
      }
    },
    {
      "name": "feat-cart",
      "type": "lib",
      "data": {
        "tags": []
      }
    },
    {
      "name": "feat-products",
      "type": "lib",
      "data": {
        "tags": []
      }
    }
  ],
  "dependencies": {
    "fancy-app-e2e": [
      { "source": "fancy-app-e2e", "target": "fancy-app", "type": "implicit" }
    ],
    "fancy-app": [
      { "source": "fancy-app", "target": "feat-products", "type": "static" },
      { "source": "fancy-app", "target": "feat-cart", "type": "static" }
    ],
    "shared-ui": [],
    "feat-cart": [
      {
        "source": "feat-cart",
        "target": "shared-ui",
        "type": "static"
      }
    ],
    "feat-products": [
      {
        "source": "feat-products",
        "target": "shared-ui",
        "type": "static"
      }
    ]
  },
  "workspaceLayout": { "appsDir": "", "libsDir": "" },
  "affectedProjectIds": ["feat-cart", "fancy-app-e2e", "fancy-app"],
  "groupByFolder": false,
  "showAffectedWithNodes": true
}
```

{% /graph %}

In this example, when `feat-cart` changes, **all tests in `fancy-app-e2e` run**, which includes tests for `feat-products` along with other unrelated features. This happens because `fancy-app-e2e` depends on the entire application.

Since these features have minimal overlap, you can optimize testing by splitting the monolithic test project into smaller, feature-scoped test projects.

## The Solution: Feature-Scoped Testing

Instead of keeping all tests in one large project, break them down by feature and co-locate them with the feature libraries they test. This way, only the tests for changed features run.

### How to Implement Feature-Based Testing

To set up feature-based testing, add test configurations directly to your feature projects.
Nx provides plugins to automate and speed up the test configuration for common testing tools.
Here are guides for using each plugin's generators:

- [Playwright](/docs/technologies/test-tools/playwright/introduction#add-playwright-e2e-to-an-existing-project)
- [Cypress](/docs/technologies/test-tools/cypress/introduction#configure-cypress-for-an-existing-project)
- [Vitest](/docs/technologies/build-tools/vite/generators#configuration)
- [Jest](/docs/technologies/test-tools/jest/introduction#add-jest-to-a-project)

If there isn't a generator for your testing tool of choice, you can set up the configuration on each feature project manually.
This involves adding the testing framework's configuration files and adding a test target (e.g. `test`, `e2e`) to the project's `project.json` or `package.json`. Typically, these can be copied and lightly modified from the existing top-level monolithic project being split apart.
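As a sketch, a manually configured `e2e` target on a feature project might look like the following `project.json` fragment. The `@nx/playwright:playwright` executor and the config file path are assumptions based on a Playwright setup; substitute the executor and options for your testing tool:

```json
{
  "name": "feat-cart",
  "targets": {
    "e2e": {
      "executor": "@nx/playwright:playwright",
      "options": {
        "config": "feat-cart/playwright.config.ts"
      }
    }
  }
}
```

With a target like this on each feature project, `nx affected -t e2e` can pick up feature-scoped tests the same way it did for the monolithic project.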

With this setup, when you run `nx affected -t e2e`, only the tests for changed features will execute.
For example, when `feat-cart` changes, only `feat-cart:e2e` runs; `feat-products:e2e` is skipped since it wasn't affected.

## Best Practices

### Combining with Automated Task Splitting (Atomizer)

Typically, teams enable [Atomizer, also known as task splitting](/docs/features/ci-features/split-e2e-tasks) as a quick win to improve CI times when using [Nx Agents](/docs/features/ci-features/distribute-task-execution).
Combining both strategies yields the best results.
Here's how they complement each other:

- **Feature-based testing** ensures only relevant feature tests run when code changes.
- **Atomizer** splits each feature's test suite into individual file-level tasks that can be distributed across multiple CI agents.

For example, if `feat-cart` has 10 test files and `feat-products` has 15 test files, when you change the cart feature:

1. Feature-based testing runs only `feat-cart:e2e-ci` (skipping `feat-products:e2e-ci`)
2. Atomizer splits `feat-cart:e2e-ci` into 10 parallel tasks, one per test file
3. These tasks get distributed across your CI agents for faster execution
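If your workspace uses inferred tasks, Atomizer is typically enabled through the testing tool's plugin entry in `nx.json`. Here is a minimal sketch for Playwright; the `targetName` and `ciTargetName` values shown are common defaults, so verify them against your own configuration:

```json
{
  "plugins": [
    {
      "plugin": "@nx/playwright/plugin",
      "options": {
        "targetName": "e2e",
        "ciTargetName": "e2e-ci"
      }
    }
  ]
}
```

The plugin then infers an `e2e-ci` target per feature project, with one sub-task per test file, which is what gets distributed across agents.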

Learn more about [setting up Automated Task Splitting](/docs/features/ci-features/split-e2e-tasks).

### Keep the Top-Level Test Project

Don't delete your top-level test project (`fancy-app-e2e` in this example). Instead, repurpose it for:

- **Smoke tests**: Quick sanity checks that the app starts and critical paths work
- **Cross-feature integration tests**: Tests that verify multiple features work together
- **End-to-end user journeys**: Tests that span multiple features

This gives you a balanced testing strategy: focused feature tests that run frequently, plus comprehensive integration tests when needed.

### Running Tests in Parallel

When running tests from multiple features in parallel, be mindful of shared resources. Since all feature tests run against the same application instance, avoid conflicts by:

- **Using unique test data**: Don't rely on specific database records or application state
- **Managing ports**: Configure each test to use different ports, or let the test framework find free ports automatically
  - For Cypress, use the [`--port` flag](/nx-api/cypress/executors/cypress#port) to specify or auto-detect ports
  - For Playwright, the `webServerAddress` can be dynamically assigned
- **Isolating state**: Use test-specific user accounts, temporary data, or cleanup between tests
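To make the port handling concrete, here is a sketch of a per-feature Playwright config. The environment variable name, port number, and serve command are illustrative assumptions; the `nxE2EPreset` helper comes from `@nx/playwright/preset`:

```typescript
// feat-cart/playwright.config.ts — a sketch, not a definitive setup
import { defineConfig } from '@playwright/test';
import { nxE2EPreset } from '@nx/playwright/preset';

// Assumption: each feature picks a distinct port (here via an env var)
// so parallel feature test runs don't collide on the same address.
const port = process.env['FEAT_CART_PORT'] ?? '4201';

export default defineConfig({
  ...nxE2EPreset(__filename, { testDir: './src' }),
  use: { baseURL: `http://localhost:${port}` },
  webServer: {
    // Serve the app on this feature's dedicated port.
    command: `nx serve fancy-app --port=${port}`,
    url: `http://localhost:${port}`,
    reuseExistingServer: true,
  },
});
```

Giving each feature its own port (or letting the framework pick a free one) keeps parallel runs from racing for the same server instance.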

### Running Affected Tests

With feature-based testing, you can leverage Nx's affected commands to run only the tests that matter:

```shell
# Run all affected tests based on your changes
nx affected -t test

# Run affected e2e tests
nx affected -t e2e
```

This ensures you're only testing what changed, whether locally or in CI.
