---
sidebar_position: 7
title: Dispatcher
sidebar_label: Dispatcher
---

[Read the API Reference »](/api/core/Classes/Dispatcher.mdx)

The dispatcher provides advanced features and fetching modes such as retries, deduplication, queueing, and canceling.
It catalogs requests by assigning them unique IDs, tracking their lifecycle, and triggering them at the right time and
in the right way.

Every request in the dispatcher is stored in the queue structure. This allows us to perform many operations (e.g.
stopping, pausing, or starting) on the dispatched requests. However, this does not mean that all requests will be sent
individually; there are multiple dispatching modes available, which you can read about in the
[dispatching modes](#dispatching-modes) section.

---

:::tip Purpose

1. Orchestrates request flow and lifecycle
2. Can retry, deduplicate, manage offline mode, and more
3. Allows pausing and resuming requests or queue groups
4. Bridges adapter, cache, and manager functionality

:::

---

## How it works

Each request triggered with `send()`, unlike the `exec()` method, goes through the entire lifecycle in the library.
First, we check the execution mode: whether it is concurrent, queued, cancelable, or deduplicated. This determines how we
handle the current and previous requests. Next, we add the request to a queue, which is a group of requests with the
same `queryKey`. Once a request is picked from the queue to be triggered, we assign it a `requestId` and execute it in
the adapter of our choice. From this point, we trigger the entire lifecycle, including retries, offline handling, and
request and response events. Finally, we remove the request from the queue (whether successful or not) and pass the data
to the cache for state handling.
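
The flow above can be sketched as a minimal queue loop. Everything here (`enqueue`, `runQueue`, the ID format) is
illustrative pseudocode for the described steps, not Hyper Fetch's actual implementation:

```ts
// Illustrative sketch of the dispatcher flow; not Hyper Fetch internals.
type QueueItem = { requestId: string; execute: () => Promise<unknown> };

let nextId = 0;
const queues = new Map<string, QueueItem[]>();

// Steps 1-2: add the request to the queue group for its queryKey
// and assign it a unique requestId.
function enqueue(queryKey: string, execute: () => Promise<unknown>): string {
  const requestId = `${queryKey}_${nextId++}`;
  const queue = queues.get(queryKey) ?? [];
  queue.push({ requestId, execute });
  queues.set(queryKey, queue);
  return requestId;
}

// Steps 3-4: pick requests from the queue, execute them in the adapter,
// and remove them whether they succeed or fail; results would then be
// passed on to the cache for state handling.
async function runQueue(queryKey: string): Promise<unknown[]> {
  const queue = queues.get(queryKey) ?? [];
  const results: unknown[] = [];
  while (queue.length) {
    const item = queue.shift()!;
    try {
      results.push(await item.execute());
    } catch (error) {
      results.push(error);
    }
  }
  return results;
}
```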

#### Two instances

There are two dispatcher instances in the `Client` class: one for query requests and one for mutation requests. This
design provides greater configuration flexibility, letting you set up data querying and data mutation behavior
separately. For example, you can set different default settings for each dispatcher.

## QueryKey

The `queryKey` is a unique identifier for request queues. It is used to propagate and receive request events on a given
queue, and to manage incoming and outgoing requests. Each unique `queryKey` creates an isolated queue array. By
default, keys are auto-generated from the request's endpoint and URL parameters, but you can also set the key manually
when creating the `Request`, or use a generic generator.

```tsx
const getUser = client.createRequest()({
  method: "GET",
  endpoint: "/users/:userId",
});

// highlight-start
const queryKey = getUser.queryKey; // "/users/:userId"
const queryKeyWithParams = getUser.setParams({ userId: 1 }).queryKey; // "/users/1"
const queryKeyWithQueryParams = getUser.setQueryParams({ page: 1 }).queryKey; // "/users/:userId"
// highlight-end
```

### Custom queryKey

You can also set a custom query key:

```tsx
import { client } from "./api";

const getUser = client.createRequest()({
  method: "GET",
  endpoint: "/users/:userId",
  queryKey: "CUSTOM_QUERY_KEY",
});

// highlight-start
console.log(getUser.queryKey); // "CUSTOM_QUERY_KEY"
// highlight-end
```

### Generic queryKey

You can also set a generic query key:

```tsx
import { client } from "./api";

// highlight-start
client.setQueryKeyMapper((request) => {
  if (request.requestOptions.endpoint === "/users/:userId") {
    return `CUSTOM_QUERY_KEY_${request.params?.userId || "unknown"}`;
  }
});
// highlight-end

const getUser = client.createRequest()({
  method: "GET",
  endpoint: "/users/:userId",
});

// highlight-start
console.log(getUser.setParams({ userId: 1 }).queryKey); // "CUSTOM_QUERY_KEY_1"
// highlight-end
```

---

## RequestId

The **`requestId`** is autogenerated by the dispatchers when a request is executed. It is unique within a single
`Client` instance, but we do not guarantee its uniqueness between different `Client` instances. It's used for precise
communication with the dispatcher, for example, when listening for particular request events or removing a request.
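
To illustrate why uniqueness matters, here is a hypothetical per-request event bus keyed by `requestId`. The names
(`onResponseById`, `emitResponse`) are made up for this sketch and are not the library's API:

```ts
// Hypothetical event bus keyed by requestId; names are illustrative only.
type Listener = (data: unknown) => void;
const listenersById = new Map<string, Listener[]>();

// Subscribe to the response of one specific dispatched request.
function onResponseById(requestId: string, listener: Listener): void {
  const current = listenersById.get(requestId) ?? [];
  listenersById.set(requestId, [...current, listener]);
}

// Notify only the listeners of that particular request.
function emitResponse(requestId: string, data: unknown): void {
  (listenersById.get(requestId) ?? []).forEach((listener) => listener(data));
}
```

Because each dispatched request gets its own ID, events can be routed to exactly one subscriber instead of every
request sharing the same `queryKey`.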

---

## Available methods

<ShowMore>

(@import core Dispatcher type=methods&display=table)

</ShowMore>

<LinkCard
  type="api"
  title="Detailed Dispatcher API Methods"
  description="Explore all available methods, their parameters, and return values for the Dispatcher class."
  to="/docs/api/core/Classes/Dispatcher#methods"
/>

---

## Features

Here is a list of features that the dispatcher provides:

1. ### Retrying

One of the features that the dispatcher provides is retrying failed requests. It can retry requests until a successful
result is obtained or the retry limit is reached.

Below is an example of how to set the request to retry and specify the time between attempts. The response promise will
be resolved on success or after the last retry attempt.

```tsx
const getUser = client.createRequest()({
  method: "GET",
  endpoint: "/users/:userId",
  retry: 5,
  retryTime: 3000,
});

const response = await getUser.send();
```
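
Conceptually, the retry logic works like the standalone helper below, where `retry` is the number of additional
attempts and `retryTime` the delay between them. This is a sketch of the idea, not the dispatcher's internals:

```ts
// Standalone sketch of retry-until-success-or-limit; not Hyper Fetch internals.
const wait = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry<T>(fn: () => Promise<T>, retry: number, retryTime: number): Promise<T> {
  let lastError: unknown;
  // One initial attempt plus `retry` additional attempts.
  for (let attempt = 0; attempt <= retry; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < retry) await wait(retryTime);
    }
  }
  throw lastError;
}
```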

---

2. ### Queues

Dispatchers store requests in queues, giving us fine-grained control over the request flow. This
flexibility allows us to `stop()`, `pause()`, and `start()` request execution. We can apply these actions to a single
request or to entire queues (a new queue is created for each unique `queryKey`).

:::note What is the difference between `stop()` and `pause()`?

The main difference lies in how they handle the currently executing request:

- `stop()` - Cancels the currently executing request, along with all other requests in the queue.
- `pause()` - Allows the currently executing request to finish, but prevents any further requests from starting.

In both scenarios, the requests are not removed from the queue and can be resumed later.

:::

#### Start

When a queue is stopped, you can use `start()` to resume processing requests.

```tsx live title="Start request queue" size=md
// Requests will be stopped
client.fetchDispatcher.stop(getUser.queryKey);

// Trigger the request
getUser.send();

setTimeout(() => {
  // Start the request
  client.fetchDispatcher.start(getUser.queryKey);
}, 2000);
```

#### Pause

To pause requests, just use the `pause()` method on the Dispatcher.

```tsx
// To pause the queue
// highlight-next-line
client.fetchDispatcher.pause("queryKey");
```

:::warning Pausing individual requests

You cannot `pause()` individual requests; they can only be stopped.

:::

```tsx live  title="Pausing request queue" size=md
// Trigger the request
getUser.send();

setTimeout(() => {
  // Pause the queue, this will complete the in-progress request and hold all others
  client.fetchDispatcher.pause(getUser.queryKey);
}, 500);

setTimeout(() => {
  // Trigger another request; it will be held in the queue
  getUser.send();
}, 1000);

// Start the queue
setTimeout(() => {
  client.fetchDispatcher.start(getUser.queryKey);
}, 4000);
```

#### Stop

To stop a queue, use the `stop()` method. It will cancel the in-progress request and hold all others.

```tsx
// To stop the queue
// highlight-next-line
client.fetchDispatcher.stop("queryKey");
```

```tsx live title="Stopping request queue" size=md
// Trigger the request
getUser.send();

setTimeout(() => {
  // Stop the queue, this will cancel the in-progress request and hold all others
  client.fetchDispatcher.stop(getUser.queryKey);
}, 500);

setTimeout(() => {
  // Trigger another request; it will be stopped
  getUser.send();
}, 1000);

// Start the queue; this will resume the stopped requests and allow them to be sent
setTimeout(() => {
  client.fetchDispatcher.start(getUser.queryKey);
}, 4000);
```

You can also stop an individual request with the `stopRequest()` method. It will cancel the request and keep it stopped
in the queue, so it can be started again later.

```tsx
// To stop individual request
// highlight-next-line
client.fetchDispatcher.stopRequest("queryKey", "requestId");
```

```tsx live title="Stopping individual request" size=md
let getUserRequestId = "";

// Trigger the request
getUser.send({
  onBeforeSent: ({ requestId }) => {
    getUserRequestId = requestId;
  },
});

// Trigger another request
getUser.send();

// Stop individual request
setTimeout(() => {
  client.fetchDispatcher.stopRequest(getUser.queryKey, getUserRequestId);
}, 1000);

// Start stopped request
setTimeout(() => {
  client.fetchDispatcher.startRequest(getUser.queryKey, getUserRequestId);
}, 3000);
```

---

3. ### Offline

When the connection is lost, the queue is stopped, and any failed or interrupted requests will wait for the connection
to be restored before being retried. This prevents data loss and allows us to leverage caching abilities.

```tsx live title="Offline handling"
// Simulate the connection loss
client.appManager.setOnline(false);

// Trigger the request being offline
getUser.send();

// Return back online and the request will be resumed
setTimeout(() => {
  client.appManager.setOnline(true);
}, 2000);
```

To **disable offline mode**, you can set the request's `offline` option to `false`.

```ts
const newRequest = client.createRequest()({
  endpoint: "/request",
  // highlight-next-line
  offline: false,
});
```

---

## Modes

Each dispatcher queue can operate in several modes, which can be selected via request properties.

### Concurrent

In this mode, requests are not limited and can be executed concurrently. This is the default mode for all requests.

This mode is active when the `queued` property on a request is set to `false`, which is the default.

```tsx live
import { getUser } from "./api";

// First request
getUser.send();

// Second request
getUser.send();
```

:::tip When could it be useful?

This is the default mode for request execution. Use it to send multiple requests simultaneously and receive responses in
parallel.

:::

---

### Deduplication

Deduplication optimizes data exchange with the server. If we ask the server for the same data twice at the same time
with different requests, this mode will perform one call and propagate the response to both sources.

Enable this mode by setting the request's `deduplication` prop to `true`.

```tsx live size=md
import { getUser } from "./api";

const getUserDeduplicated = getUser.setDeduplicate(true);

// First request
const request1 = getUserDeduplicated.send();

// Second request (Deduplicated - it will never be triggered)
const request2 = getUserDeduplicated.send();

setTimeout(async () => {
  // Third request (Deduplicated - it will never be triggered)
  const request3 = getUserDeduplicated.send();

  // Responses logs
  const [response1, response2, response3] = await Promise.all([request1, request2, request3]);

  console.log(response1);
  console.log(response2);
  console.log(response3);
}, 500);

// Wait for the requests to be resolved
```

:::tip When could it be useful?

Imagine different components making the same request. Instead of sending the same request multiple times, you can group
them into one and listen to its response. This way, you can avoid over-fetching and improve your application's
performance.

:::
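
Under the hood, deduplication can be thought of as sharing a single in-flight promise per `queryKey`. A minimal
standalone sketch of that idea (not the library's implementation):

```ts
// Sketch of promise-sharing deduplication keyed by queryKey; illustrative only.
const inFlight = new Map<string, Promise<unknown>>();

function deduplicated<T>(queryKey: string, fetcher: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(queryKey);
  if (existing) return existing as Promise<T>; // reuse the ongoing call
  const promise = fetcher().finally(() => inFlight.delete(queryKey));
  inFlight.set(queryKey, promise);
  return promise;
}
```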

---

### Cancelable

Cancelable mode avoids race conditions when multiple requests are sent, but only the result of the last one is desired.
This mode is ideal for paginated lists of data, where only a single page needs to be shown, even if the user triggers
new requests with rapidly changing pagination.

Enable this mode by setting the request's `cancelable` prop to `true`.

```tsx live size=md
import { getUser } from "./api";

const getUserCancelable = getUser.setCancelable(true);

// First request
getUserCancelable.send();

setTimeout(() => {
  // Second request
  getUserCancelable.send();
}, 500);
```

:::tip When could it be useful?

Cancelable mode is very useful for paginated data. You can cancel previous requests (for example, when pages are
switched dynamically) and send only the current one, thus avoiding over-fetching.

:::
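
The cancelable behavior can be approximated with one `AbortController` per `queryKey`: starting a new request aborts
the previous one. The sketch below passes the signal to a fetcher; it is illustrative, not the library's
implementation:

```ts
// Sketch: abort the previous in-flight request for a queryKey when a new one starts.
const controllers = new Map<string, AbortController>();

function cancelable<T>(queryKey: string, fetcher: (signal: AbortSignal) => Promise<T>): Promise<T> {
  controllers.get(queryKey)?.abort(); // cancel the previous request, if any
  const controller = new AbortController();
  controllers.set(queryKey, controller);
  return fetcher(controller.signal);
}
```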

---

### Queued

This mode is ideal for sending requests sequentially. It allows you to combine requests into an ordered queue that is
processed one item at a time. In this mode, you can `start`, `stop`, or `pause` the entire queue or individual
requests.

Enable this mode by setting the request's `queued` prop to `true`.

```tsx live size=md
import { postFile } from "./api";

const postFileQueued = postFile.setQueued(true);

// First request
postFileQueued.send();
postFileQueued.send();
postFileQueued.send();
```

:::tip When could it be useful?

This mode is ideal for transferring large amounts of data. It mitigates the impact of network issues by processing
requests sequentially, which prevents multiple requests from failing if the connection is lost. Additionally, it
improves user experience by enabling parts of an application to become available as each data transfer completes, rather
than making the user wait for all transfers to finish.

:::
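
Sequential processing boils down to chaining each new task onto the previous one's promise. A standalone sketch of a
one-at-a-time queue (illustrative only, not the dispatcher's internals):

```ts
// Sketch: process one task at a time per queryKey by chaining onto the last promise.
const tails = new Map<string, Promise<unknown>>();

function queued<T>(queryKey: string, task: () => Promise<T>): Promise<T> {
  const previous = tails.get(queryKey) ?? Promise.resolve();
  const next = previous.then(task, task); // start only after the previous task settles
  tails.set(queryKey, next);
  return next;
}
```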

---
