---
title: "Using AI SDK UI | Frameworks"
description: "Learn how to use Mastra with the AI SDK UI library to build AI-powered frontends"
---

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

# Using AI SDK UI

[AI SDK UI](https://sdk.vercel.ai) is a library of React utilities and components for building AI-powered interfaces. In this guide, you'll learn how to use `@mastra/ai-sdk` to convert Mastra's output to AI SDK-compatible formats, enabling you to use its hooks and components in your frontend.

:::note
Migrating from AI SDK v4 to v5? See the [migration guide](/guides/v1/migrations/ai-sdk-v4-to-v5).
:::

:::tip

Want to see more examples? Visit Mastra's [**UI Dojo**](https://ui-dojo.mastra.ai/) or the [Next.js quickstart guide](/guides/v1/getting-started/next-js).

:::

## Getting Started

Use Mastra and AI SDK UI together by installing the `@mastra/ai-sdk` package. `@mastra/ai-sdk` provides custom API routes and utilities for streaming Mastra agents in AI SDK-compatible formats. This includes chat, workflow, and network route handlers, along with utilities and exported types for UI integrations.

`@mastra/ai-sdk` integrates with AI SDK UI's three main hooks: [`useChat()`](https://ai-sdk.dev/docs/ai-sdk-ui/chatbot), [`useCompletion()`](https://ai-sdk.dev/docs/ai-sdk-ui/completion), and [`useObject()`](https://ai-sdk.dev/docs/ai-sdk-ui/object-generation).

Install the required packages to get started:

<Tabs>
  <TabItem value="npm" label="npm">
    ```bash copy
    npm install @mastra/ai-sdk@beta @ai-sdk/react ai
    ```
  </TabItem>
  <TabItem value="pnpm" label="pnpm">
    ```bash copy
    pnpm add @mastra/ai-sdk@beta @ai-sdk/react ai
    ```
  </TabItem>
  <TabItem value="yarn" label="yarn">
    ```bash copy
    yarn add @mastra/ai-sdk@beta @ai-sdk/react ai
    ```
  </TabItem>
  <TabItem value="bun" label="bun">
    ```bash copy
    bun add @mastra/ai-sdk@beta @ai-sdk/react ai
    ```
  </TabItem>
</Tabs>

You're now ready to follow the integration guides and recipes below!

## Integration Guides

Typically, you'll set up API routes that stream Mastra content in AI SDK-compatible format, and then use those routes in AI SDK UI hooks like `useChat()`. Below you'll find two main approaches to achieve this:

- [Mastra's server](#mastras-server)
- [Framework-agnostic](#framework-agnostic)

Once you have your API routes set up, you can use them in the [`useChat()`](#usechat) hook.

### Mastra's server

Run Mastra as a standalone server and connect your frontend (e.g. using Vite + React) to its API endpoints. You'll be using Mastra's [custom API routes](/docs/v1/server-db/custom-api-routes) feature for this.

:::info

Mastra's [**UI Dojo**](https://ui-dojo.mastra.ai/) is an example of this setup.

:::

You can use [`chatRoute()`](/reference/v1/ai-sdk/chat-route), [`workflowRoute()`](/reference/v1/ai-sdk/workflow-route), and [`networkRoute()`](/reference/v1/ai-sdk/network-route) to create API routes that stream Mastra content in AI SDK-compatible format. Once implemented, you can use these API routes in [`useChat()`](#usechat).

<Tabs>

<TabItem value="chatRoute" label="chatRoute()">

This example shows how to set up a chat route at the `/chat` endpoint that uses an agent with the ID `weatherAgent`.

```typescript title="src/mastra/index.ts" copy
import { Mastra } from "@mastra/core";
import { chatRoute } from "@mastra/ai-sdk";

export const mastra = new Mastra({
  server: {
    apiRoutes: [
      chatRoute({
        path: "/chat",
        agent: "weatherAgent",
      }),
    ],
  },
});
```

You can also use dynamic agent routing; see the [`chatRoute()` reference documentation](/reference/v1/ai-sdk/chat-route) for more details.

</TabItem>

<TabItem value="workflowRoute" label="workflowRoute()">

This example shows how to set up a workflow route at the `/workflow` endpoint that uses a workflow with the ID `weatherWorkflow`.

```typescript title="src/mastra/index.ts" copy
import { Mastra } from "@mastra/core";
import { workflowRoute } from "@mastra/ai-sdk";

export const mastra = new Mastra({
  server: {
    apiRoutes: [
      workflowRoute({
        path: "/workflow",
        workflow: "weatherWorkflow",
      }),
    ],
  },
});
```

You can also use dynamic workflow routing; see the [`workflowRoute()` reference documentation](/reference/v1/ai-sdk/workflow-route) for more details.

:::tip Agent streaming in workflows

When a workflow step pipes an agent's stream to the workflow writer (e.g., `await response.fullStream.pipeTo(writer)`), the agent's text chunks and tool calls are forwarded to the UI stream in real time, even when the agent runs inside workflow steps.

See [Workflow Streaming](/docs/v1/streaming/workflow-streaming#streaming-agent-text-chunks-to-ui) for more details.

:::

</TabItem>

<TabItem value="networkRoute" label="networkRoute()">

This example shows how to set up a network route at the `/network` endpoint that uses an agent with the ID `weatherAgent`.

```typescript title="src/mastra/index.ts" copy
import { Mastra } from "@mastra/core";
import { networkRoute } from "@mastra/ai-sdk";

export const mastra = new Mastra({
  server: {
    apiRoutes: [
      networkRoute({
        path: "/network",
        agent: "weatherAgent",
      }),
    ],
  },
});
```

You can also use dynamic network routing; see the [`networkRoute()` reference documentation](/reference/v1/ai-sdk/network-route) for more details.

</TabItem>

</Tabs>

### Framework-agnostic

If you don't want to run Mastra's server and instead use frameworks like Next.js or Express, you can use the [`handleChatStream()`](/reference/v1/ai-sdk/handle-chat-stream), [`handleWorkflowStream()`](/reference/v1/ai-sdk/handle-workflow-stream), and [`handleNetworkStream()`](/reference/v1/ai-sdk/handle-network-stream) functions in your own API route handlers.

They return a `ReadableStream` that you can wrap with [`createUIMessageStreamResponse()`](https://ai-sdk.dev/docs/reference/ai-sdk-ui/create-ui-message-stream-response).

The examples below show you how to use them with Next.js App Router.

<Tabs>

<TabItem value="handleChatStream" label="handleChatStream()">

This example shows how to set up a chat route at the `/chat` endpoint that uses an agent with the ID `weatherAgent`.

```typescript title="app/chat/route.ts" copy
import { handleChatStream } from '@mastra/ai-sdk';
import { createUIMessageStreamResponse } from 'ai';
import { mastra } from '@/src/mastra';

export async function POST(req: Request) {
  const params = await req.json();
  const stream = await handleChatStream({
    mastra,
    agentId: 'weatherAgent',
    params,
  });
  return createUIMessageStreamResponse({ stream });
}
```

</TabItem>

<TabItem value="handleWorkflowStream" label="handleWorkflowStream()">

This example shows how to set up a workflow route at the `/workflow` endpoint that uses a workflow with the ID `weatherWorkflow`.

```typescript title="app/workflow/route.ts" copy
import { handleWorkflowStream } from '@mastra/ai-sdk';
import { createUIMessageStreamResponse } from 'ai';
import { mastra } from '@/src/mastra';

export async function POST(req: Request) {
  const params = await req.json();
  const stream = await handleWorkflowStream({
    mastra,
    workflowId: 'weatherWorkflow',
    params,
  });
  return createUIMessageStreamResponse({ stream });
}
```

</TabItem>

<TabItem value="handleNetworkStream" label="handleNetworkStream()">

This example shows how to set up a network route at the `/network` endpoint that uses an agent with the ID `routingAgent`.

```typescript title="app/network/route.ts" copy
import { handleNetworkStream } from '@mastra/ai-sdk';
import { createUIMessageStreamResponse } from 'ai';
import { mastra } from '@/src/mastra';

export async function POST(req: Request) {
  const params = await req.json();
  const stream = await handleNetworkStream({
    mastra,
    agentId: 'routingAgent',
    params,
  });
  return createUIMessageStreamResponse({ stream });
}
```

</TabItem>

</Tabs>

### `useChat()`

Whether you created API routes through [Mastra's server](#mastras-server) or used a [framework of your choice](#framework-agnostic), you can now use the API endpoints in the `useChat()` hook.

Assuming you set up a route at `/chat` that uses a weather agent, you can ask it questions as shown below. Make sure to set the correct `api` URL.

```ts {9}
import { useChat } from "@ai-sdk/react";
import { useState } from "react";
import { DefaultChatTransport } from "ai";

export default function Chat() {
  const [inputValue, setInputValue] = useState("")
  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({
      api: "http://localhost:4111/chat",
    }),
  });

  const handleFormSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    sendMessage({ text: inputValue });
  };

  return (
    <div>
      <pre>{JSON.stringify(messages, null, 2)}</pre>
      <form onSubmit={handleFormSubmit}>
        <input value={inputValue} onChange={e => setInputValue(e.target.value)} placeholder="Name of the city" />
      </form>
    </div>
  );
}
```

Use [`prepareSendMessagesRequest`](https://ai-sdk.dev/docs/reference/ai-sdk-ui/use-chat#transport.default-chat-transport.prepare-send-messages-request) to customize the request sent to the chat route, for example to pass additional configuration to the agent.
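For example, here's a minimal sketch of a request customizer written as a standalone function (the `resourceId` field and the trim-to-last-message behavior are illustrative assumptions, not Mastra defaults):

```typescript
// Sketch of a prepareSendMessagesRequest-style helper. It returns the
// `body` that the transport will POST to your chat route.
type UIMessage = {
  id: string;
  role: string;
  parts: { type: string; text?: string }[];
};

function prepareBody({ messages }: { messages: UIMessage[] }) {
  return {
    body: {
      // Send only the latest message to keep the payload small; the
      // server can reload earlier history from memory/storage.
      message: messages[messages.length - 1],
      // Illustrative extra field your route could read
      resourceId: "user-123",
    },
  };
}

// Wired into the transport roughly like:
// new DefaultChatTransport({
//   api: "http://localhost:4111/chat",
//   prepareSendMessagesRequest: prepareBody,
// });
```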

### `useCompletion()`

The `useCompletion()` hook handles single-turn completions between your frontend and a Mastra agent, allowing you to send a prompt and receive a streamed response over HTTP.

Your frontend could look like this:

```typescript title="app/page.tsx" copy
import { useCompletion } from '@ai-sdk/react';

export default function Page() {
  const { completion, input, handleInputChange, handleSubmit } = useCompletion({
    api: '/api/completion',
  });

  return (
    <form onSubmit={handleSubmit}>
      <input
        name="prompt"
        value={input}
        onChange={handleInputChange}
        id="input"
      />
      <button type="submit">Submit</button>
      <div>{completion}</div>
    </form>
  );
}
```

Below are two approaches to implementing the backend:

<Tabs>

<TabItem value="mastra-server" label="Mastra Server">

```ts title="src/mastra/index.ts" copy
import { Mastra } from '@mastra/core/mastra';
import { registerApiRoute } from '@mastra/core/server';
import { handleChatStream } from '@mastra/ai-sdk';
import { createUIMessageStreamResponse } from 'ai';

export const mastra = new Mastra({
  server: {
    apiRoutes: [
      registerApiRoute('/completion', {
        method: 'POST',
        handler: async (c) => {
          const { prompt } = await c.req.json();
          const mastra = c.get('mastra');
          const stream = await handleChatStream({
            mastra,
            agentId: 'weatherAgent',
            params: {
              messages: [
                {
                  id: "1",
                  role: 'user',
                  parts: [
                    {
                      type: 'text',
                      text: prompt
                    }
                  ]
                }
              ],
            }
          })

          return createUIMessageStreamResponse({ stream });
        }
      })
    ]
  }
});
```

</TabItem>

<TabItem value="nextjs" label="Next.js">

```ts title="app/completion/route.ts" copy
import { handleChatStream } from '@mastra/ai-sdk';
import { createUIMessageStreamResponse } from 'ai';
import { mastra } from '@/src/mastra';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { prompt }: { prompt: string } = await req.json();

  const stream = await handleChatStream({
    mastra,
    agentId: 'weatherAgent',
    params: {
      messages: [
        {
          id: "1",
          role: 'user',
          parts: [
            {
              type: 'text',
              text: prompt
            }
          ]
        }
      ],
    },
  });
  return createUIMessageStreamResponse({ stream });
}
```

</TabItem>

</Tabs>

## Recipes

### Stream transformations

To manually transform Mastra's streams to AI SDK-compatible format, use the [`toAISdkStream()`](/reference/v1/ai-sdk/to-ai-sdk-stream) utility. See the [examples](/reference/v1/ai-sdk/to-ai-sdk-stream#examples) for concrete usage patterns.

### Passing additional data

[`sendMessage()`](https://ai-sdk.dev/docs/reference/ai-sdk-ui/use-chat#send-message) allows you to pass additional data from the frontend to Mastra. This data can then be used on the server as [`RequestContext`](/docs/v1/server-db/request-context).

Here's an example of the frontend code:

```typescript {15-25} copy
import { useChat } from "@ai-sdk/react";
import { useState } from "react";
import { DefaultChatTransport } from 'ai';

export function ChatAdditional() {
  const [inputValue, setInputValue] = useState('')
  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({
      api: 'http://localhost:4111/chat-extra',
    }),
  });

  const handleFormSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    sendMessage({ text: inputValue }, {
      body: {
        data: {
          userId: "user123",
          preferences: {
            language: "en",
            temperature: "celsius"
          }
        }
      }
    });
  };

  return (
    <div>
      <pre>{JSON.stringify(messages, null, 2)}</pre>
      <form onSubmit={handleFormSubmit}>
        <input value={inputValue} onChange={e => setInputValue(e.target.value)} placeholder="Name of the city" />
      </form>
    </div>
  );
}
```

Below are two examples of how to implement the backend portion.

<Tabs>

<TabItem value="mastra-server" label="Mastra Server">

Add a `chatRoute()` to your Mastra configuration as shown above. Then, add a server-level middleware:

```typescript title="src/mastra/index.ts" copy
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  server: {
    middleware: [
      async (c, next) => {
        const requestContext = c.get("requestContext");

        if (c.req.method === "POST") {
          const clonedReq = c.req.raw.clone();
          const body = await clonedReq.json();

          if (body?.data) {
            for (const [key, value] of Object.entries(body.data)) {
              requestContext.set(key, value);
            }
          }
        }
        await next();
      },
    ],
  },
});
```

:::info

You can access this data in your tools via the `requestContext` parameter. See the [Request Context documentation](/docs/v1/server-db/request-context) for more details.

:::
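As a rough sketch of what reading those values inside a tool could look like (the context shape is simplified here to a plain `Map` standing in for Mastra's `RequestContext`; the field names mirror the frontend example above):

```typescript
// Simplified sketch: a tool-style execute() function reading values
// the middleware copied from body.data. Not Mastra's real context type.
type SketchContext = { requestContext?: Map<string, unknown> };

async function executeWeather(
  inputData: { city: string },
  context?: SketchContext,
) {
  // "preferences" was set from body.data by the middleware above
  const prefs = context?.requestContext?.get("preferences") as
    | { language?: string; temperature?: string }
    | undefined;
  const unit = prefs?.temperature === "celsius" ? "°C" : "°F";
  return { summary: `Weather for ${inputData.city} (${unit})` };
}
```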

</TabItem>

<TabItem value="nextjs" label="Next.js">

```typescript title="app/chat-extra/route.ts" copy
import { handleChatStream } from '@mastra/ai-sdk';
import { RequestContext } from "@mastra/core/request-context";
import { createUIMessageStreamResponse } from 'ai';
import { mastra } from '@/src/mastra';

export async function POST(req: Request) {
  const { messages, data } = await req.json();

  const requestContext = new RequestContext();

  if (data) {
    for (const [key, value] of Object.entries(data)) {
      requestContext.set(key, value);
    }
  }

  const stream = await handleChatStream({
    mastra,
    agentId: 'weatherAgent',
    params: {
      messages,
      requestContext,
    },
  });
  return createUIMessageStreamResponse({ stream });
}
```

</TabItem>

</Tabs>

### Custom UI

The `@mastra/ai-sdk` package transforms Mastra streams (e.g., workflow and network streams) into AI SDK-compatible [`UIMessage` data parts](https://ai-sdk.dev/docs/reference/ai-sdk-core/ui-message#datauipart).

- **Top-level parts**: These are streamed via direct workflow and network stream transformations (e.g., in `workflowRoute()` and `networkRoute()`)
  - `data-workflow`: Aggregates a workflow run with step inputs/outputs and final usage.
  - `data-network`: Aggregates a routing/network run with ordered steps (agent/workflow/tool executions) and outputs.

- **Nested parts**: These are streamed via nested and merged streams from within a tool's `execute()` method.
  - `data-tool-workflow`: Nested workflow emitted from within a tool stream.
  - `data-tool-network`: Nested network emitted from within a tool stream.
  - `data-tool-agent`: Nested agent emitted from within a tool stream.

For example, a [nested agent stream within a tool](/docs/v1/streaming/tool-streaming#tool-using-an-agent) emits `data-tool-agent` UI message parts, which you can render on the client as shown below:

```typescript title="app/page.tsx" copy
"use client";

import { useChat } from "@ai-sdk/react";
import { AgentTool } from '../ui/agent-tool';
import { DefaultChatTransport } from 'ai';
import type { AgentDataPart } from "@mastra/ai-sdk";

export default function Page() {
  const { messages } = useChat({
    transport: new DefaultChatTransport({
      api: 'http://localhost:4111/chat',
    }),
  });

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'data-tool-agent':
                return (
                  <AgentTool {...part.data as AgentDataPart} key={`${message.id}-${i}`} />
                );
              default:
                return null;
            }
          })}
        </div>
      ))}
    </div>
  );
}
```

```typescript title="ui/agent-tool.ts" copy
import { Tool, ToolContent, ToolHeader, ToolOutput } from "../ai-elements/tool";
import type { AgentDataPart } from "@mastra/ai-sdk";

export const AgentTool = ({ id, text, status }: AgentDataPart) => {
  return (
    <Tool>
      <ToolHeader
        type={`${id}`}
        state={status === 'finished' ? 'output-available' : 'input-available'}
      />
      <ToolContent>
        <ToolOutput output={text} />
      </ToolContent>
    </Tool>
  );
};
```

### Custom tool streaming

To stream custom data parts from within your tool execution function, use the `writer.custom()` method.

:::tip

It is important that you `await` the `writer.custom()` call.

:::

```typescript {4,7-10,14-17} copy
import { createTool } from "@mastra/core/tools";

export const testTool = createTool({
  execute: async (inputData, context) => {
    const { value } = inputData;

    await context?.writer?.custom({
      type: "data-tool-progress",
      status: "pending"
    });

    const response = await fetch(...);

    await context?.writer?.custom({
      type: "data-tool-progress",
      status: "success"
    });

    return {
      value: ""
    };
  }
});
```

For more information about tool streaming, see the [Tool streaming documentation](/docs/v1/streaming/tool-streaming).
