---
title: "Streaming"
description: "Real-time streaming of agent responses"
icon: "wave-pulse"
---

Streaming enables real-time output from your agents, providing immediate feedback as the agent works through tasks. This creates responsive user experiences and allows you to show progress indicators, tool calls, and partial results as they happen.

## Why Stream?

Streaming provides several benefits:

- **Better UX**: Show progress instead of waiting for completion
- **Immediate feedback**: Users see the agent working in real-time
- **Transparency**: Display tool calls and reasoning as they happen
- **Responsiveness**: Start processing results before the agent finishes
- **Error visibility**: Catch and display errors immediately

<Tip>
**Perfect for Chat Interfaces**: Streaming is essential for building chat-like interfaces where users expect to see responses appear token-by-token, just like ChatGPT.
</Tip>

## Streaming Methods

The MCPAgent provides two streaming methods: `stream()`, which yields one item per intermediate step (tool call plus observation), and `streamEvents()`, which emits fine-grained events such as individual LLM tokens. The example below iterates over the agent's steps with `stream()`.

<CodeGroup>
```typescript TypeScript
import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient } from 'mcp-use'

async function stepStreamingExample() {
    // Setup agent
    const config = {
        mcpServers: {
            playwright: {
                command: 'npx',
                args: ['@playwright/mcp@latest']
            }
        }
    }

    const client = new MCPClient(config)
    const llm = new ChatOpenAI({ model: 'gpt-4' })
    const agent = new MCPAgent({ llm, client })

    // Stream the agent's steps
    console.log('🤖 Agent is working...')
    console.log('-'.repeat(50))

    for await (const step of agent.stream('Search for the latest Python news and summarize it')) {
        console.log(`\n🔧 Tool: ${step.action.tool}`)
        console.log(`📝 Input: ${JSON.stringify(step.action.toolInput)}`)
        const result = step.observation.substring(0, 100)
        console.log(`📄 Result: ${result}${step.observation.length > 100 ? '...' : ''}`)
    }

    console.log('\n🎉 Done!')
    await client.closeAllSessions()
}

stepStreamingExample().catch(console.error)
```

</CodeGroup>

## Low-Level Event Streaming

For more granular control, use the `streamEvents` method to get real-time output events:

<CodeGroup>
```typescript TypeScript
import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient } from 'mcp-use'

async function basicStreamingExample() {
    // Setup agent
    const config = {
        mcpServers: {
            playwright: {
                command: 'npx',
                args: ['@playwright/mcp@latest']
            }
        }
    }

    const client = new MCPClient(config)
    const llm = new ChatOpenAI({ model: 'gpt-4' })
    const agent = new MCPAgent({ llm, client })

    // Stream the agent's response
    console.log('Agent is working...')

    for await (const event of agent.streamEvents('Search for the latest Python news and summarize it')) {
        if (event.event === 'on_chat_model_stream') {
            // Stream LLM output token by token
            const text = event.data?.chunk?.text
            if (text) {
                process.stdout.write(text)
            }
        }
    }

    console.log('\n\nDone!')
    await client.closeAllSessions()
}

basicStreamingExample().catch(console.error)
```

</CodeGroup>

<Tip>
The streaming API is based on LangChain's `streamEvents` method. For more details on event types and their data structure, see the [LangChain streaming documentation](https://js.langchain.com/docs/how_to/streaming/).
</Tip>

## Choosing the Right Streaming Method

<CardGroup cols={2}>
  <Card title="Use stream() when:" icon="list-ordered">
    • You want to show step-by-step progress
    • You need to process each tool call individually
    • You're building a workflow UI
    • You want simple, clean step tracking
  </Card>
  <Card title="Use streamEvents() when:" icon="code">
    • You need fine-grained control over events
    • You're building real-time chat interfaces
    • You want to stream LLM reasoning text
    • You need custom event filtering
  </Card>
</CardGroup>
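The custom event filtering mentioned above can be sketched as a small async-generator helper. The agent is replaced here by a mocked event stream so the sketch runs offline; the lifecycle event names (`on_chain_start`, `on_chain_end`) are assumed from LangChain's event set, alongside the `on_chat_model_stream` event used in the examples above.

```typescript
// Minimal event shape mirroring the fields used in the streamEvents examples above.
interface StreamEvent {
  event: string
  data?: { chunk?: { text?: string } }
}

// Yield only the event types the caller asked for.
async function* filterEvents(
  events: AsyncIterable<StreamEvent>,
  allowed: Set<string>
): AsyncGenerator<StreamEvent> {
  for await (const event of events) {
    if (allowed.has(event.event)) yield event
  }
}

// Mocked stream standing in for agent.streamEvents(...), so the sketch runs offline.
async function* mockEvents(): AsyncGenerator<StreamEvent> {
  yield { event: 'on_chain_start' }
  yield { event: 'on_chat_model_stream', data: { chunk: { text: 'Hel' } } }
  yield { event: 'on_chat_model_stream', data: { chunk: { text: 'lo' } } }
  yield { event: 'on_chain_end' }
}

// Collect just the streamed model text, ignoring chain lifecycle events.
async function collectText(events: AsyncIterable<StreamEvent>): Promise<string> {
  let out = ''
  for await (const e of filterEvents(events, new Set(['on_chat_model_stream']))) {
    out += e.data?.chunk?.text ?? ''
  }
  return out
}
```

With a live agent, you would pass `agent.streamEvents(...)` in place of `mockEvents()`.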

## Examples

### Building a Streaming UI

Here's how you might build a simple console UI on top of `stream()`:
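A minimal sketch follows, with a mocked step stream standing in for a live agent; the tool names and helper functions are illustrative, not part of the mcp-use API.

```typescript
// Step shape mirroring what agent.stream() yields in the examples above.
interface AgentStep {
  action: { tool: string; toolInput: unknown }
  observation: string
}

// Render one step as a compact, numbered progress line.
function renderStep(index: number, step: AgentStep): string {
  const input = JSON.stringify(step.action.toolInput)
  const preview =
    step.observation.length > 60
      ? `${step.observation.slice(0, 60)}...`
      : step.observation
  return `[${index}] ${step.action.tool}(${input}) -> ${preview}`
}

// Mocked step stream standing in for agent.stream(...); tool names are illustrative.
async function* mockSteps(): AsyncGenerator<AgentStep> {
  yield {
    action: { tool: 'navigate', toolInput: { url: 'https://example.com' } },
    observation: 'Page loaded',
  }
  yield {
    action: { tool: 'extract_text', toolInput: {} },
    observation: 'Example Domain. This domain is for use in illustrative examples.',
  }
}

// Print each step as it arrives and return the rendered lines.
async function runConsoleUI(steps: AsyncIterable<AgentStep>): Promise<string[]> {
  const lines: string[] = []
  let i = 1
  for await (const step of steps) {
    const line = renderStep(i++, step)
    console.log(line)
    lines.push(line)
  }
  return lines
}
```

To drive this from a real agent, pass `agent.stream('your query')` to `runConsoleUI` instead of `mockSteps()`.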

### Web Streaming with Server-Sent Events

For web applications, you can forward agent output to the browser using Server-Sent Events (SSE):
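A framework-agnostic sketch using only Node's built-in `http` module; any server (Express, Fastify, or a Python framework like FastAPI) follows the same pattern. The token stream is mocked in place of a live `streamEvents()` call, and the port is illustrative.

```typescript
import http from 'node:http'

// Format one Server-Sent Events frame: "data: <payload>\n\n".
function sseChunk(payload: string): string {
  return `data: ${payload}\n\n`
}

// Mocked token stream standing in for the on_chat_model_stream events
// you would read from agent.streamEvents(...).
async function* mockTokens(): AsyncGenerator<string> {
  for (const token of ['Hello', ' ', 'world']) {
    yield token
  }
}

// An HTTP handler that forwards each token to the client as it arrives.
const server = http.createServer(async (_req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
  })
  for await (const token of mockTokens()) {
    res.write(sseChunk(token))
  }
  res.write(sseChunk('[DONE]')) // conventional end-of-stream marker
  res.end()
})

// To serve: server.listen(3000), then consume the stream in the browser with
// new EventSource('http://localhost:3000').
```

On the client side, an `EventSource` receives each `data:` frame as a `message` event and can append tokens to the page as they arrive.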

## Next Steps

<CardGroup cols={2}>
  <Card title="Agent Configuration" icon="gear" href="/typescript/agent/agent-configuration">
    Learn more about configuring agents for optimal streaming performance
  </Card>
  <Card title="Multi-Server Setup" icon="server" href="/typescript/advanced/multi-server-setup">
    Stream output from agents using multiple MCP servers
  </Card>
</CardGroup>
