---
sidebar_position: 1
---

# HTTP Response Output Parser

import CodeBlock from "@theme/CodeBlock";
import HttpResponse from "@examples/prompts/http_response_output_parser.ts";
import EventStreamHttpResponse from "@examples/prompts/http_response_output_parser_event_stream.ts";
import CustomOutputHttpResponse from "@examples/prompts/http_response_output_parser_custom.ts";

The HTTP Response output parser allows you to stream LLM output as properly formatted bytes in a web [HTTP response](https://developer.mozilla.org/en-US/docs/Web/API/Response):

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/openai
```

<CodeBlock language="typescript">{HttpResponse}</CodeBlock>
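To illustrate the mechanics the example above relies on: the parser emits a stream of encoded bytes, which the web [`Response`](https://developer.mozilla.org/en-US/docs/Web/API/Response) constructor accepts directly. The sketch below uses a hand-built `ReadableStream` as a stand-in for the parser's output, so no LLM call is involved:

```typescript
// A hand-made byte stream standing in for HttpResponseOutputParser output.
const encoder = new TextEncoder();
const stream = new ReadableStream<Uint8Array>({
  start(controller) {
    // Enqueue a few encoded chunks, as a streaming LLM would produce.
    for (const chunk of ["Hello", " ", "world"]) {
      controller.enqueue(encoder.encode(chunk));
    }
    controller.close();
  },
});

// Any ReadableStream of bytes can be returned as an HTTP response body.
const response = new Response(stream, {
  headers: { "Content-Type": "text/plain; charset=utf-8" },
});

// Reading the body back yields the concatenated chunks.
const text = await response.text();
```

This is only a sketch of the underlying Web Streams plumbing; in the real example the stream comes from piping the model into the parser.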

You can also stream back chunks as an [event stream](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events):

<CodeBlock language="typescript">{EventStreamHttpResponse}</CodeBlock>

Or pass a custom output parser to internally parse chunks, for example when streaming function call outputs:

<CodeBlock language="typescript">{CustomOutputHttpResponse}</CodeBlock>
