# PromptLayerChatOpenAI

You can pass the optional `returnPromptLayerId` boolean to get back a `promptLayerRequestId`. Here is an example of retrieving the PromptLayerChatOpenAI request ID:

```typescript
import { PromptLayerChatOpenAI } from "langchain/chat_models/openai";
import { SystemMessage } from "langchain/schema";

const chat = new PromptLayerChatOpenAI({
  returnPromptLayerId: true,
});

const respA = await chat.generate([
  [
    new SystemMessage(
      "You are a helpful assistant that translates English to French."
    ),
  ],
]);

console.log(JSON.stringify(respA, null, 3));
/*
  {
    "generations": [
      [
        {
          "text": "Bonjour! Je suis un assistant utile qui peut vous aider à traduire de l'anglais vers le français. Que puis-je faire pour vous aujourd'hui?",
          "message": {
            "type": "ai",
            "data": {
              "content": "Bonjour! Je suis un assistant utile qui peut vous aider à traduire de l'anglais vers le français. Que puis-je faire pour vous aujourd'hui?"
            }
          },
          "generationInfo": {
            "promptLayerRequestId": 2300682
          }
        }
      ]
    ],
    "llmOutput": {
      "tokenUsage": {
        "completionTokens": 35,
        "promptTokens": 19,
        "totalTokens": 54
      }
    }
  }
*/
```
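The request ID lives inside `generationInfo` on each generation, so pulling it out takes a little traversal. Here is a small standalone sketch of that step; the response types and the helper `getPromptLayerRequestIds` are illustrative (they mirror the example output above and are not part of the langchain API):

```typescript
// Minimal shapes mirroring the generate() result shown above.
type GenerationInfo = { promptLayerRequestId?: number };
type Generation = { text: string; generationInfo?: GenerationInfo };
type LLMResult = { generations: Generation[][] };

// Collect every PromptLayer request ID present in a result.
function getPromptLayerRequestIds(result: LLMResult): number[] {
  return result.generations.flat().flatMap((gen) =>
    gen.generationInfo?.promptLayerRequestId !== undefined
      ? [gen.generationInfo.promptLayerRequestId]
      : []
  );
}

const resp: LLMResult = {
  generations: [
    [{ text: "Bonjour!", generationInfo: { promptLayerRequestId: 2300682 } }],
  ],
};

console.log(getPromptLayerRequestIds(resp)); // [ 2300682 ]
```

These IDs can then be used with PromptLayer's dashboard or API to look up the logged request.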
# Output parsers | 🦜️🔗 Langchain
## Output parsers

Language models output text, but you will often want more structured information back than plain text. This is where output parsers come in.

Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:

- **Get format instructions**: A method which returns a string containing instructions for how the output of a language model should be formatted.
- **Parse**: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.

And one optional method:

- **Parse with prompt**: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated that response) and parses it into some structure.
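The interface above can be sketched as a plain TypeScript class. A real langchain parser would extend `BaseOutputParser`; this standalone version (with an illustrative comma-separated-list format) just shows the two required methods working together:

```typescript
// A minimal output parser: it tells the model how to format its
// answer, then turns the raw response into a structured value.
class CommaSeparatedListParser {
  // "Get format instructions": injected into the prompt.
  getFormatInstructions(): string {
    return "Your response should be a list of comma separated values, eg: `foo, bar, baz`";
  }

  // "Parse": raw model response -> structured output.
  parse(text: string): string[] {
    return text.split(",").map((item) => item.trim());
  }
}

const listParser = new CommaSeparatedListParser();
console.log(listParser.parse("Paris, London, Berlin"));
// [ 'Paris', 'London', 'Berlin' ]
```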
The prompt is largely provided in the event the output parser wants to retry or fix the output in some way, and needs information from the prompt to do so.

## Get started

Below we go over one useful type of output parser, the `StructuredOutputParser`.

### Structured Output Parser

This output parser can be used when you want to return multiple fields. If you want a complex schema returned (i.e. a JSON object with arrays of strings), use the Zod schema detailed below.

````typescript
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { StructuredOutputParser } from "langchain/output_parsers";

// With a `StructuredOutputParser` we can define a schema for the output.
const parser = StructuredOutputParser.fromNamesAndDescriptions({
  answer: "answer to the user's question",
  source: "source used to answer the user's question, should be a website.",
});

const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template:
    "Answer the users question as best as possible.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  question: "What is the capital of France?",
});
const response = await model.call(input);

console.log(input);
/*
Answer the users question as best as possible.
You must format your output as a JSON value that adheres to a given "JSON Schema" instance.

"JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.

For example, the example "JSON Schema" instance {{"properties": {{"foo": {{"description": "a list of test words", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
would match an object with one required property, "foo". The "type" property specifies "foo" must be an "array", and the "description" property semantically describes it as "a list of test words". The items within "foo" must be strings.
Thus, the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of this example "JSON Schema". The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Your output will be parsed and type-checked according to the provided schema instance, so make sure all fields in your output match the schema exactly and there are no trailing commas!

Here is the JSON Schema instance your output must adhere to. Include the enclosing markdown codeblock:
```json
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website."}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```

What is the capital of France?
*/

console.log(response);
// {"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}

console.log(await parser.parse(response));
// { answer: 'Paris', source: 'https://en.wikipedia.org/wiki/Paris' }
````

API Reference: `OpenAI` from `langchain/llms/openai`, `PromptTemplate` from `langchain/prompts`, `StructuredOutputParser` from `langchain/output_parsers`

### Structured Output Parser with Zod Schema

This output parser can also be used when you want to define the output schema using Zod, a TypeScript validation library. The Zod schema passed in needs to be parseable from a JSON string, so e.g. `z.date()` is not allowed.

````typescript
import { z } from "zod";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { StructuredOutputParser } from "langchain/output_parsers";

// We can use zod to define a schema for the output using the
// `fromZodSchema` method of `StructuredOutputParser`.
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z.string().describe("answer to the user's question"),
    sources: z
      .array(z.string())
      .describe("sources used to answer the question, should be websites."),
  })
);

const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template:
    "Answer the users question as best as possible.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  question: "What is the capital of France?",
});
const response = await model.call(input);

console.log(input);
/*
Answer the users question as best as possible.
The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Here is the output schema:
```
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"sources":{"type":"array","items":{"type":"string"},"description":"sources used to answer the question, should be websites."}},"required":["answer","sources"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```

What is the capital of France?
*/

console.log(response);
// {"answer": "Paris", "sources": ["https://en.wikipedia.org/wiki/Paris"]}

console.log(await parser.parse(response));
// { answer: 'Paris', sources: [ 'https://en.wikipedia.org/wiki/Paris' ] }
````

API Reference: `OpenAI` from `langchain/llms/openai`, `PromptTemplate` from `langchain/prompts`, `StructuredOutputParser` from `langchain/output_parsers`

Copyright © 2023 LangChain, Inc.
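Since the format instructions ask the model to wrap its JSON in a markdown code block, `parse()` has to recover that payload and check it against the schema. A standalone sketch of that step, under the assumption that the output looks like the examples above (the regex, the helper name `parseStructured`, and the error message are illustrative, not langchain's actual implementation):

```typescript
// Extract a JSON payload from a markdown code block and verify the
// required fields are present. Illustrative helper, not langchain code.
function parseStructured(
  text: string,
  required: string[]
): Record<string, unknown> {
  // Prefer a fenced json block; fall back to treating the whole
  // response as raw JSON.
  const match = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  const payload = match ? match[1] : text;
  const obj = JSON.parse(payload.trim()) as Record<string, unknown>;
  for (const key of required) {
    if (!(key in obj)) {
      throw new Error(`Missing required field "${key}" in model output`);
    }
  }
  return obj;
}

const modelOutput =
  '```json\n{"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}\n```';
console.log(parseStructured(modelOutput, ["answer", "source"]));
// { answer: 'Paris', source: 'https://en.wikipedia.org/wiki/Paris' }
```

This is also why the instructions warn against trailing commas: a response that is not strict JSON makes `JSON.parse` throw, which is the failure mode the auto-fixing parser (covered elsewhere in the How-to section) exists to recover from.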
Get startedIntroductionInstallationQuickstartModulesModel I/OPromptsLanguage modelsOutput parsersHow-toBytes output parserCombining output parsersList parserCustom list parserAuto-fixing parserString output parserStructured output parserData connectionChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesModel I/OOutput parsersOn this pageOutput parsersLanguage models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:"Get format instructions": A method which returns a string containing instructions for how the output of a language model should be formatted. "Parse": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.And then one optional one:"Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to the prompt that generated such a response) and parses it into some structure. |
4e9727215e95-609 | The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.Get startedBelow we go over one useful type of output parser, the StructuredOutputParser.Structured Output ParserThis output parser can be used when you want to return multiple fields. If you want complex schema returned (i.e. a JSON object with arrays of strings), use the Zod Schema detailed below.import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";import { StructuredOutputParser } from "langchain/output_parsers";// With a `StructuredOutputParser` we can define a schema for the output.const parser = StructuredOutputParser.fromNamesAndDescriptions({ answer: "answer to the user's question", source: "source used to answer the user's question, should be a website. ",});const formatInstructions = parser.getFormatInstructions();const prompt = new PromptTemplate({ template: "Answer the users question as best as possible.\n{format_instructions}\n{question}", inputVariables: ["question"], partialVariables: { format_instructions: formatInstructions },});const model = new OpenAI({ temperature: 0 });const input = await prompt.format({ question: "What is the capital of France? ",});const response = await model.call(input);console.log(input);/*Answer the users question as best as possible.You must format your output as a JSON value that adheres to a given "JSON Schema" instance. |
4e9727215e95-610 | "JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.For example, the example "JSON Schema" instance {{"properties": {{"foo": {{"description": "a list of test words", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}would match an object with one required property, "foo". The "type" property specifies "foo" must be an "array", and the "description" property semantically describes it as "a list of test words". The items within "foo" must be strings.Thus, the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of this example "JSON Schema". The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.Your output will be parsed and type-checked according to the provided schema instance, so make sure all fields in your output match the schema exactly and there are no trailing commas!Here is the JSON Schema instance your output must adhere to. Include the enclosing markdown codeblock:```json{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website. "}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}```What is the capital of France? |
4e9727215e95-611 | /console.log(response);/*{"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}*/console.log(await parser.parse(response));// { answer: 'Paris', source: 'https://en.wikipedia.org/wiki/Paris' }API Reference:OpenAI from langchain/llms/openaiPromptTemplate from langchain/promptsStructuredOutputParser from langchain/output_parsersStructured Output Parser with Zod SchemaThis output parser can be also be used when you want to define the output schema using Zod, a TypeScript validation library. The Zod schema passed in needs be parseable from a JSON string, so eg. z.date() is not allowed.import { z } from "zod";import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";import { StructuredOutputParser } from "langchain/output_parsers";// We can use zod to define a schema for the output using the `fromZodSchema` method of `StructuredOutputParser`.const parser = StructuredOutputParser.fromZodSchema( z.object({ answer: z.string().describe("answer to the user's question"), sources: z .array(z.string()) .describe("sources used to answer the question, should be websites. |
4e9727215e95-612 | "), }));const formatInstructions = parser.getFormatInstructions();const prompt = new PromptTemplate({ template: "Answer the users question as best as possible.\n{format_instructions}\n{question}", inputVariables: ["question"], partialVariables: { format_instructions: formatInstructions },});const model = new OpenAI({ temperature: 0 });const input = await prompt.format({ question: "What is the capital of France? ",});const response = await model.call(input);console.log(input);/*Answer the users question as best as possible.The output should be formatted as a JSON instance that conforms to the JSON schema below.As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.Here is the output schema:```{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"sources":{"type":"array","items":{"type":"string"},"description":"sources used to answer the question, should be websites. "}},"required":["answer","sources"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}```What is the capital of France? |
4e9727215e95-613 | /console.log(response);/*{"answer": "Paris", "sources": ["https://en.wikipedia.org/wiki/Paris"]}*/console.log(await parser.parse(response));/*{ answer: 'Paris', sources: [ 'https://en.wikipedia.org/wiki/Paris' ] }*/API Reference:OpenAI from langchain/llms/openaiPromptTemplate from langchain/promptsStructuredOutputParser from langchain/output_parsersPreviousPromptLayer OpenAINextUse with LLMChainsGet started
Get startedIntroductionInstallationQuickstartModulesModel I/OPromptsLanguage modelsOutput parsersHow-toBytes output parserCombining output parsersList parserCustom list parserAuto-fixing parserString output parserStructured output parserData connectionChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI reference |
4e9727215e95-614 | ModulesModel I/OOutput parsersOn this pageOutput parsersLanguage models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:"Get format instructions": A method which returns a string containing instructions for how the output of a language model should be formatted. "Parse": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.And then one optional one:"Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.Get startedBelow we go over one useful type of output parser, the StructuredOutputParser.Structured Output ParserThis output parser can be used when you want to return multiple fields. If you want complex schema returned (i.e. |
4e9727215e95-615 | a JSON object with arrays of strings), use the Zod Schema detailed below.import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";import { StructuredOutputParser } from "langchain/output_parsers";// With a `StructuredOutputParser` we can define a schema for the output.const parser = StructuredOutputParser.fromNamesAndDescriptions({ answer: "answer to the user's question", source: "source used to answer the user's question, should be a website. ",});const formatInstructions = parser.getFormatInstructions();const prompt = new PromptTemplate({ template: "Answer the users question as best as possible.\n{format_instructions}\n{question}", inputVariables: ["question"], partialVariables: { format_instructions: formatInstructions },});const model = new OpenAI({ temperature: 0 });const input = await prompt.format({ question: "What is the capital of France? ",});const response = await model.call(input);console.log(input);/*Answer the users question as best as possible.You must format your output as a JSON value that adheres to a given "JSON Schema" instance. "JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.For example, the example "JSON Schema" instance {{"properties": {{"foo": {{"description": "a list of test words", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}would match an object with one required property, "foo". |
4e9727215e95-616 | The "type" property specifies "foo" must be an "array", and the "description" property semantically describes it as "a list of test words". The items within "foo" must be strings.Thus, the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of this example "JSON Schema". The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.Your output will be parsed and type-checked according to the provided schema instance, so make sure all fields in your output match the schema exactly and there are no trailing commas!Here is the JSON Schema instance your output must adhere to. Include the enclosing markdown codeblock:```json{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website. "}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}```What is the capital of France? |
4e9727215e95-617 | /console.log(response);/*{"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}*/console.log(await parser.parse(response));// { answer: 'Paris', source: 'https://en.wikipedia.org/wiki/Paris' }API Reference:OpenAI from langchain/llms/openaiPromptTemplate from langchain/promptsStructuredOutputParser from langchain/output_parsersStructured Output Parser with Zod SchemaThis output parser can be also be used when you want to define the output schema using Zod, a TypeScript validation library. The Zod schema passed in needs be parseable from a JSON string, so eg. z.date() is not allowed.import { z } from "zod";import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";import { StructuredOutputParser } from "langchain/output_parsers";// We can use zod to define a schema for the output using the `fromZodSchema` method of `StructuredOutputParser`.const parser = StructuredOutputParser.fromZodSchema( z.object({ answer: z.string().describe("answer to the user's question"), sources: z .array(z.string()) .describe("sources used to answer the question, should be websites. |
4e9727215e95-618 | "), }));const formatInstructions = parser.getFormatInstructions();const prompt = new PromptTemplate({ template: "Answer the users question as best as possible.\n{format_instructions}\n{question}", inputVariables: ["question"], partialVariables: { format_instructions: formatInstructions },});const model = new OpenAI({ temperature: 0 });const input = await prompt.format({ question: "What is the capital of France? ",});const response = await model.call(input);console.log(input);/*Answer the users question as best as possible.The output should be formatted as a JSON instance that conforms to the JSON schema below.As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.Here is the output schema:```{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"sources":{"type":"array","items":{"type":"string"},"description":"sources used to answer the question, should be websites. "}},"required":["answer","sources"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}```What is the capital of France? |
4e9727215e95-619 | /console.log(response);/*{"answer": "Paris", "sources": ["https://en.wikipedia.org/wiki/Paris"]}*/console.log(await parser.parse(response));/*{ answer: 'Paris', sources: [ 'https://en.wikipedia.org/wiki/Paris' ] }*/API Reference:OpenAI from langchain/llms/openaiPromptTemplate from langchain/promptsStructuredOutputParser from langchain/output_parsersPreviousPromptLayer OpenAINextUse with LLMChainsGet started
ModulesModel I/OOutput parsersOn this pageOutput parsersLanguage models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:"Get format instructions": A method which returns a string containing instructions for how the output of a language model should be formatted. "Parse": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.And then one optional one:"Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.Get startedBelow we go over one useful type of output parser, the StructuredOutputParser.Structured Output ParserThis output parser can be used when you want to return multiple fields. If you want complex schema returned (i.e. |
4e9727215e95-620 | a JSON object with arrays of strings), use the Zod Schema detailed below.import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";import { StructuredOutputParser } from "langchain/output_parsers";// With a `StructuredOutputParser` we can define a schema for the output.const parser = StructuredOutputParser.fromNamesAndDescriptions({ answer: "answer to the user's question", source: "source used to answer the user's question, should be a website. ",});const formatInstructions = parser.getFormatInstructions();const prompt = new PromptTemplate({ template: "Answer the users question as best as possible.\n{format_instructions}\n{question}", inputVariables: ["question"], partialVariables: { format_instructions: formatInstructions },});const model = new OpenAI({ temperature: 0 });const input = await prompt.format({ question: "What is the capital of France? ",});const response = await model.call(input);console.log(input);/*Answer the users question as best as possible.You must format your output as a JSON value that adheres to a given "JSON Schema" instance. "JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.For example, the example "JSON Schema" instance {{"properties": {{"foo": {{"description": "a list of test words", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}would match an object with one required property, "foo". |
4e9727215e95-621 | The "type" property specifies "foo" must be an "array", and the "description" property semantically describes it as "a list of test words". The items within "foo" must be strings.Thus, the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of this example "JSON Schema". The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.Your output will be parsed and type-checked according to the provided schema instance, so make sure all fields in your output match the schema exactly and there are no trailing commas!Here is the JSON Schema instance your output must adhere to. Include the enclosing markdown codeblock:```json{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website. "}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}```What is the capital of France? |
4e9727215e95-622 | /console.log(response);/*{"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}*/console.log(await parser.parse(response));// { answer: 'Paris', source: 'https://en.wikipedia.org/wiki/Paris' }API Reference:OpenAI from langchain/llms/openaiPromptTemplate from langchain/promptsStructuredOutputParser from langchain/output_parsersStructured Output Parser with Zod SchemaThis output parser can be also be used when you want to define the output schema using Zod, a TypeScript validation library. The Zod schema passed in needs be parseable from a JSON string, so eg. z.date() is not allowed.import { z } from "zod";import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";import { StructuredOutputParser } from "langchain/output_parsers";// We can use zod to define a schema for the output using the `fromZodSchema` method of `StructuredOutputParser`.const parser = StructuredOutputParser.fromZodSchema( z.object({ answer: z.string().describe("answer to the user's question"), sources: z .array(z.string()) .describe("sources used to answer the question, should be websites. |
4e9727215e95-623 | "), }));const formatInstructions = parser.getFormatInstructions();const prompt = new PromptTemplate({ template: "Answer the users question as best as possible.\n{format_instructions}\n{question}", inputVariables: ["question"], partialVariables: { format_instructions: formatInstructions },});const model = new OpenAI({ temperature: 0 });const input = await prompt.format({ question: "What is the capital of France? ",});const response = await model.call(input);console.log(input);/*Answer the users question as best as possible.The output should be formatted as a JSON instance that conforms to the JSON schema below.As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.Here is the output schema:```{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"sources":{"type":"array","items":{"type":"string"},"description":"sources used to answer the question, should be websites. "}},"required":["answer","sources"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}```What is the capital of France? |
4e9727215e95-624 | /console.log(response);/*{"answer": "Paris", "sources": ["https://en.wikipedia.org/wiki/Paris"]}*/console.log(await parser.parse(response));/*{ answer: 'Paris', sources: [ 'https://en.wikipedia.org/wiki/Paris' ] }*/API Reference:OpenAI from langchain/llms/openaiPromptTemplate from langchain/promptsStructuredOutputParser from langchain/output_parsersPreviousPromptLayer OpenAINextUse with LLMChains
Output parsers

Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.

Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:

- "Get format instructions": A method which returns a string containing instructions for how the output of a language model should be formatted.
- "Parse": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.

And then one optional one:

- "Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.

Get started

Below we go over one useful type of output parser, the StructuredOutputParser.

Structured Output Parser

This output parser can be used when you want to return multiple fields. If you want a complex schema returned (i.e. a JSON object with arrays of strings), use the Zod schema detailed below.

import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { StructuredOutputParser } from "langchain/output_parsers";

// With a `StructuredOutputParser` we can define a schema for the output.
const parser = StructuredOutputParser.fromNamesAndDescriptions({
  answer: "answer to the user's question",
  source: "source used to answer the user's question, should be a website.",
});

const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template:
    "Answer the user's question as best as possible.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  question: "What is the capital of France?",
});
const response = await model.call(input);

console.log(input);
/*
Answer the user's question as best as possible.
You must format your output as a JSON value that adheres to a given "JSON Schema" instance.

"JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.

For example, the example "JSON Schema" instance {{"properties": {{"foo": {{"description": "a list of test words", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
would match an object with one required property, "foo". The "type" property specifies "foo" must be an "array", and the "description" property semantically describes it as "a list of test words". The items within "foo" must be strings.
Thus, the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of this example "JSON Schema". The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Your output will be parsed and type-checked according to the provided schema instance, so make sure all fields in your output match the schema exactly and there are no trailing commas!

Here is the JSON Schema instance your output must adhere to. Include the enclosing markdown codeblock:
```json
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website."}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```

What is the capital of France?
*/

console.log(response);
// {"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}

console.log(await parser.parse(response));
// { answer: 'Paris', source: 'https://en.wikipedia.org/wiki/Paris' }

API Reference: OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts, StructuredOutputParser from langchain/output_parsers

Structured Output Parser with Zod Schema

This output parser can also be used when you want to define the output schema using Zod, a TypeScript validation library. The Zod schema passed in needs to be parseable from a JSON string, so e.g. z.date() is not allowed.
import { z } from "zod";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { StructuredOutputParser } from "langchain/output_parsers";

// We can use zod to define a schema for the output using the
// `fromZodSchema` method of `StructuredOutputParser`.
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z.string().describe("answer to the user's question"),
    sources: z
      .array(z.string())
      .describe("sources used to answer the question, should be websites."),
  })
);

const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template:
    "Answer the user's question as best as possible.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  question: "What is the capital of France?",
});
const response = await model.call(input);

console.log(input);
/*
Answer the user's question as best as possible.
The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Here is the output schema:
```
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"sources":{"type":"array","items":{"type":"string"},"description":"sources used to answer the question, should be websites."}},"required":["answer","sources"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```

What is the capital of France?
*/

console.log(response);
// {"answer": "Paris", "sources": ["https://en.wikipedia.org/wiki/Paris"]}

console.log(await parser.parse(response));
// { answer: 'Paris', sources: [ 'https://en.wikipedia.org/wiki/Paris' ] }

API Reference: OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts, StructuredOutputParser from langchain/output_parsers
Use with LLMChains
For convenience, you can add an output parser to an LLMChain. This will automatically call .parse() on the output.
Don't forget to put the formatting instructions in the prompt!
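Concretely, "putting the formatting instructions in the prompt" is just variable substitution into the template. A minimal sketch without LangChain (the `formatPrompt` helper is hypothetical, for illustration only):

```typescript
// Hypothetical helper for illustration: substitutes {name} placeholders,
// mirroring what PromptTemplate's partialVariables/format do for
// {format_instructions} and {query}.
function formatPrompt(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (whole, name) => vars[name] ?? whole);
}

const template =
  "Answer the user's question as best you can:\n{format_instructions}\n{query}";
const prompt = formatPrompt(template, {
  format_instructions: "Respond with a JSON array of country records.",
  query: "List 5 countries.",
});
console.log(prompt);
```

If the instructions variable is left out, the model never sees the schema and the parser has nothing well-formed to parse — hence the warning above.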
import { z } from "zod";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";
import {
  StructuredOutputParser,
  OutputFixingParser,
} from "langchain/output_parsers";

const outputParser = StructuredOutputParser.fromZodSchema(
  z
    .array(
      z.object({
        fields: z.object({
          Name: z.string().describe("The name of the country"),
          Capital: z.string().describe("The country's capital"),
        }),
      })
    )
    .describe("An array of Airtable records, each representing a country")
);

const chatModel = new ChatOpenAI({
  modelName: "gpt-4", // Or gpt-3.5-turbo
  temperature: 0, // For best results with the output fixing parser
});

const outputFixingParser = OutputFixingParser.fromLLM(chatModel, outputParser);

// Don't forget to include formatting instructions in the prompt!
const prompt = new PromptTemplate({
  template: `Answer the user's question as best you can:\n{format_instructions}\n{query}`,
  inputVariables: ["query"],
  partialVariables: {
    format_instructions: outputFixingParser.getFormatInstructions(),
  },
});

const answerFormattingChain = new LLMChain({
  llm: chatModel,
  prompt,
  // For readability - otherwise the chain output will default to a property named "text"
  outputKey: "records",
  outputParser: outputFixingParser,
});

const result = await answerFormattingChain.call({
  query: "List 5 countries.",
});
console.log(JSON.stringify(result.records, null, 2));
/*
[
  { "fields": { "Name": "United States", "Capital": "Washington, D.C." } },
  { "fields": { "Name": "Canada", "Capital": "Ottawa" } },
  { "fields": { "Name": "Germany", "Capital": "Berlin" } },
  { "fields": { "Name": "Japan", "Capital": "Tokyo" } },
  { "fields": { "Name": "Australia", "Capital": "Canberra" } }
]
*/
API Reference: ChatOpenAI from langchain/chat_models/openai, PromptTemplate from langchain/prompts, LLMChain from langchain/chains, StructuredOutputParser from langchain/output_parsers, OutputFixingParser from langchain/output_parsers
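What the chain does with the parser can be sketched in a few lines: call the model, then apply .parse() to the raw text before returning. The sketch below uses a stubbed model and hypothetical names (`runChain`, `OutputParser`) rather than LangChain's real internals:

```typescript
// Hypothetical sketch of the parser-in-chain pattern: the chain invokes
// the model, then runs the output parser on the raw text automatically.
interface OutputParser<T> {
  parse(text: string): T;
}

async function runChain<T>(
  callModel: (prompt: string) => Promise<string>,
  parser: OutputParser<T>,
  prompt: string
): Promise<T> {
  const raw = await callModel(prompt); // model returns plain text
  return parser.parse(raw); // the chain applies the parser for you
}

// Stubbed "model" and a trivial list parser, for demonstration:
const stubModel = async (_prompt: string) => "Canada, Japan, Germany";
const listParser: OutputParser<string[]> = {
  parse: (text) => text.split(",").map((s) => s.trim()),
};

const records = await runChain(stubModel, listParser, "List 3 countries.");
console.log(records); // [ 'Canada', 'Japan', 'Germany' ]
```

This is why the chain's output arrives already structured (the `records` property above) instead of as a raw `text` string.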
Bytes output parser
The BytesOutputParser takes language model output (either an entire response or as a stream) and converts it into binary data. This is particularly useful for streaming output to the frontend from a server.

This output parser can act as a transform stream and work with streamed response chunks from a model.

Usage
import { ChatOpenAI } from "langchain/chat_models/openai";
import { BytesOutputParser } from "langchain/schema/output_parser";

const parser = new BytesOutputParser();
const model = new ChatOpenAI({ temperature: 0 });

const stream = await model.pipe(parser).stream("Hello there!");

const decoder = new TextDecoder();
for await (const chunk of stream) {
  if (chunk) {
    console.log(decoder.decode(chunk));
  }
}
API Reference: ChatOpenAI from langchain/chat_models/openai, BytesOutputParser from langchain/schema/output_parser
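"Transform stream" here means: string chunks in, UTF-8 byte chunks out. A minimal sketch of that encoding step using the standard web TransformStream, TextEncoder, and TextDecoder APIs (no LangChain involved; this illustrates the idea, not BytesOutputParser's internals):

```typescript
// String chunks go in; UTF-8 encoded Uint8Array chunks come out --
// the essence of a bytes output parser acting as a transform stream.
const encoder = new TextEncoder();
const toBytes = new TransformStream<string, Uint8Array>({
  transform(chunk, controller) {
    controller.enqueue(encoder.encode(chunk));
  },
});

const writer = toBytes.writable.getWriter();
const reader = toBytes.readable.getReader();

// Start the write, then read, to avoid stalling on backpressure.
const writeDone = writer.write("Hello there!");
const { value } = await reader.read();
await writeDone;

console.log(value instanceof Uint8Array); // true
console.log(new TextDecoder().decode(value)); // "Hello there!"
```

Because the output is already bytes, it can be returned directly as an HTTP response body and streamed to the browser without re-encoding.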
Combining output parsers
Output parsers can be combined using CombiningOutputParser. This output parser takes in a list of output parsers, and will ask for (and parse) a combined output that contains all the fields of all the parsers.

import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import {
  StructuredOutputParser,
  RegexParser,
  CombiningOutputParser,
} from "langchain/output_parsers";

const answerParser = StructuredOutputParser.fromNamesAndDescriptions({
  answer: "answer to the user's question",
  source: "source used to answer the user's question, should be a website.",
});

const confidenceParser = new RegexParser(
  /Confidence: (A|B|C), Explanation: (.*)/,
  ["confidence", "explanation"],
  "noConfidence"
);

const parser = new CombiningOutputParser(answerParser, confidenceParser);
const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template:
    "Answer the user's question as best as possible.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  question: "What is the capital of France?",
});
const response = await model.call(input);

console.log(input);
/*
Answer the user's question as best as possible.
Return the following outputs, each formatted as described below:

Output 1:
The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Here is the output schema:
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website."}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}

Output 2:
Your response should match the following regex: /Confidence: (A|B|C), Explanation: (.*)/

What is the capital of France?
*/

console.log(response);
/*
Output 1:
{"answer":"Paris","source":"https://www.worldatlas.com/articles/what-is-the-capital-of-france.html"}
Output 2:
Confidence: A, Explanation: The capital of France is Paris.
*/

console.log(await parser.parse(response));
/*
{
  answer: 'Paris',
  source: 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html',
  confidence: 'A',
  explanation: 'The capital of France is Paris.'
}
*/

API Reference: OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts, StructuredOutputParser from langchain/output_parsers, RegexParser from langchain/output_parsers, CombiningOutputParser from langchain/output_parsers
4e9727215e95-651 | Get startedIntroductionInstallationQuickstartModulesModel I/OPromptsLanguage modelsOutput parsersHow-toBytes output parserCombining output parsersList parserCustom list parserAuto-fixing parserString output parserStructured output parserData connectionChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesModel I/OOutput parsersCombining output parsersCombining output parsersOutput parsers can be combined using CombiningOutputParser. This output parser takes in a list of output parsers, and will ask for (and parse) a combined output that contains all the fields of all the parsers.import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";import { StructuredOutputParser, RegexParser, CombiningOutputParser,} from "langchain/output_parsers";const answerParser = StructuredOutputParser.fromNamesAndDescriptions({ answer: "answer to the user's question", source: "source used to answer the user's question, should be a website. ",});const confidenceParser = new RegexParser( /Confidence: (A|B|C), Explanation: (. |
4e9727215e95-652 | )/, ["confidence", "explanation"], "noConfidence");const parser = new CombiningOutputParser(answerParser, confidenceParser);const formatInstructions = parser.getFormatInstructions();const prompt = new PromptTemplate({ template: "Answer the users question as best as possible.\n{format_instructions}\n{question}", inputVariables: ["question"], partialVariables: { format_instructions: formatInstructions },});const model = new OpenAI({ temperature: 0 });const input = await prompt.format({ question: "What is the capital of France? ",});const response = await model.call(input);console.log(input);/*Answer the users question as best as possible.Return the following outputs, each formatted as described below:Output 1:The output should be formatted as a JSON instance that conforms to the JSON schema below.As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.Here is the output schema:```{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website. |
4e9727215e95-653 | "}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}```Output 2:Your response should match the following regex: /Confidence: (A|B|C), Explanation: (. *)/What is the capital of France? */console.log(response);/*Output 1:{"answer":"Paris","source":"https://www.worldatlas.com/articles/what-is-the-capital-of-france.html"}Output 2:Confidence: A, Explanation: The capital of France is Paris. */console.log(await parser.parse(response));/*{ answer: 'Paris', source: 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html', confidence: 'A', explanation: 'The capital of France is Paris. '}*/API Reference:OpenAI from langchain/llms/openaiPromptTemplate from langchain/promptsStructuredOutputParser from langchain/output_parsersRegexParser from langchain/output_parsersCombiningOutputParser from langchain/output_parsersPreviousBytes output parserNextList parser |
4e9727215e95-654 | ModulesModel I/OOutput parsersCombining output parsersCombining output parsersOutput parsers can be combined using CombiningOutputParser. This output parser takes in a list of output parsers, and will ask for (and parse) a combined output that contains all the fields of all the parsers.import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";import { StructuredOutputParser, RegexParser, CombiningOutputParser,} from "langchain/output_parsers";const answerParser = StructuredOutputParser.fromNamesAndDescriptions({ answer: "answer to the user's question", source: "source used to answer the user's question, should be a website. ",});const confidenceParser = new RegexParser( /Confidence: (A|B|C), Explanation: (. *)/, ["confidence", "explanation"], "noConfidence");const parser = new CombiningOutputParser(answerParser, confidenceParser);const formatInstructions = parser.getFormatInstructions();const prompt = new PromptTemplate({ template: "Answer the users question as best as possible.\n{format_instructions}\n{question}", inputVariables: ["question"], partialVariables: { format_instructions: formatInstructions },});const model = new OpenAI({ temperature: 0 });const input = await prompt.format({ question: "What is the capital of France? |
4e9727215e95-655 | ",});const response = await model.call(input);console.log(input);/*Answer the users question as best as possible.Return the following outputs, each formatted as described below:Output 1:The output should be formatted as a JSON instance that conforms to the JSON schema below.As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.Here is the output schema:```{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website. "}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}```Output 2:Your response should match the following regex: /Confidence: (A|B|C), Explanation: (. *)/What is the capital of France? */console.log(response);/*Output 1:{"answer":"Paris","source":"https://www.worldatlas.com/articles/what-is-the-capital-of-france.html"}Output 2:Confidence: A, Explanation: The capital of France is Paris. |
4e9727215e95-656 | /console.log(await parser.parse(response));/*{ answer: 'Paris', source: 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html', confidence: 'A', explanation: 'The capital of France is Paris. '}*/API Reference:OpenAI from langchain/llms/openaiPromptTemplate from langchain/promptsStructuredOutputParser from langchain/output_parsersRegexParser from langchain/output_parsersCombiningOutputParser from langchain/output_parsersPreviousBytes output parserNextList parser
Combining output parsersOutput parsers can be combined using CombiningOutputParser. This output parser takes in a list of output parsers, and will ask for (and parse) a combined output that contains all the fields of all the parsers.import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";import { StructuredOutputParser, RegexParser, CombiningOutputParser,} from "langchain/output_parsers";const answerParser = StructuredOutputParser.fromNamesAndDescriptions({ answer: "answer to the user's question", source: "source used to answer the user's question, should be a website. ",});const confidenceParser = new RegexParser( /Confidence: (A|B|C), Explanation: (. *)/, ["confidence", "explanation"], "noConfidence");const parser = new CombiningOutputParser(answerParser, confidenceParser);const formatInstructions = parser.getFormatInstructions();const prompt = new PromptTemplate({ template: "Answer the users question as best as possible.\n{format_instructions}\n{question}", inputVariables: ["question"], partialVariables: { format_instructions: formatInstructions },});const model = new OpenAI({ temperature: 0 });const input = await prompt.format({ question: "What is the capital of France? |
4e9727215e95-657 | ",});const response = await model.call(input);console.log(input);/*Answer the users question as best as possible.Return the following outputs, each formatted as described below:Output 1:The output should be formatted as a JSON instance that conforms to the JSON schema below.As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.Here is the output schema:```{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website. "}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}```Output 2:Your response should match the following regex: /Confidence: (A|B|C), Explanation: (. *)/What is the capital of France? */console.log(response);/*Output 1:{"answer":"Paris","source":"https://www.worldatlas.com/articles/what-is-the-capital-of-france.html"}Output 2:Confidence: A, Explanation: The capital of France is Paris. |
4e9727215e95-658 | /console.log(await parser.parse(response));/*{ answer: 'Paris', source: 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html', confidence: 'A', explanation: 'The capital of France is Paris. '}*/API Reference:OpenAI from langchain/llms/openaiPromptTemplate from langchain/promptsStructuredOutputParser from langchain/output_parsersRegexParser from langchain/output_parsersCombiningOutputParser from langchain/output_parsers
Output parsers can be combined using CombiningOutputParser. This output parser takes in a list of output parsers, and will ask for (and parse) a combined output that contains all the fields of all the parsers.
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import {
  StructuredOutputParser,
  RegexParser,
  CombiningOutputParser,
} from "langchain/output_parsers";

const answerParser = StructuredOutputParser.fromNamesAndDescriptions({
  answer: "answer to the user's question",
  source: "source used to answer the user's question, should be a website.",
});

const confidenceParser = new RegexParser(
  /Confidence: (A|B|C), Explanation: (.*)/,
  ["confidence", "explanation"],
  "noConfidence"
);

const parser = new CombiningOutputParser(answerParser, confidenceParser);
const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template:
    "Answer the users question as best as possible.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  question: "What is the capital of France?",
});
const response = await model.call(input);

console.log(input);
/*
Answer the users question as best as possible.
Return the following outputs, each formatted as described below:

Output 1:
The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Here is the output schema:
```
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website."}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```

Output 2:
Your response should match the following regex: /Confidence: (A|B|C), Explanation: (.*)/

What is the capital of France?
*/

console.log(response);
/*
Output 1:
{"answer":"Paris","source":"https://www.worldatlas.com/articles/what-is-the-capital-of-france.html"}

Output 2:
Confidence: A, Explanation: The capital of France is Paris.
*/

console.log(await parser.parse(response));
/*
{
  answer: 'Paris',
  source: 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html',
  confidence: 'A',
  explanation: 'The capital of France is Paris.'
}
*/
API Reference: OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts, StructuredOutputParser from langchain/output_parsers, RegexParser from langchain/output_parsers, CombiningOutputParser from langchain/output_parsers
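To make the combining step concrete, here is a minimal standalone sketch of the idea, not LangChain's actual implementation: split the model's response on its "Output N:" headers, hand each section to the matching parser, and merge the parsed fields into one object. All function names here (`combineParse`, `jsonParser`, `regexParser`) are hypothetical.

```typescript
// Hypothetical sketch of a combining parser's "parse" step.
// Each section parser turns one chunk of text into a field map.
type SectionParser = (text: string) => Record<string, string>;

function combineParse(
  response: string,
  parsers: SectionParser[]
): Record<string, string> {
  // Split on "Output 1:", "Output 2:", ... headers; drop the text before the first header.
  const sections = response.split(/Output \d+:/).slice(1);
  const result: Record<string, string> = {};
  sections.forEach((section, i) => {
    // Merge each parser's fields into the combined result.
    Object.assign(result, parsers[i](section.trim()));
  });
  return result;
}

// Parser for the JSON section.
const jsonParser: SectionParser = (text) => JSON.parse(text);

// Parser for the regex section.
const regexParser: SectionParser = (text) => {
  const match = text.match(/Confidence: (A|B|C), Explanation: (.*)/);
  if (!match) throw new Error("noConfidence");
  return { confidence: match[1], explanation: match[2] };
};

const response = `Output 1:
{"answer":"Paris","source":"https://example.com"}
Output 2:
Confidence: A, Explanation: The capital of France is Paris.`;

console.log(combineParse(response, [jsonParser, regexParser]));
// { answer: 'Paris', source: 'https://example.com',
//   confidence: 'A', explanation: 'The capital of France is Paris.' }
```

The real `CombiningOutputParser` also generates the combined format instructions shown above; this sketch only covers the parse-and-merge half.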
List parser
This output parser can be used when you want to return a list of comma-separated items. |
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { CommaSeparatedListOutputParser } from "langchain/output_parsers";

export const run = async () => {
  // With a `CommaSeparatedListOutputParser`, we can parse a comma separated list.
  const parser = new CommaSeparatedListOutputParser();
  const formatInstructions = parser.getFormatInstructions();

  const prompt = new PromptTemplate({
    template: "List five {subject}.\n{format_instructions}",
    inputVariables: ["subject"],
    partialVariables: { format_instructions: formatInstructions },
  });

  const model = new OpenAI({ temperature: 0 });

  const input = await prompt.format({ subject: "ice cream flavors" });
  const response = await model.call(input);

  console.log(input);
  /*
  List five ice cream flavors.
  Your response should be a list of comma separated values, eg: `foo, bar, baz`
  */

  console.log(response);
  // Vanilla, Chocolate, Strawberry, Mint Chocolate Chip, Cookies and Cream

  console.log(await parser.parse(response));
  /*
  [
    'Vanilla',
    'Chocolate',
    'Strawberry',
    'Mint Chocolate Chip',
    'Cookies and Cream'
  ]
  */
};
API Reference: OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts, CommaSeparatedListOutputParser from langchain/output_parsers
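The parse step itself is essentially a split-and-trim. A minimal standalone sketch of that logic (the function name is hypothetical; this mirrors, rather than reproduces, the library's implementation):

```typescript
// Sketch of comma-separated list parsing: split on commas and trim
// surrounding whitespace from each item.
function parseCommaSeparatedList(text: string): string[] {
  return text.trim().split(",").map((item) => item.trim());
}

console.log(parseCommaSeparatedList("Vanilla, Chocolate, Strawberry"));
// [ 'Vanilla', 'Chocolate', 'Strawberry' ]
```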
Custom list parser
This output parser can be used when you want to return a list of items with a specific length and separator. |
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { CustomListOutputParser } from "langchain/output_parsers";

// With a `CustomListOutputParser`, we can parse a list with a specific length and separator.
const parser = new CustomListOutputParser({ length: 3, separator: "\n" });
const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template: "Provide a list of {subject}.\n{format_instructions}",
  inputVariables: ["subject"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  subject: "great fiction books (book, author)",
});
const response = await model.call(input);

console.log(input);
/*
Provide a list of great fiction books (book, author).
Your response should be a list of 3 items separated by "\n" (eg: `foo\n bar\n baz`)
*/

console.log(response);
/*
The Catcher in the Rye, J.D. Salinger
To Kill a Mockingbird, Harper Lee
The Great Gatsby, F. Scott Fitzgerald
*/

console.log(await parser.parse(response));
/*
[
  'The Catcher in the Rye, J.D. Salinger',
  'To Kill a Mockingbird, Harper Lee',
  'The Great Gatsby, F. Scott Fitzgerald'
]
*/
API Reference: OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts, CustomListOutputParser from langchain/output_parsers
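Under stated assumptions, the custom-list parse step can be sketched as a split on the configured separator plus an optional length check; the function name is hypothetical and this is not the library's actual code:

```typescript
// Sketch of a custom list parse: split on a configurable separator
// and (optionally) enforce an expected item count.
function parseCustomList(
  text: string,
  separator: string,
  length?: number
): string[] {
  const items = text.trim().split(separator).map((s) => s.trim());
  if (length !== undefined && items.length !== length) {
    throw new Error(`Expected ${length} items, got ${items.length}`);
  }
  return items;
}

console.log(parseCustomList("a\nb\nc", "\n", 3));
// [ 'a', 'b', 'c' ]
```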
Auto-fixing parser
This output parser wraps another output parser, and in the event that the first one fails, it calls out to another LLM to fix any errors.

But we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the format instructions, to the model and ask it to fix it.

For this example, we'll use the structured output parser. Here's what happens if we pass it a result that does not comply with the schema:

import { z } from "zod";
import { ChatOpenAI } from "langchain/chat_models/openai";
import {
  StructuredOutputParser,
  OutputFixingParser,
} from "langchain/output_parsers";

export const run = async () => {
  const parser = StructuredOutputParser.fromZodSchema(
    z.object({
      answer: z.string().describe("answer to the user's question"),
      sources: z
        .array(z.string())
        .describe("sources used to answer the question, should be websites."),
    })
  );

  /** This is a bad output because sources is a string, not a list */
  const badOutput = `\`\`\`json
  {
    "answer": "foo",
    "sources": "foo.com"
  }
  \`\`\``;

  try {
    await parser.parse(badOutput);
  } catch (e) {
    console.log("Failed to parse bad output: ", e);
    /*
    Failed to parse bad output: OutputParserException [Error]: Failed to parse. Text: ```json
    {
      "answer": "foo",
      "sources": "foo.com"
    }
    ```. Error: [
      {
        "code": "invalid_type",
        "expected": "array",
        "received": "string",
        "path": ["sources"],
        "message": "Expected array, received string"
      }
    ]
        at StructuredOutputParser.parse (/Users/ankushgola/Code/langchainjs/langchain/src/output_parsers/structured.ts:71:13)
        at run (/Users/ankushgola/Code/langchainjs/examples/src/prompts/fix_parser.ts:25:18)
        at <anonymous> (/Users/ankushgola/Code/langchainjs/examples/src/index.ts:33:22)
    */
  }

  const fixParser = OutputFixingParser.fromLLM(
    new ChatOpenAI({ temperature: 0 }),
    parser
  );
  const output = await fixParser.parse(badOutput);
  console.log("Fixed output: ", output);
  // Fixed output: { answer: 'foo', sources: [ 'foo.com' ] }
};

API Reference: ChatOpenAI from langchain/chat_models/openai, StructuredOutputParser from langchain/output_parsers, OutputFixingParser from langchain/output_parsers
4e9727215e95-677 | Get startedIntroductionInstallationQuickstartModulesModel I/OPromptsLanguage modelsOutput parsersHow-toBytes output parserCombining output parsersList parserCustom list parserAuto-fixing parserString output parserStructured output parserData connectionChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesModel I/OOutput parsersAuto-fixing parserAuto-fixing parserThis output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors.But we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the formatted instructions, to the model and ask it to fix it.For this example, we'll use the structured output parser. Here's what happens if we pass it a result that does not comply with the schema:import { z } from "zod";import { ChatOpenAI } from "langchain/chat_models/openai";import { StructuredOutputParser, OutputFixingParser,} from "langchain/output_parsers";export const run = async () => { const parser = StructuredOutputParser.fromZodSchema( z.object({ answer: z.string().describe("answer to the user's question"), sources: z .array(z.string()) .describe("sources used to answer the question, should be websites. |
4e9727215e95-678 | "), }) ); /** This is a bad output because sources is a string, not a list */ const badOutput = `\`\`\`json { "answer": "foo", "sources": "foo.com" } \`\`\``; try { await parser.parse(badOutput); } catch (e) { console.log("Failed to parse bad output: ", e); /* Failed to parse bad output: OutputParserException [Error]: Failed to parse. Text: ```json { "answer": "foo", "sources": "foo.com" } ```. Error: [ { "code": "invalid_type", "expected": "array", "received": "string", "path": [ "sources" ], "message": "Expected array, received string" } ] at StructuredOutputParser.parse (/Users/ankushgola/Code/langchainjs/langchain/src/output_parsers/structured.ts:71:13) at run (/Users/ankushgola/Code/langchainjs/examples/src/prompts/fix_parser.ts:25:18) at <anonymous> (/Users/ankushgola/Code/langchainjs/examples/src/index.ts:33:22) */ } const fixParser = OutputFixingParser.fromLLM( new ChatOpenAI({ temperature: 0 }), parser ); const output = await fixParser.parse(badOutput); console.log("Fixed output: ", output); // Fixed output: { answer: 'foo', sources: [ 'foo.com' ] }};API Reference:ChatOpenAI from langchain/chat_models/openaiStructuredOutputParser from langchain/output_parsersOutputFixingParser from langchain/output_parsersPreviousCustom list parserNextString output parser |
4e9727215e95-679 | ModulesModel I/OOutput parsersAuto-fixing parserAuto-fixing parserThis output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors.But we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the formatted instructions, to the model and ask it to fix it.For this example, we'll use the structured output parser. Here's what happens if we pass it a result that does not comply with the schema:import { z } from "zod";import { ChatOpenAI } from "langchain/chat_models/openai";import { StructuredOutputParser, OutputFixingParser,} from "langchain/output_parsers";export const run = async () => { const parser = StructuredOutputParser.fromZodSchema( z.object({ answer: z.string().describe("answer to the user's question"), sources: z .array(z.string()) .describe("sources used to answer the question, should be websites. "), }) ); /** This is a bad output because sources is a string, not a list */ const badOutput = `\`\`\`json { "answer": "foo", "sources": "foo.com" } \`\`\``; try { await parser.parse(badOutput); } catch (e) { console.log("Failed to parse bad output: ", e); /* Failed to parse bad output: OutputParserException [Error]: Failed to parse. Text: ```json { "answer": "foo", "sources": "foo.com" } ```. |
4e9727215e95-680 | Error: [ { "code": "invalid_type", "expected": "array", "received": "string", "path": [ "sources" ], "message": "Expected array, received string" } ] at StructuredOutputParser.parse (/Users/ankushgola/Code/langchainjs/langchain/src/output_parsers/structured.ts:71:13) at run (/Users/ankushgola/Code/langchainjs/examples/src/prompts/fix_parser.ts:25:18) at <anonymous> (/Users/ankushgola/Code/langchainjs/examples/src/index.ts:33:22) */ } const fixParser = OutputFixingParser.fromLLM( new ChatOpenAI({ temperature: 0 }), parser ); const output = await fixParser.parse(badOutput); console.log("Fixed output: ", output); // Fixed output: { answer: 'foo', sources: [ 'foo.com' ] }};API Reference:ChatOpenAI from langchain/chat_models/openaiStructuredOutputParser from langchain/output_parsersOutputFixingParser from langchain/output_parsersPreviousCustom list parserNextString output parser |
4e9727215e95-681 | Auto-fixing parserThis output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors.But we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the formatted instructions, to the model and ask it to fix it.For this example, we'll use the structured output parser. Here's what happens if we pass it a result that does not comply with the schema:import { z } from "zod";import { ChatOpenAI } from "langchain/chat_models/openai";import { StructuredOutputParser, OutputFixingParser,} from "langchain/output_parsers";export const run = async () => { const parser = StructuredOutputParser.fromZodSchema( z.object({ answer: z.string().describe("answer to the user's question"), sources: z .array(z.string()) .describe("sources used to answer the question, should be websites. "), }) ); /** This is a bad output because sources is a string, not a list */ const badOutput = `\`\`\`json { "answer": "foo", "sources": "foo.com" } \`\`\``; try { await parser.parse(badOutput); } catch (e) { console.log("Failed to parse bad output: ", e); /* Failed to parse bad output: OutputParserException [Error]: Failed to parse. Text: ```json { "answer": "foo", "sources": "foo.com" } ```. |
4e9727215e95-682 | Error: [ { "code": "invalid_type", "expected": "array", "received": "string", "path": [ "sources" ], "message": "Expected array, received string" } ] at StructuredOutputParser.parse (/Users/ankushgola/Code/langchainjs/langchain/src/output_parsers/structured.ts:71:13) at run (/Users/ankushgola/Code/langchainjs/examples/src/prompts/fix_parser.ts:25:18) at <anonymous> (/Users/ankushgola/Code/langchainjs/examples/src/index.ts:33:22) */ } const fixParser = OutputFixingParser.fromLLM( new ChatOpenAI({ temperature: 0 }), parser ); const output = await fixParser.parse(badOutput); console.log("Fixed output: ", output); // Fixed output: { answer: 'foo', sources: [ 'foo.com' ] }};API Reference:ChatOpenAI from langchain/chat_models/openaiStructuredOutputParser from langchain/output_parsersOutputFixingParser from langchain/output_parsers
This output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors.
But we can do more than just throw errors: specifically, we can pass the misformatted output, along with the format instructions, back to the model and ask it to fix it.
For this example, we'll use the structured output parser. Here's what happens if we pass it a result that does not comply with the schema: |
import { z } from "zod";
import { ChatOpenAI } from "langchain/chat_models/openai";
import {
  StructuredOutputParser,
  OutputFixingParser,
} from "langchain/output_parsers";

export const run = async () => {
  const parser = StructuredOutputParser.fromZodSchema(
    z.object({
      answer: z.string().describe("answer to the user's question"),
      sources: z
        .array(z.string())
        .describe("sources used to answer the question, should be websites."),
    })
  );

  /** This is a bad output because sources is a string, not a list */
  const badOutput = `\`\`\`json
  {
    "answer": "foo",
    "sources": "foo.com"
  }
  \`\`\``;

  try {
    await parser.parse(badOutput);
  } catch (e) {
    console.log("Failed to parse bad output: ", e);
    /*
    Failed to parse bad output:  OutputParserException [Error]: Failed to parse. Text: ```json
    {
      "answer": "foo",
      "sources": "foo.com"
    }
    ```. Error: [
      {
        "code": "invalid_type",
        "expected": "array",
        "received": "string",
        "path": ["sources"],
        "message": "Expected array, received string"
      }
    ]
        at StructuredOutputParser.parse (/Users/ankushgola/Code/langchainjs/langchain/src/output_parsers/structured.ts:71:13)
        at run (/Users/ankushgola/Code/langchainjs/examples/src/prompts/fix_parser.ts:25:18)
        at <anonymous> (/Users/ankushgola/Code/langchainjs/examples/src/index.ts:33:22)
    */
  }

  const fixParser = OutputFixingParser.fromLLM(
    new ChatOpenAI({ temperature: 0 }),
    parser
  );
  const output = await fixParser.parse(badOutput);
  console.log("Fixed output: ", output);
  // Fixed output:  { answer: 'foo', sources: [ 'foo.com' ] }
};

API Reference: ChatOpenAI from langchain/chat_models/openai, StructuredOutputParser from langchain/output_parsers, OutputFixingParser from langchain/output_parsers
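The auto-fixing control flow can be sketched without any model in the loop. The helper below is a hypothetical stand-in for what an auto-fixing parser does conceptually: try the wrapped parser first, and only on failure route the bad text through a fixer (in the real library, an LLM call; here, a stub). `parseWithFix`, `parse`, and `fix` are illustrative names, not langchain APIs.

```typescript
// Hypothetical sketch of the auto-fixing control flow, with no LLM involved.
async function parseWithFix<T>(
  text: string,
  parse: (t: string) => Promise<T>,
  fix: (badText: string) => Promise<string>
): Promise<T> {
  try {
    return await parse(text);
  } catch {
    // First attempt failed: ask the fixer to repair the text, then re-parse.
    const repaired = await fix(text);
    return await parse(repaired);
  }
}

// A parser that requires `sources` to be an array, mirroring the schema above.
const parse = async (t: string) => {
  const obj = JSON.parse(t);
  if (!Array.isArray(obj.sources)) throw new Error("sources must be an array");
  return obj as { answer: string; sources: string[] };
};

// Stub standing in for the LLM: wrap a lone string source in an array.
const fix = async (bad: string) => {
  const obj = JSON.parse(bad);
  obj.sources = [obj.sources];
  return JSON.stringify(obj);
};

const result = await parseWithFix('{"answer":"foo","sources":"foo.com"}', parse, fix);
console.log(result);
// { answer: 'foo', sources: [ 'foo.com' ] }
```

Note that the fixer only runs on the failure path, so well-formed output incurs no extra cost — the same property that makes the LLM-backed version cheap in the common case.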
String output parser
The StringOutputParser takes language model output (either an entire response or a stream) and converts it into a string. This is useful for standardizing chat model and LLM output. This output parser can act as a transform stream and work with streamed response chunks from a model.
import { ChatOpenAI } from "langchain/chat_models/openai";
import { StringOutputParser } from "langchain/schema/output_parser";

const parser = new StringOutputParser();

const model = new ChatOpenAI({ temperature: 0 });

const stream = await model.pipe(parser).stream("Hello there!");

for await (const chunk of stream) {
  console.log(chunk);
}

API Reference: ChatOpenAI from langchain/chat_models/openai, StringOutputParser from langchain/schema/output_parser
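Because each streamed chunk is already a plain string, downstream code can concatenate the chunks back into the full response. The sketch below uses a hypothetical `fakeStream` async generator in place of `model.pipe(parser).stream(...)` so it runs without a model or API key:

```typescript
// `fakeStream` is a stand-in for the string chunks a streamed model response
// would yield after passing through StringOutputParser.
async function* fakeStream(): AsyncGenerator<string> {
  yield "Hello";
  yield " ";
  yield "there!";
}

// Accumulate an async iterable of string chunks into one string.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let full = "";
  for await (const chunk of stream) {
    full += chunk;
  }
  return full;
}

const text = await collect(fakeStream());
console.log(text);
// Hello there!
```

The same `for await` loop works against the real stream, since both expose the async-iterable protocol.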
Structured output parser
This output parser can be used when you want to return multiple fields. If you want a complex schema returned (i.e. a JSON object with arrays of strings), use the Zod schema detailed below.

import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { StructuredOutputParser } from "langchain/output_parsers";

// With a `StructuredOutputParser` we can define a schema for the output.
const parser = StructuredOutputParser.fromNamesAndDescriptions({
  answer: "answer to the user's question",
  source: "source used to answer the user's question, should be a website.",
});

const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template:
    "Answer the users question as best as possible.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  question: "What is the capital of France?",
});
const response = await model.call(input);

console.log(input);
/*
Answer the users question as best as possible.
You must format your output as a JSON value that adheres to a given "JSON Schema" instance. "JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.

For example, the example "JSON Schema" instance {{"properties": {{"foo": {{"description": "a list of test words", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}} would match an object with one required property, "foo". The "type" property specifies "foo" must be an "array", and the "description" property semantically describes it as "a list of test words". The items within "foo" must be strings. Thus, the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of this example "JSON Schema". The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Your output will be parsed and type-checked according to the provided schema instance, so make sure all fields in your output match the schema exactly and there are no trailing commas!

Here is the JSON Schema instance your output must adhere to. Include the enclosing markdown codeblock:
```json
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website."}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```

What is the capital of France?
*/

console.log(response);
// {"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}

console.log(await parser.parse(response));
// { answer: 'Paris', source: 'https://en.wikipedia.org/wiki/Paris' }

API Reference: OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts, StructuredOutputParser from langchain/output_parsers

Structured Output Parser with Zod Schema

This output parser can also be used when you want to define the output schema using Zod, a TypeScript validation library. The Zod schema passed in needs to be parseable from a JSON string, so e.g. z.date() is not allowed.

import { z } from "zod";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { StructuredOutputParser } from "langchain/output_parsers";

// We can use zod to define a schema for the output using the `fromZodSchema`
// method of `StructuredOutputParser`.
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z.string().describe("answer to the user's question"),
    sources: z
      .array(z.string())
      .describe("sources used to answer the question, should be websites."),
  })
);

const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template:
    "Answer the users question as best as possible.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  question: "What is the capital of France?",
});
const response = await model.call(input);

console.log(input);
/*
Answer the users question as best as possible.
The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}} the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Here is the output schema:
```
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"sources":{"type":"array","items":{"type":"string"},"description":"sources used to answer the question, should be websites."}},"required":["answer","sources"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```

What is the capital of France?
*/

console.log(response);
// {"answer": "Paris", "sources": ["https://en.wikipedia.org/wiki/Paris"]}

console.log(await parser.parse(response));
// { answer: 'Paris', sources: [ 'https://en.wikipedia.org/wiki/Paris' ] }

API Reference: OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts, StructuredOutputParser from langchain/output_parsers
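The parse step both examples rely on can be sketched in plain TypeScript. The helper below is a hypothetical illustration of what a structured parser does conceptually — strip the markdown code block the model was told to emit, JSON.parse the body, and check the required string fields — not the library's actual implementation:

```typescript
// Hypothetical sketch; `parseStructured` is not a langchain API.
function parseStructured(text: string): { answer: string; source: string } {
  // Pull the JSON body out of a ```json ... ``` block if present.
  const match = text.match(/```(?:json)?\s*([\s\S]*?)\s*```/);
  const body = match ? match[1] : text;
  const obj = JSON.parse(body);
  // Enforce the required string fields from the schema.
  for (const field of ["answer", "source"]) {
    if (typeof obj[field] !== "string") {
      throw new Error(`Missing or invalid required field: ${field}`);
    }
  }
  return obj;
}

const reply = '```json\n{"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}\n```';
console.log(parseStructured(reply));
// { answer: 'Paris', source: 'https://en.wikipedia.org/wiki/Paris' }
```

This is also why malformed model output (a missing field, a string where an array was expected) surfaces as a thrown error — which is exactly the failure mode the auto-fixing parser above is designed to recover from.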