Prompt templates

Language models take text as input - that text is commonly referred to as a prompt. Typically this is not simply a hardcoded string, but rather a combination of a template, some examples, and user input. LangChain provides several classes and functions to make constructing and working with prompts easy.

What is a prompt template?

A prompt template refers to a reproducible way to generate a prompt. It contains a text string ("the template") that can take in a set of parameters from the end user and generate a prompt.

A prompt template can contain:

- instructions to the language model,
- a set of few-shot examples to help the language model generate a better response,
- a question to the language model.

Here's a simple example:

import { PromptTemplate } from "langchain/prompts";

const prompt = PromptTemplate.fromTemplate<{ product: string }>(
  `You are a naming consultant for new companies.
What is a good name for a company that makes {product}?`
);

const formattedPrompt = await prompt.format({
  product: "colorful socks",
});

/*
You are a naming consultant for new companies.
What is a good name for a company that makes colorful socks?
*/

Create a prompt template

You can create simple hardcoded prompts using the PromptTemplate class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt.

import { PromptTemplate } from "langchain/prompts";

// An example prompt with no input variables
const noInputPrompt = new PromptTemplate({
  inputVariables: [],
  template: "Tell me a joke.",
});
const formattedNoInputPrompt = await noInputPrompt.format();
console.log(formattedNoInputPrompt);
// "Tell me a joke."

// An example prompt with one input variable
const oneInputPrompt = new PromptTemplate({
  inputVariables: ["adjective"],
  template: "Tell me a {adjective} joke.",
});
const formattedOneInputPrompt = await oneInputPrompt.format({
  adjective: "funny",
});
console.log(formattedOneInputPrompt);
// "Tell me a funny joke."

// An example prompt with multiple input variables
const multipleInputPrompt = new PromptTemplate({
  inputVariables: ["adjective", "content"],
  template: "Tell me a {adjective} joke about {content}.",
});
const formattedMultipleInputPrompt = await multipleInputPrompt.format({
  adjective: "funny",
  content: "chickens",
});
console.log(formattedMultipleInputPrompt);
// "Tell me a funny joke about chickens."

If you do not wish to specify inputVariables manually, you can also create a PromptTemplate using the fromTemplate class method. LangChain will automatically infer the inputVariables based on the template passed.

import { PromptTemplate } from "langchain/prompts";

const template = "Tell me a {adjective} joke about {content}.";
const promptTemplate = PromptTemplate.fromTemplate(template);
console.log(promptTemplate.inputVariables);
// ['adjective', 'content']
const formattedPromptTemplate = await promptTemplate.format({
  adjective: "funny",
  content: "chickens",
});
console.log(formattedPromptTemplate);
// "Tell me a funny joke about chickens."
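To make these two behaviors concrete - the variable substitution performed by format and the variable inference performed by fromTemplate - here is a minimal, self-contained sketch of how an f-string-style template engine could work. This is purely illustrative and not LangChain's actual implementation; the helper names inferVariables and formatFString are made up for this example:

// Hypothetical helpers, for illustration only - not part of LangChain.
// Escape sequences like {{ and }} are ignored for brevity.
function inferVariables(template: string): string[] {
  // Collect every distinct {name} placeholder, in order of first appearance.
  const names: string[] = [];
  for (const match of template.matchAll(/\{([^{}]+)\}/g)) {
    if (!names.includes(match[1])) names.push(match[1]);
  }
  return names;
}

function formatFString(template: string, values: Record<string, string>): string {
  // Replace each {name} placeholder with its value, failing loudly on gaps.
  return template.replace(/\{([^{}]+)\}/g, (_, name) => {
    if (!(name in values)) throw new Error(`Missing value for input variable: ${name}`);
    return values[name];
  });
}

console.log(inferVariables("Tell me a {adjective} joke about {content}."));
// ['adjective', 'content']
console.log(formatFString("Tell me a {adjective} joke about {content}.", {
  adjective: "funny",
  content: "chickens",
}));
// "Tell me a funny joke about chickens."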
Note: If you're using TypeScript, keep in mind that if you use .fromTemplate in this way, the compiler will not be able to automatically infer what inputs are required. To get around this, you can manually specify a type parameter like this:

const template = "Tell me a {adjective} joke about {content}.";
const promptTemplate = PromptTemplate.fromTemplate<{ adjective: string; content: string }>(template);

You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates.

Chat prompt template

Chat models take a list of chat messages as input - this list is commonly referred to as a prompt. These chat messages differ from the raw string you would pass into an LLM in that every message is associated with a role. For example, in the OpenAI Chat Completion API, a chat message can be associated with an AI, human, or system role. The model is supposed to follow instructions from a system message more closely.

LangChain provides several prompt templates to make constructing and working with prompts easy. You are encouraged to use these chat-related prompt templates instead of PromptTemplate when querying chat models, to fully explore the potential of the underlying chat model.

import {
  ChatPromptTemplate,
  PromptTemplate,
  SystemMessagePromptTemplate,
  AIMessagePromptTemplate,
  HumanMessagePromptTemplate,
} from "langchain/prompts";
import { AIMessage, HumanMessage, SystemMessage } from "langchain/schema";

To create a message template associated with a role, you use the corresponding <ROLE>MessagePromptTemplate. For convenience, there is a fromTemplate method exposed on these classes. If you were to use this template, this is what it would look like:

const template = "You are a helpful assistant that translates {input_language} to {output_language}.";
const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);
const humanTemplate = "{text}";
const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);
";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate externally and then pass it in, e.g. :const prompt = new PromptTemplate({ template: "You are a helpful assistant that translates {input_language} to {output_language}. ", inputVariables: ["input_language", "output_language"],});const systemMessagePrompt2 = new SystemMessagePromptTemplate({ prompt,});After that, you can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's formatPrompt method -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.const chatPrompt = ChatPromptTemplate.fromPromptMessages([ systemMessagePrompt, humanMessagePrompt]);// Format the messagesconst formattedChatPrompt = await chatPrompt.formatMessages({ input_language: "English", output_language: "French", text: "I love programming. ",});console.log(formattedChatPrompt);/* [ SystemMessage { content: 'You are a helpful assistant that translates English to French.' }, HumanMessage { content: 'I love programming.' } ]*/Note: Similarly to the PromptTemplate example, if using TypeScript, you can add typing to prompts created with .fromPromptMessages by passing a type parameter like this:const chatPrompt = ChatPromptTemplate.fromPromptMessages<{ input_language: string, output_language: string, text: string}>([ systemMessagePrompt, humanMessagePrompt]);PreviousPromptsNextPartial prompt templatesWhat is a prompt template?
4e9727215e95-204
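The formatMessages call above returns message objects directly. The formatPrompt method mentioned in the text instead returns a PromptValue that can be rendered either way. A minimal sketch of that distinction, assuming the toString and toChatMessages methods on the returned value:

const promptValue = await chatPrompt.formatPrompt({
  input_language: "English",
  output_language: "French",
  text: "I love programming.",
});

// Render as a single string, suitable for a plain LLM...
console.log(promptValue.toString());
// ...or as a list of chat messages, suitable for a chat model.
console.log(promptValue.toChatMessages());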
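The imports above also bring in AIMessagePromptTemplate, which none of the examples use. One common reason to reach for it is templating few-shot assistant turns. Here is a hedged sketch reusing the fromTemplate and fromPromptMessages APIs shown above, with made-up variable names for the example pair:

const exampleHuman = HumanMessagePromptTemplate.fromTemplate("{example_input}");
const exampleAi = AIMessagePromptTemplate.fromTemplate("{example_output}");

const fewShotChatPrompt = ChatPromptTemplate.fromPromptMessages([
  systemMessagePrompt, // "You are a helpful assistant that translates..."
  exampleHuman,        // a worked example question...
  exampleAi,           // ...and the answer the model should imitate
  humanMessagePrompt,  // the real user input
]);

const fewShotMessages = await fewShotChatPrompt.formatMessages({
  input_language: "English",
  output_language: "French",
  example_input: "Good morning!",
  example_output: "Bonjour !",
  text: "I love programming.",
});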
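The examples above format prompts but never hand them to a model. As a hedged sketch of that final step - assuming the OpenAI and ChatOpenAI classes from langchain/llms/openai and langchain/chat_models/openai, and an OPENAI_API_KEY in the environment - it could look like this, reusing values formatted earlier:

import { OpenAI } from "langchain/llms/openai";
import { ChatOpenAI } from "langchain/chat_models/openai";

// A plain LLM consumes the formatted string...
const llm = new OpenAI({ temperature: 0.9 });
const llmResult = await llm.call(formattedPromptTemplate);

// ...while a chat model consumes the formatted message list.
const chatModel = new ChatOpenAI({ temperature: 0 });
const chatResult = await chatModel.call(formattedChatPrompt);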
"// An example prompt with one input variableconst oneInputPrompt = new PromptTemplate({ inputVariables: ["adjective"], template: "Tell me a {adjective} joke. "})const formattedOneInputPrompt = await oneInputPrompt.format({ adjective: "funny",});console.log(formattedOneInputPrompt);// "Tell me a funny joke. "// An example prompt with multiple input variablesconst multipleInputPrompt = new PromptTemplate({ inputVariables: ["adjective", "content"], template: "Tell me a {adjective} joke about {content}. ",});const formattedMultipleInputPrompt = await multipleInputPrompt.format({ adjective: "funny", content: "chickens",});console.log(formattedMultipleInputPrompt);// "Tell me a funny joke about chickens. "If you do not wish to specify inputVariables manually, you can also create a PromptTemplate using the fromTemplate class method. LangChain will automatically infer the inputVariables based on the template passed.import { PromptTemplate } from "langchain/prompts";const template = "Tell me a {adjective} joke about {content}. ";const promptTemplate = PromptTemplate.fromTemplate(template);console.log(promptTemplate.inputVariables);// ['adjective', 'content']const formattedPromptTemplate = await promptTemplate.format({ adjective: "funny", content: "chickens",});console.log(formattedPromptTemplate);// "Tell me a funny joke about chickens. "Note: If you're using TypeScript, keep in mind if you use .fromTemplate in this way, the compiler will not be able to automatically infer what inputs
4e9727215e95-207
are required. To get around this, you can manually specify a type parameter like this:const template = "Tell me a {adjective} joke about {content}. ";const promptTemplate = PromptTemplate.fromTemplate<{ adjective: string, content: string }>(template);You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates.Chat prompt template​Chat Models take a list of chat messages as input - this list commonly referred to as a prompt. These chat messages differ from raw string (which you would pass into a LLM model) in that every message is associated with a role.For example, in OpenAI Chat Completion API, a chat message can be associated with an AI, human or system role. The model is supposed to follow instruction from system chat message more closely.LangChain provides several prompt templates to make constructing and working with prompts easily. You are encouraged to use these chat related prompt templates instead of PromptTemplate when querying chat models to fully explore the potential of underlying chat model.import { ChatPromptTemplate, PromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,} from "langchain/prompts";import { AIMessage, HumanMessage, SystemMessage,} from "langchain/schema";To create a message template associated with a role, you use the corresponding <ROLE>MessagePromptTemplate.For convenience, there is a fromTemplate method exposed on these classes. If you were to use this template, this is what it would look like:const template = "You are a helpful assistant that translates {input_language} to {output_language}.
4e9727215e95-208
";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate externally and then pass it in, e.g. :const prompt = new PromptTemplate({ template: "You are a helpful assistant that translates {input_language} to {output_language}. ", inputVariables: ["input_language", "output_language"],});const systemMessagePrompt2 = new SystemMessagePromptTemplate({ prompt,});After that, you can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's formatPrompt method -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.const chatPrompt = ChatPromptTemplate.fromPromptMessages([ systemMessagePrompt, humanMessagePrompt]);// Format the messagesconst formattedChatPrompt = await chatPrompt.formatMessages({ input_language: "English", output_language: "French", text: "I love programming. ",});console.log(formattedChatPrompt);/* [ SystemMessage { content: 'You are a helpful assistant that translates English to French.' }, HumanMessage { content: 'I love programming.' } ]*/Note: Similarly to the PromptTemplate example, if using TypeScript, you can add typing to prompts created with .fromPromptMessages by passing a type parameter like this:const chatPrompt = ChatPromptTemplate.fromPromptMessages<{ input_language: string, output_language: string, text: string}>([ systemMessagePrompt, humanMessagePrompt]);PreviousPromptsNextPartial prompt templatesWhat is a prompt template?
4e9727215e95-209
ModulesModel I/​OPromptsPrompt templatesOn this pagePrompt templatesLanguage models take text as input - that text is commonly referred to as a prompt. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input. LangChain provides several classes and functions to make constructing and working with prompts easy.What is a prompt template?​A prompt template refers to a reproducible way to generate a prompt. It contains a text string ("the template"), that can take in a set of parameters from the end user and generates a prompt.A prompt template can contain:instructions to the language model,a set of few shot examples to help the language model generate a better response,a question to the language model.Here's a simple example:import { PromptTemplate } from "langchain/prompts";const prompt = PromptTemplate.fromTemplate<{product: string}>( `You are a naming consultant for new companies.What is a good name for a company that makes {product}?`);const formattedPrompt = await prompt.format({ product: "colorful socks",});/* You are a naming consultant for new companies. What is a good name for a company that makes colorful socks? */Create a prompt template​You can create simple hardcoded prompts using the PromptTemplate class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt.import { PromptTemplate } from "langchain/prompts";// An example prompt with no input variablesconst noInputPrompt = new PromptTemplate({ inputVariables: [], template: "Tell me a joke. ",});const formattedNoInputPrompt = await noInputPrompt.format();console.log(formattedNoInputPrompt);// "Tell me a joke.
4e9727215e95-210
"// An example prompt with one input variableconst oneInputPrompt = new PromptTemplate({ inputVariables: ["adjective"], template: "Tell me a {adjective} joke. "})const formattedOneInputPrompt = await oneInputPrompt.format({ adjective: "funny",});console.log(formattedOneInputPrompt);// "Tell me a funny joke. "// An example prompt with multiple input variablesconst multipleInputPrompt = new PromptTemplate({ inputVariables: ["adjective", "content"], template: "Tell me a {adjective} joke about {content}. ",});const formattedMultipleInputPrompt = await multipleInputPrompt.format({ adjective: "funny", content: "chickens",});console.log(formattedMultipleInputPrompt);// "Tell me a funny joke about chickens. "If you do not wish to specify inputVariables manually, you can also create a PromptTemplate using the fromTemplate class method. LangChain will automatically infer the inputVariables based on the template passed.import { PromptTemplate } from "langchain/prompts";const template = "Tell me a {adjective} joke about {content}. ";const promptTemplate = PromptTemplate.fromTemplate(template);console.log(promptTemplate.inputVariables);// ['adjective', 'content']const formattedPromptTemplate = await promptTemplate.format({ adjective: "funny", content: "chickens",});console.log(formattedPromptTemplate);// "Tell me a funny joke about chickens. "Note: If you're using TypeScript, keep in mind if you use .fromTemplate in this way, the compiler will not be able to automatically infer what inputs
4e9727215e95-211
are required. To get around this, you can manually specify a type parameter like this:const template = "Tell me a {adjective} joke about {content}. ";const promptTemplate = PromptTemplate.fromTemplate<{ adjective: string, content: string }>(template);You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates.Chat prompt template​Chat Models take a list of chat messages as input - this list commonly referred to as a prompt. These chat messages differ from raw string (which you would pass into a LLM model) in that every message is associated with a role.For example, in OpenAI Chat Completion API, a chat message can be associated with an AI, human or system role. The model is supposed to follow instruction from system chat message more closely.LangChain provides several prompt templates to make constructing and working with prompts easily. You are encouraged to use these chat related prompt templates instead of PromptTemplate when querying chat models to fully explore the potential of underlying chat model.import { ChatPromptTemplate, PromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,} from "langchain/prompts";import { AIMessage, HumanMessage, SystemMessage,} from "langchain/schema";To create a message template associated with a role, you use the corresponding <ROLE>MessagePromptTemplate.For convenience, there is a fromTemplate method exposed on these classes. If you were to use this template, this is what it would look like:const template = "You are a helpful assistant that translates {input_language} to {output_language}.
4e9727215e95-212
";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate externally and then pass it in, e.g. :const prompt = new PromptTemplate({ template: "You are a helpful assistant that translates {input_language} to {output_language}. ", inputVariables: ["input_language", "output_language"],});const systemMessagePrompt2 = new SystemMessagePromptTemplate({ prompt,});After that, you can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's formatPrompt method -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.const chatPrompt = ChatPromptTemplate.fromPromptMessages([ systemMessagePrompt, humanMessagePrompt]);// Format the messagesconst formattedChatPrompt = await chatPrompt.formatMessages({ input_language: "English", output_language: "French", text: "I love programming. ",});console.log(formattedChatPrompt);/* [ SystemMessage { content: 'You are a helpful assistant that translates English to French.' }, HumanMessage { content: 'I love programming.' } ]*/Note: Similarly to the PromptTemplate example, if using TypeScript, you can add typing to prompts created with .fromPromptMessages by passing a type parameter like this:const chatPrompt = ChatPromptTemplate.fromPromptMessages<{ input_language: string, output_language: string, text: string}>([ systemMessagePrompt, humanMessagePrompt]);PreviousPromptsNextPartial prompt templates Prompt templatesLanguage models take text as input - that text is commonly referred to as a prompt.
4e9727215e95-213
Prompt templatesLanguage models take text as input - that text is commonly referred to as a prompt. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input. LangChain provides several classes and functions to make constructing and working with prompts easy.What is a prompt template?​A prompt template refers to a reproducible way to generate a prompt. It contains a text string ("the template"), that can take in a set of parameters from the end user and generates a prompt.A prompt template can contain:instructions to the language model,a set of few shot examples to help the language model generate a better response,a question to the language model.Here's a simple example:import { PromptTemplate } from "langchain/prompts";const prompt = PromptTemplate.fromTemplate<{product: string}>( `You are a naming consultant for new companies.What is a good name for a company that makes {product}?`);const formattedPrompt = await prompt.format({ product: "colorful socks",});/* You are a naming consultant for new companies. What is a good name for a company that makes colorful socks? */Create a prompt template​You can create simple hardcoded prompts using the PromptTemplate class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt.import { PromptTemplate } from "langchain/prompts";// An example prompt with no input variablesconst noInputPrompt = new PromptTemplate({ inputVariables: [], template: "Tell me a joke. ",});const formattedNoInputPrompt = await noInputPrompt.format();console.log(formattedNoInputPrompt);// "Tell me a joke.
4e9727215e95-214
"// An example prompt with one input variableconst oneInputPrompt = new PromptTemplate({ inputVariables: ["adjective"], template: "Tell me a {adjective} joke. "})const formattedOneInputPrompt = await oneInputPrompt.format({ adjective: "funny",});console.log(formattedOneInputPrompt);// "Tell me a funny joke. "// An example prompt with multiple input variablesconst multipleInputPrompt = new PromptTemplate({ inputVariables: ["adjective", "content"], template: "Tell me a {adjective} joke about {content}. ",});const formattedMultipleInputPrompt = await multipleInputPrompt.format({ adjective: "funny", content: "chickens",});console.log(formattedMultipleInputPrompt);// "Tell me a funny joke about chickens. "If you do not wish to specify inputVariables manually, you can also create a PromptTemplate using the fromTemplate class method. LangChain will automatically infer the inputVariables based on the template passed.import { PromptTemplate } from "langchain/prompts";const template = "Tell me a {adjective} joke about {content}. ";const promptTemplate = PromptTemplate.fromTemplate(template);console.log(promptTemplate.inputVariables);// ['adjective', 'content']const formattedPromptTemplate = await promptTemplate.format({ adjective: "funny", content: "chickens",});console.log(formattedPromptTemplate);// "Tell me a funny joke about chickens. "Note: If you're using TypeScript, keep in mind if you use .fromTemplate in this way, the compiler will not be able to automatically infer what inputs
4e9727215e95-215
are required. To get around this, you can manually specify a type parameter like this:const template = "Tell me a {adjective} joke about {content}. ";const promptTemplate = PromptTemplate.fromTemplate<{ adjective: string, content: string }>(template);You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates.Chat prompt template​Chat Models take a list of chat messages as input - this list commonly referred to as a prompt. These chat messages differ from raw string (which you would pass into a LLM model) in that every message is associated with a role.For example, in OpenAI Chat Completion API, a chat message can be associated with an AI, human or system role. The model is supposed to follow instruction from system chat message more closely.LangChain provides several prompt templates to make constructing and working with prompts easily. You are encouraged to use these chat related prompt templates instead of PromptTemplate when querying chat models to fully explore the potential of underlying chat model.import { ChatPromptTemplate, PromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,} from "langchain/prompts";import { AIMessage, HumanMessage, SystemMessage,} from "langchain/schema";To create a message template associated with a role, you use the corresponding <ROLE>MessagePromptTemplate.For convenience, there is a fromTemplate method exposed on these classes. If you were to use this template, this is what it would look like:const template = "You are a helpful assistant that translates {input_language} to {output_language}.
4e9727215e95-216
";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate externally and then pass it in, e.g. :const prompt = new PromptTemplate({ template: "You are a helpful assistant that translates {input_language} to {output_language}. ", inputVariables: ["input_language", "output_language"],});const systemMessagePrompt2 = new SystemMessagePromptTemplate({ prompt,});After that, you can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's formatPrompt method -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.const chatPrompt = ChatPromptTemplate.fromPromptMessages([ systemMessagePrompt, humanMessagePrompt]);// Format the messagesconst formattedChatPrompt = await chatPrompt.formatMessages({ input_language: "English", output_language: "French", text: "I love programming. ",});console.log(formattedChatPrompt);/* [ SystemMessage { content: 'You are a helpful assistant that translates English to French.' }, HumanMessage { content: 'I love programming.' } ]*/Note: Similarly to the PromptTemplate example, if using TypeScript, you can add typing to prompts created with .fromPromptMessages by passing a type parameter like this:const chatPrompt = ChatPromptTemplate.fromPromptMessages<{ input_language: string, output_language: string, text: string}>([ systemMessagePrompt, humanMessagePrompt]); Language models take text as input - that text is commonly referred to as a prompt.
4e9727215e95-217
Language models take text as input - that text is commonly referred to as a prompt. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input. LangChain provides several classes and functions to make constructing and working with prompts easy. A prompt template refers to a reproducible way to generate a prompt. It contains a text string ("the template"), that can take in a set of parameters from the end user and generates a prompt. A prompt template can contain: Here's a simple example: import { PromptTemplate } from "langchain/prompts";const prompt = PromptTemplate.fromTemplate<{product: string}>( `You are a naming consultant for new companies.What is a good name for a company that makes {product}?`);const formattedPrompt = await prompt.format({ product: "colorful socks",});/* You are a naming consultant for new companies. What is a good name for a company that makes colorful socks? */ You can create simple hardcoded prompts using the PromptTemplate class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt.
4e9727215e95-218
import { PromptTemplate } from "langchain/prompts";// An example prompt with no input variablesconst noInputPrompt = new PromptTemplate({ inputVariables: [], template: "Tell me a joke. ",});const formattedNoInputPrompt = await noInputPrompt.format();console.log(formattedNoInputPrompt);// "Tell me a joke. "// An example prompt with one input variableconst oneInputPrompt = new PromptTemplate({ inputVariables: ["adjective"], template: "Tell me a {adjective} joke. "})const formattedOneInputPrompt = await oneInputPrompt.format({ adjective: "funny",});console.log(formattedOneInputPrompt);// "Tell me a funny joke. "// An example prompt with multiple input variablesconst multipleInputPrompt = new PromptTemplate({ inputVariables: ["adjective", "content"], template: "Tell me a {adjective} joke about {content}. ",});const formattedMultipleInputPrompt = await multipleInputPrompt.format({ adjective: "funny", content: "chickens",});console.log(formattedMultipleInputPrompt);// "Tell me a funny joke about chickens." If you do not wish to specify inputVariables manually, you can also create a PromptTemplate using the fromTemplate class method. LangChain will automatically infer the inputVariables based on the template passed. import { PromptTemplate } from "langchain/prompts";const template = "Tell me a {adjective} joke about {content}. ";const promptTemplate = PromptTemplate.fromTemplate(template);console.log(promptTemplate.inputVariables);// ['adjective', 'content']const formattedPromptTemplate = await promptTemplate.format({ adjective: "funny", content: "chickens",});console.log(formattedPromptTemplate);// "Tell me a funny joke about chickens."
4e9727215e95-219
Note: If you're using TypeScript, keep in mind if you use .fromTemplate in this way, the compiler will not be able to automatically infer what inputs are required. To get around this, you can manually specify a type parameter like this: const template = "Tell me a {adjective} joke about {content}. ";const promptTemplate = PromptTemplate.fromTemplate<{ adjective: string, content: string }>(template); You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates. Chat Models take a list of chat messages as input - this list commonly referred to as a prompt. These chat messages differ from raw string (which you would pass into a LLM model) in that every message is associated with a role. For example, in OpenAI Chat Completion API, a chat message can be associated with an AI, human or system role. The model is supposed to follow instruction from system chat message more closely. LangChain provides several prompt templates to make constructing and working with prompts easily. You are encouraged to use these chat related prompt templates instead of PromptTemplate when querying chat models to fully explore the potential of underlying chat model. import { ChatPromptTemplate, PromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,} from "langchain/prompts";import { AIMessage, HumanMessage, SystemMessage,} from "langchain/schema"; To create a message template associated with a role, you use the corresponding <ROLE>MessagePromptTemplate. For convenience, there is a fromTemplate method exposed on these classes. If you were to use this template, this is what it would look like:
4e9727215e95-220
const template = "You are a helpful assistant that translates {input_language} to {output_language}. ";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate); If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate externally and then pass it in, e.g. : const prompt = new PromptTemplate({ template: "You are a helpful assistant that translates {input_language} to {output_language}. ", inputVariables: ["input_language", "output_language"],});const systemMessagePrompt2 = new SystemMessagePromptTemplate({ prompt,}); After that, you can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's formatPrompt method -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model. const chatPrompt = ChatPromptTemplate.fromPromptMessages([ systemMessagePrompt, humanMessagePrompt]);// Format the messagesconst formattedChatPrompt = await chatPrompt.formatMessages({ input_language: "English", output_language: "French", text: "I love programming. ",});console.log(formattedChatPrompt);/* [ SystemMessage { content: 'You are a helpful assistant that translates English to French.' }, HumanMessage { content: 'I love programming.' } ]*/ Note: Similarly to the PromptTemplate example, if using TypeScript, you can add typing to prompts created with .fromPromptMessages by passing a type parameter like this: const chatPrompt = ChatPromptTemplate.fromPromptMessages<{ input_language: string, output_language: string, text: string}>([ systemMessagePrompt, humanMessagePrompt]); Partial prompt templates
4e9727215e95-221
Partial prompt templates

Like other methods, it can make sense to "partial" a prompt template - e.g. pass in a subset of the required values, so as to create a new prompt template that expects only the remaining subset of values. LangChain supports this in two ways:

- Partial formatting with string values.
- Partial formatting with functions that return string values.

These two ways support different use cases. In the examples below, we go over the motivations for both use cases as well as how to do it in LangChain.

Partial With Strings

One common use case for wanting to partial a prompt template is if you get some of the variables before others. For example, suppose you have a prompt template that requires two variables, foo and bar. If you get the foo value early on in the chain, but the bar value later, it can be annoying to wait until you have both variables in the same place to pass them to the prompt template. Instead, you can partial the prompt template with the foo value, and then pass the partialed prompt template along and just use that.
Below is an example of doing this:

import { PromptTemplate } from "langchain/prompts";

const prompt = new PromptTemplate({
  template: "{foo}{bar}",
  inputVariables: ["foo", "bar"],
});
const partialPrompt = await prompt.partial({
  foo: "foo",
});
const formattedPrompt = await partialPrompt.format({
  bar: "baz",
});
console.log(formattedPrompt);
// foobaz

You can also just initialize the prompt with the partialed variables.

const prompt = new PromptTemplate({
  template: "{foo}{bar}",
  inputVariables: ["bar"],
  partialVariables: {
    foo: "foo",
  },
});
const formattedPrompt = await prompt.format({
  bar: "baz",
});
console.log(formattedPrompt);
// foobaz
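To illustrate the "get some variables before others" motivation, here is a small sketch of a partialed template being handed to a later stage that only knows the remaining variable. The answerWithContext helper is made up for this example:

// Hypothetical later pipeline stage: it receives a template that only
// needs {bar}, without ever seeing where {foo} came from.
async function answerWithContext(template: PromptTemplate, bar: string): Promise<string> {
  return template.format({ bar });
}

const basePrompt = new PromptTemplate({
  template: "{foo}{bar}",
  inputVariables: ["foo", "bar"],
});

// Stage 1 knows foo, partials it in, and passes the template along.
const partialed = await basePrompt.partial({ foo: "foo" });

// Stage 2 only needs to supply bar.
console.log(await answerWithContext(partialed, "baz"));
// foobaz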
Partial With Functions

You can also partial with a function. The use case for this is when you have a variable you know you always want to fetch in a common way. A prime example is date or time. Imagine you have a prompt which you always want to include the current date. You can't hardcode it in the prompt, and passing it along with the other input variables can be tedious. In this case, it's very handy to be able to partial the prompt with a function that always returns the current date.

const getCurrentDate = () => {
  return new Date().toISOString();
};

const prompt = new PromptTemplate({
  template: "Tell me a {adjective} joke about the day {date}",
  inputVariables: ["adjective", "date"],
});

const partialPrompt = await prompt.partial({
  date: getCurrentDate,
});
const formattedPrompt = await partialPrompt.format({
  adjective: "funny",
});
console.log(formattedPrompt);
// Tell me a funny joke about the day 2023-07-13T00:54:59.287Z

You can also just initialize the prompt with the partialed variables:

const prompt = new PromptTemplate({
  template: "Tell me a {adjective} joke about the day {date}",
  inputVariables: ["adjective"],
  partialVariables: {
    date: getCurrentDate,
  },
});
const formattedPrompt = await prompt.format({
  adjective: "funny",
});
console.log(formattedPrompt);
// Tell me a funny joke about the day 2023-07-13T00:54:59.287Z
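One property worth calling out, assuming the partial function is re-invoked on each format call (which is the point of the current-date example): the value stays fresh across calls. A small sketch reusing the prompt above, with illustrative timestamps:

const first = await prompt.format({ adjective: "funny" });
await new Promise((resolve) => setTimeout(resolve, 10));
const second = await prompt.format({ adjective: "funny" });

// getCurrentDate should run once per format call, so the dates differ slightly.
console.log(first === second);
// false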
Composition

This notebook goes over how to compose multiple prompts together. This can be useful when you want to reuse parts of prompts. This can be done with a PipelinePrompt. A PipelinePrompt consists of two main parts:

- Final prompt: this is the final prompt that is returned.
- Pipeline prompts: this is a list of objects, each consisting of a string name and a prompt template.
Each prompt template will be formatted and then passed to future prompt templates as a variable with the same name.

import { PromptTemplate, PipelinePromptTemplate } from "langchain/prompts";

const fullPrompt = PromptTemplate.fromTemplate(`{introduction}
{example}
{start}`);
const introductionPrompt = PromptTemplate.fromTemplate(
  `You are impersonating {person}.`
);
const examplePrompt = PromptTemplate.fromTemplate(`Here's an example of an interaction:
Q: {example_q}
A: {example_a}`);
const startPrompt = PromptTemplate.fromTemplate(`Now, do this for real!
Q: {input}
A:`);

const composedPrompt = new PipelinePromptTemplate({
  pipelinePrompts: [
    {
      name: "introduction",
      prompt: introductionPrompt,
    },
    {
      name: "example",
      prompt: examplePrompt,
    },
    {
      name: "start",
      prompt: startPrompt,
    },
  ],
  finalPrompt: fullPrompt,
});

const formattedPrompt = await composedPrompt.format({
  person: "Elon Musk",
  example_q: `What's your favorite car?`,
  example_a: "Tesla",
  input: `What's your favorite social media site?`,
});

console.log(formattedPrompt);
/*
You are impersonating Elon Musk.
Here's an example of an interaction:
Q: What's your favorite car?
A: Tesla
Now, do this for real!
Q: What's your favorite social media site?
A:
*/

API Reference: PromptTemplate and PipelinePromptTemplate from langchain/prompts.
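To show what PipelinePromptTemplate is doing here, below is a rough manual equivalent - a sketch, not the library's actual code - under the stated assumption that each pipeline prompt is formatted first and its output is passed into the final prompt under its declared name:

// Format each sub-prompt, then feed the results into the final prompt
// under the names the pipeline entries declare.
const introduction = await introductionPrompt.format({ person: "Elon Musk" });
const example = await examplePrompt.format({
  example_q: `What's your favorite car?`,
  example_a: "Tesla",
});
const start = await startPrompt.format({
  input: `What's your favorite social media site?`,
});

const manuallyComposed = await fullPrompt.format({ introduction, example, start });
// Under that assumption, this matches the PipelinePromptTemplate output above.
console.log(manuallyComposed === formattedPrompt);
// true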
Page Title: Example selectors | 🦜️🔗 Langchain

Example selectors

If you have a large number of examples, you may need to programmatically select which ones to include in the prompt. The ExampleSelector is the class responsible for doing so. The base interface is defined as below:

class BaseExampleSelector {
  addExample(example: Example): Promise<void | string>;

  selectExamples(input_variables: Example): Promise<Example[]>;
}

It needs to expose a selectExamples method, which takes in the input variables and returns a list of examples, and an addExample method, which saves an example for later selection. It is up to each specific implementation how those examples are saved and selected.
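As a minimal sketch of implementing this interface (the class below is hypothetical, not part of the library, and Example is assumed to be a plain record of strings), a selector that ignores the input and always returns the first k stored examples could look like this:

type Example = Record<string, string>;

// A hypothetical selector: stores examples in memory and always
// returns the first `k` of them, regardless of the input variables.
class FirstKExampleSelector {
  private examples: Example[] = [];

  constructor(private k: number) {}

  async addExample(example: Example): Promise<void> {
    this.examples.push(example);
  }

  async selectExamples(_inputVariables: Example): Promise<Example[]> {
    return this.examples.slice(0, this.k);
  }
}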
Page Title: Select by length | 🦜️🔗 Langchain

Select by length

This example selector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.

import {
  LengthBasedExampleSelector,
  PromptTemplate,
  FewShotPromptTemplate,
} from "langchain/prompts";

export async function run() {
  // Create a prompt template that will be used to format the examples.
  const examplePrompt = new PromptTemplate({
    inputVariables: ["input", "output"],
    template: "Input: {input}\nOutput: {output}",
  });

  // Create a LengthBasedExampleSelector that will be used to select the examples.
  const exampleSelector = await LengthBasedExampleSelector.fromExamples(
    [
      { input: "happy", output: "sad" },
      { input: "tall", output: "short" },
      { input: "energetic", output: "lethargic" },
      { input: "sunny", output: "gloomy" },
      { input: "windy", output: "calm" },
    ],
    {
      examplePrompt,
      maxLength: 25,
    }
  );

  // Create a FewShotPromptTemplate that will use the example selector.
  const dynamicPrompt = new FewShotPromptTemplate({
    // We provide an ExampleSelector instead of examples.
    exampleSelector,
    examplePrompt,
    prefix: "Give the antonym of every input",
    suffix: "Input: {adjective}\nOutput:",
    inputVariables: ["adjective"],
  });

  // An example with small input, so it selects all examples.
  console.log(await dynamicPrompt.format({ adjective: "big" }));
  /*
   Give the antonym of every input

   Input: happy
   Output: sad

   Input: tall
   Output: short

   Input: energetic
   Output: lethargic

   Input: sunny
   Output: gloomy

   Input: windy
   Output: calm

   Input: big
   Output:
  */

  // An example with long input, so it selects only one example.
  const longString =
    "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else";
  console.log(await dynamicPrompt.format({ adjective: longString }));
  /*
   Give the antonym of every input

   Input: happy
   Output: sad

   Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
   Output:
  */
}

API Reference: LengthBasedExampleSelector from langchain/prompts, PromptTemplate from langchain/prompts, FewShotPromptTemplate from langchain/prompts
Page Title: Select by similarity | 🦜️🔗 Langchain

Select by similarity

This object selects examples based on similarity to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.

import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import {
  SemanticSimilarityExampleSelector,
  PromptTemplate,
  FewShotPromptTemplate,
} from "langchain/prompts";
import { HNSWLib } from "langchain/vectorstores/hnswlib";

export async function run() {
  // Create a prompt template that will be used to format the examples.
  const examplePrompt = new PromptTemplate({
    inputVariables: ["input", "output"],
    template: "Input: {input}\nOutput: {output}",
  });

  // Create a SemanticSimilarityExampleSelector that will be used to select the examples.
  const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples(
    [
      { input: "happy", output: "sad" },
      { input: "tall", output: "short" },
      { input: "energetic", output: "lethargic" },
      { input: "sunny", output: "gloomy" },
      { input: "windy", output: "calm" },
    ],
    new OpenAIEmbeddings(),
    HNSWLib,
    { k: 1 }
  );

  // Create a FewShotPromptTemplate that will use the example selector.
  const dynamicPrompt = new FewShotPromptTemplate({
    // We provide an ExampleSelector instead of examples.
    exampleSelector,
    examplePrompt,
    prefix: "Give the antonym of every input",
    suffix: "Input: {adjective}\nOutput:",
    inputVariables: ["adjective"],
  });

  // The input is about the weather, so it should select e.g. the sunny/gloomy example.
  console.log(await dynamicPrompt.format({ adjective: "rainy" }));
  /*
   Give the antonym of every input

   Input: sunny
   Output: gloomy

   Input: rainy
   Output:
  */

  // The input is a measurement, so it should select the tall/short example.
  console.log(await dynamicPrompt.format({ adjective: "large" }));
  /*
   Give the antonym of every input

   Input: tall
   Output: short

   Input: large
   Output:
  */
}

API Reference: OpenAIEmbeddings from langchain/embeddings/openai, SemanticSimilarityExampleSelector from langchain/prompts, PromptTemplate from langchain/prompts, FewShotPromptTemplate from langchain/prompts, HNSWLib from langchain/vectorstores/hnswlib
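Note that, unlike the length-based selector, this one embeds the example values when the selector is created and queries a vector store each time examples are selected. With the imports used above, that means (as an assumption about your environment, consistent with the other OpenAI examples in these docs) having an OpenAI API key set and the HNSWLib peer dependency installed:

npm install -S hnswlib-node

export OPENAI_API_KEY="..."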
Page Title: Prompt selectors | 🦜️🔗 Langchain

Prompt selectors

Prompt selectors are useful when you want to programmatically select a prompt based on the type of model you are using in a chain. This is especially relevant when swapping chat models and LLMs.

The interface for prompt selectors is quite simple:

abstract class BasePromptSelector {
  abstract getPrompt(llm: BaseLanguageModel): BasePromptTemplate;
}

The getPrompt method takes in a language model and returns an appropriate prompt template.

We currently offer a ConditionalPromptSelector that allows you to specify a set of conditions and prompt templates. The first condition that evaluates to true will be used to select the prompt template.

const QA_PROMPT_SELECTOR = new ConditionalPromptSelector(DEFAULT_QA_PROMPT, [
  [isChatModel, CHAT_PROMPT],
]);

This will return DEFAULT_QA_PROMPT if the model is not a chat model, and CHAT_PROMPT if it is.

The example below shows how to use a prompt selector when loading a chain:

const loadQAStuffChain = (
  llm: BaseLanguageModel,
  params: StuffQAChainParams = {}
) => {
  const { prompt = QA_PROMPT_SELECTOR.getPrompt(llm) } = params;
  const llmChain = new LLMChain({ prompt, llm });
  const chain = new StuffDocumentsChain({ llmChain });
  return chain;
};
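To make the snippet above self-contained, here is a sketch with concrete prompts. The import path for the selector utilities and the prompt contents are assumptions for illustration, not taken from this page:

import {
  PromptTemplate,
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
} from "langchain/prompts";
// Assumed import path for the selector utilities:
import {
  ConditionalPromptSelector,
  isChatModel,
} from "langchain/chains/prompt_selector";
import { OpenAI } from "langchain/llms/openai";
import { ChatOpenAI } from "langchain/chat_models/openai";

// A plain-text prompt for completion-style LLMs.
const DEFAULT_QA_PROMPT = PromptTemplate.fromTemplate(
  `Use the following context to answer the question.

Context: {context}

Question: {question}
Answer:`
);

// A chat-style prompt for chat models.
const CHAT_PROMPT = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate(
    `Use the following context to answer the user's question.

Context: {context}`
  ),
  HumanMessagePromptTemplate.fromTemplate("{question}"),
]);

const QA_PROMPT_SELECTOR = new ConditionalPromptSelector(DEFAULT_QA_PROMPT, [
  [isChatModel, CHAT_PROMPT],
]);

// Picks the plain prompt for an LLM...
const promptForLlm = QA_PROMPT_SELECTOR.getPrompt(new OpenAI({}));
// ...and the chat prompt for a chat model.
const promptForChat = QA_PROMPT_SELECTOR.getPrompt(new ChatOpenAI({}));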
Page Title: Language models | 🦜️🔗 Langchain

Language models

LangChain provides interfaces and integrations for two types of models:

- LLMs: Models that take a text string as input and return a text string.
- Chat models: Models that are backed by a language model but take a list of Chat Messages as input and return a Chat Message.

LLMs vs Chat Models

LLMs and Chat Models are subtly but importantly different. LLMs in LangChain refer to pure text completion models. The APIs they wrap take a string prompt as input and output a string completion. OpenAI's GPT-3 is implemented as an LLM. Chat models are often backed by LLMs but tuned specifically for having conversations. And, crucially, their provider APIs expose a different interface than pure text completion models. Instead of a single string, they take a list of chat messages as input, usually labeled with the speaker (typically one of "System", "AI", and "Human"), and they return an "AI" chat message as output. GPT-4 and Anthropic's Claude are both implemented as Chat Models.

To make it possible to swap LLMs and Chat Models, both implement the Base Language Model interface. This exposes the common methods "predict", which takes a string and returns a string, and "predict messages", which takes messages and returns a message. If you are using a specific model it's recommended you use the methods specific to that model class (i.e., "predict" for LLMs and "predict messages" for Chat Models), but if you're creating an application that should work with different types of models, the shared interface can be helpful.
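As a minimal sketch of that shared interface (assuming an OPENAI_API_KEY is set and the code runs in an async context):

import { OpenAI } from "langchain/llms/openai";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";

const llm = new OpenAI({});
const chatModel = new ChatOpenAI({});

// "predict": string in, string out - available on both model types.
const textFromLlm = await llm.predict("Say hello!");
const textFromChat = await chatModel.predict("Say hello!");

// "predict messages": messages in, message out - also available on both.
const msgFromLlm = await llm.predictMessages([new HumanMessage("Say hello!")]);
const msgFromChat = await chatModel.predictMessages([new HumanMessage("Say hello!")]);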
Page Title: LLMs | 🦜️🔗 Langchain

LLMs

Large Language Models (LLMs) are a core component of LangChain. LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs.

For more detailed documentation check out our:

- How-to guides: Walkthroughs of core functionality, like streaming, async, etc.
- Integrations: How to use different LLM providers (OpenAI, Anthropic, etc.)

Get started

There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc.) - the LLM class is designed to provide a standard interface for all of them.

In this walkthrough we'll work with an OpenAI LLM wrapper, although the functionalities highlighted are generic for all LLM types.

Setup

To start we'll need to install the official OpenAI package:

npm install -S openai
# or: yarn add openai
# or: pnpm add openai

Accessing the API requires an API key, which you can get by creating an account and heading here. Once we have a key we'll want to set it as an environment variable by running:

export OPENAI_API_KEY="..."

If you'd prefer not to set an environment variable, you can pass the key in directly via the openAIApiKey parameter when initializing the OpenAI LLM class:

import { OpenAI } from "langchain/llms/openai";

const llm = new OpenAI({
  openAIApiKey: "YOUR_KEY_HERE",
});

Otherwise you can initialize with an empty object:

import { OpenAI } from "langchain/llms/openai";

const llm = new OpenAI({});

call: string in -> string out

The simplest way to use an LLM is the .call method: pass in a string, get a string completion.

const res = await llm.call("Tell me a joke");

console.log(res);
// "Why did the chicken cross the road?\n\nTo get to the other side."

generate: batch calls, richer outputs

generate lets you call the model with a list of strings, getting back a more complete response than just the text. This complete response can include things like multiple top responses and other LLM provider-specific information:

const llmResult = await llm.generate(["Tell me a joke", "Tell me a poem"]);

console.log(llmResult.generations.length);
// 2

console.log(llmResult.generations[0]);
/*
  [
    {
      text: "\n\nQ: What did the fish say when it hit the wall?\nA: Dam!",
      generationInfo: { finishReason: "stop", logprobs: null }
    }
  ]
*/

console.log(llmResult.generations[1]);
/*
  [
    {
      text: "\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.",
      generationInfo: { finishReason: "stop", logprobs: null }
    }
  ]
*/

You can also access provider-specific information that is returned. This information is NOT standardized across providers.

console.log(llmResult.llmOutput);
/*
  {
    tokenUsage: { completionTokens: 46, promptTokens: 8, totalTokens: 54 }
  }
*/

Here's an example with additional parameters, which sets -1 for max_tokens to turn on token size calculations:

import { OpenAI } from "langchain/llms/openai";

export const run = async () => {
  const model = new OpenAI({
    // customize the OpenAI model that's used; `text-davinci-003` is the default
    modelName: "text-ada-001",
    // `max_tokens` supports a magic -1 param where the max token length for the specified modelName
    // is calculated and included in the request to OpenAI as the `max_tokens` param
    maxTokens: -1,
    // use `modelKwargs` to pass params directly to the OpenAI call
    // note that they use snake_case instead of camelCase
    modelKwargs: {
      user: "me",
    },
    // for additional logging for debugging purposes
    verbose: true,
  });

  const resA = await model.call(
    "What would be a good company name for a company that makes colorful socks?"
  );

  console.log({ resA });
  // { resA: '\n\nSocktastic Colors' }
};

API Reference: OpenAI from langchain/llms/openai

Advanced

This section is for users who want a deeper technical understanding of how LangChain works. If you are just getting started, you can skip this section.

Both LLMs and Chat Models are built on top of the BaseLanguageModel class. This class provides a common interface for all models, and allows us to easily swap out models in chains without changing the rest of the code.

The BaseLanguageModel class has two abstract methods: generatePrompt and getNumTokens, which are implemented by BaseChatModel and BaseLLM respectively.

BaseLLM is a subclass of BaseLanguageModel that provides a common interface for LLMs, while BaseChatModel is a subclass of BaseLanguageModel that provides a common interface for chat models.
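To illustrate why the common base class is useful, here is a sketch of a helper written only against BaseLanguageModel, so either model type can be passed in without changing the helper. The summarize function is hypothetical, and the import path for BaseLanguageModel is an assumption:

import { BaseLanguageModel } from "langchain/base_language";
import { OpenAI } from "langchain/llms/openai";
import { ChatOpenAI } from "langchain/chat_models/openai";

// A hypothetical helper that relies only on the shared interface.
async function summarize(model: BaseLanguageModel, text: string): Promise<string> {
  return model.predict(`Summarize the following in one sentence:\n\n${text}`);
}

// Works with an LLM...
const viaLlm = await summarize(new OpenAI({}), "LangChain provides a standard interface for many LLMs.");
// ...and with a chat model, with no changes to `summarize`.
const viaChat = await summarize(new ChatOpenAI({}), "LangChain provides a standard interface for many LLMs.");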
4e9727215e95-289
", generationInfo: { finishReason: "stop", logprobs: null } } ]*/console.log(llmResult.generations[1]);/* [ { text: "\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you. ", generationInfo: { finishReason: "stop", logprobs: null } } ]*/You can also access provider specific information that is returned. This information is NOT standardized across providers.console.log(llmResult.llmOutput);/* { tokenUsage: { completionTokens: 46, promptTokens: 8, totalTokens: 54 } }*/Here's an example with additional parameters, which sets -1 for max_tokens to turn on token size calculations:import { OpenAI } from "langchain/llms/openai";export const run = async () => { const model = new OpenAI({ // customize openai model that's used, `text-davinci-003` is the default modelName: "text-ada-001", // `max_tokens` supports a magic -1 param where the max token length for the specified modelName // is calculated and included in the request to OpenAI as the `max_tokens` param maxTokens: -1, // use `modelKwargs` to pass params directly to the openai call // note that they use snake_case instead of camelCase modelKwargs: { user: "me", }, // for additional logging for debugging purposes verbose: true, }); const resA = await model.call( "What would be a good company name a company that makes colorful socks?"
4e9727215e95-290
); console.log({ resA }); // { resA: '\n\nSocktastic Colors' }};API Reference:OpenAI from langchain/llms/openaiAdvanced​This section is for users who want a deeper technical understanding of how LangChain works. If you are just getting started, you can skip this section.Both LLMs and Chat Models are built on top of the BaseLanguageModel class. This class provides a common interface for all models, and allows us to easily swap out models in chains without changing the rest of the code.The BaseLanguageModel class has two abstract methods: generatePrompt and getNumTokens, which are implemented by BaseChatModel and BaseLLM respectively.BaseLLM is a subclass of BaseLanguageModel that provides a common interface for LLMs while BaseChatModel is a subclass of BaseLanguageModel that provides a common interface for chat models.PreviousLanguage modelsNextCancelling requestsGet started ModulesModel I/​OLanguage modelsLLMsOn this pageLLMsLarge Language Models (LLMs) are a core component of LangChain.
4e9727215e95-291
LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs.For more detailed documentation check out our:How-to guides: Walkthroughs of core functionality, like streaming, async, etc.Integrations: How to use different LLM providers (OpenAI, Anthropic, etc. )Get started​There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc) - the LLM class is designed to provide a standard interface for all of them.In this walkthrough we'll work with an OpenAI LLM wrapper, although the functionalities highlighted are generic for all LLM types.Setup​To start we'll need to install the official OpenAI package:npmYarnpnpmnpm install -S openaiyarn add openaipnpm add openaiAccessing the API requires an API key, which you can get by creating an account and heading here.
4e9727215e95-292
Once we have a key we'll want to set it as an environment variable by running:export OPENAI_API_KEY="..."If you'd prefer not to set an environment variable you can pass the key in directly via the openAIApiKey parameter when initializing the OpenAI LLM class:import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({ openAIApiKey: "YOUR_KEY_HERE",});otherwise you can initialize with an empty object:import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({});call: string in -> string out​The simplest way to use an LLM is the .call method: pass in a string, get a string completion.const res = await llm.call("Tell me a joke");console.log(res);// "Why did the chicken cross the road?\n\nTo get to the other side. "generate: batch calls, richer outputs​generate lets you can call the model with a list of strings, getting back a more complete response than just the text. This complete response can include things like multiple top responses and other LLM provider-specific information:const llmResult = await llm.generate(["Tell me a joke", "Tell me a poem"], ["Tell me a joke", "Tell me a poem"]);console.log(llmResult.generations.length)// 30console.log(llmResult.generations[0]);/* [ { text: "\n\nQ: What did the fish say when it hit the wall?\nA: Dam!
4e9727215e95-293
", generationInfo: { finishReason: "stop", logprobs: null } } ]*/console.log(llmResult.generations[1]);/* [ { text: "\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you. ", generationInfo: { finishReason: "stop", logprobs: null } } ]*/You can also access provider specific information that is returned. This information is NOT standardized across providers.console.log(llmResult.llmOutput);/* { tokenUsage: { completionTokens: 46, promptTokens: 8, totalTokens: 54 } }*/Here's an example with additional parameters, which sets -1 for max_tokens to turn on token size calculations:import { OpenAI } from "langchain/llms/openai";export const run = async () => { const model = new OpenAI({ // customize openai model that's used, `text-davinci-003` is the default modelName: "text-ada-001", // `max_tokens` supports a magic -1 param where the max token length for the specified modelName // is calculated and included in the request to OpenAI as the `max_tokens` param maxTokens: -1, // use `modelKwargs` to pass params directly to the openai call // note that they use snake_case instead of camelCase modelKwargs: { user: "me", }, // for additional logging for debugging purposes verbose: true, }); const resA = await model.call( "What would be a good company name a company that makes colorful socks?"
4e9727215e95-294
); console.log({ resA }); // { resA: '\n\nSocktastic Colors' }};API Reference:OpenAI from langchain/llms/openaiAdvanced​This section is for users who want a deeper technical understanding of how LangChain works. If you are just getting started, you can skip this section.Both LLMs and Chat Models are built on top of the BaseLanguageModel class. This class provides a common interface for all models, and allows us to easily swap out models in chains without changing the rest of the code.The BaseLanguageModel class has two abstract methods: generatePrompt and getNumTokens, which are implemented by BaseChatModel and BaseLLM respectively.BaseLLM is a subclass of BaseLanguageModel that provides a common interface for LLMs while BaseChatModel is a subclass of BaseLanguageModel that provides a common interface for chat models.PreviousLanguage modelsNextCancelling requests LLMsLarge Language Models (LLMs) are a core component of LangChain.
4e9727215e95-295
LLMsLarge Language Models (LLMs) are a core component of LangChain. LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs.For more detailed documentation check out our:How-to guides: Walkthroughs of core functionality, like streaming, async, etc.Integrations: How to use different LLM providers (OpenAI, Anthropic, etc. )Get started​There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc) - the LLM class is designed to provide a standard interface for all of them.In this walkthrough we'll work with an OpenAI LLM wrapper, although the functionalities highlighted are generic for all LLM types.Setup​To start we'll need to install the official OpenAI package:npmYarnpnpmnpm install -S openaiyarn add openaipnpm add openaiAccessing the API requires an API key, which you can get by creating an account and heading here.
4e9727215e95-296
Once we have a key we'll want to set it as an environment variable by running:export OPENAI_API_KEY="..."If you'd prefer not to set an environment variable you can pass the key in directly via the openAIApiKey parameter when initializing the OpenAI LLM class:import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({ openAIApiKey: "YOUR_KEY_HERE",});otherwise you can initialize with an empty object:import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({});call: string in -> string out​The simplest way to use an LLM is the .call method: pass in a string, get a string completion.const res = await llm.call("Tell me a joke");console.log(res);// "Why did the chicken cross the road?\n\nTo get to the other side. "generate: batch calls, richer outputs​generate lets you can call the model with a list of strings, getting back a more complete response than just the text. This complete response can include things like multiple top responses and other LLM provider-specific information:const llmResult = await llm.generate(["Tell me a joke", "Tell me a poem"], ["Tell me a joke", "Tell me a poem"]);console.log(llmResult.generations.length)// 30console.log(llmResult.generations[0]);/* [ { text: "\n\nQ: What did the fish say when it hit the wall?\nA: Dam!
4e9727215e95-297
", generationInfo: { finishReason: "stop", logprobs: null } } ]*/console.log(llmResult.generations[1]);/* [ { text: "\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you. ", generationInfo: { finishReason: "stop", logprobs: null } } ]*/You can also access provider specific information that is returned. This information is NOT standardized across providers.console.log(llmResult.llmOutput);/* { tokenUsage: { completionTokens: 46, promptTokens: 8, totalTokens: 54 } }*/Here's an example with additional parameters, which sets -1 for max_tokens to turn on token size calculations:import { OpenAI } from "langchain/llms/openai";export const run = async () => { const model = new OpenAI({ // customize openai model that's used, `text-davinci-003` is the default modelName: "text-ada-001", // `max_tokens` supports a magic -1 param where the max token length for the specified modelName // is calculated and included in the request to OpenAI as the `max_tokens` param maxTokens: -1, // use `modelKwargs` to pass params directly to the openai call // note that they use snake_case instead of camelCase modelKwargs: { user: "me", }, // for additional logging for debugging purposes verbose: true, }); const resA = await model.call( "What would be a good company name a company that makes colorful socks?"
4e9727215e95-298
); console.log({ resA }); // { resA: '\n\nSocktastic Colors' }};API Reference:OpenAI from langchain/llms/openaiAdvanced​This section is for users who want a deeper technical understanding of how LangChain works. If you are just getting started, you can skip this section.Both LLMs and Chat Models are built on top of the BaseLanguageModel class. This class provides a common interface for all models, and allows us to easily swap out models in chains without changing the rest of the code.The BaseLanguageModel class has two abstract methods: generatePrompt and getNumTokens, which are implemented by BaseChatModel and BaseLLM respectively.BaseLLM is a subclass of BaseLanguageModel that provides a common interface for LLMs while BaseChatModel is a subclass of BaseLanguageModel that provides a common interface for chat models. Large Language Models (LLMs) are a core component of LangChain. LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs. For more detailed documentation check out our: How-to guides: Walkthroughs of core functionality, like streaming, async, etc. Integrations: How to use different LLM providers (OpenAI, Anthropic, etc.) There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc) - the LLM class is designed to provide a standard interface for all of them. In this walkthrough we'll work with an OpenAI LLM wrapper, although the functionalities highlighted are generic for all LLM types. To start we'll need to install the official OpenAI package: npmYarnpnpmnpm install -S openaiyarn add openaipnpm add openai npm install -S openaiyarn add openaipnpm add openai npm install -S openai
4e9727215e95-299
npm install -S openai yarn add openai pnpm add openai Accessing the API requires an API key, which you can get by creating an account and heading here. Once we have a key we'll want to set it as an environment variable by running: otherwise you can initialize with an empty object: import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({}); The simplest way to use an LLM is the .call method: pass in a string, get a string completion. const res = await llm.call("Tell me a joke");console.log(res);// "Why did the chicken cross the road?\n\nTo get to the other side." generate lets you can call the model with a list of strings, getting back a more complete response than just the text. This complete response can include things like multiple top responses and other LLM provider-specific information: const llmResult = await llm.generate(["Tell me a joke", "Tell me a poem"], ["Tell me a joke", "Tell me a poem"]);console.log(llmResult.generations.length)// 30console.log(llmResult.generations[0]);/* [ { text: "\n\nQ: What did the fish say when it hit the wall?\nA: Dam! ", generationInfo: { finishReason: "stop", logprobs: null } } ]*/console.log(llmResult.generations[1]);/* [ { text: "\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you. ", generationInfo: { finishReason: "stop", logprobs: null } } ]*/ You can also access provider specific information that is returned. This information is NOT standardized across providers.