OpenAI Function calling

Function calling is a useful way to get structured output from an LLM for a wide range of purposes.
By providing schemas for "functions", the LLM will choose one and attempt to output a response matching that schema.
Though the name implies that the LLM is actually running code and calling a function, it is more accurate to say that the LLM is
populating parameters that match the schema for the arguments a hypothetical function would take. We can use these
structured responses for whatever we'd like!
Function calling serves as a building block for several other popular features in LangChain, including the OpenAI Functions agent
and structured output chain. In addition to these more specific use cases, you can also attach function parameters
directly to the model and call it, as shown below.
Usage

OpenAI requires parameter schemas in the format below, where parameters must be valid JSON Schema.
Specifying the function_call parameter will force the model to return a response using the specified function.
This is useful if you have multiple schemas you'd like the model to pick from.

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";

const extractionFunctionSchema = {
  name: "extractor",
  description: "Extracts fields from the input.",
  parameters: {
    type: "object",
    properties: {
      tone: {
        type: "string",
        enum: ["positive", "negative"],
        description: "The overall tone of the input",
      },
      word_count: {
        type: "number",
        description: "The number of words in the input",
      },
      chat_response: {
        type: "string",
        description: "A response to the human's input",
      },
    },
    required: ["tone", "word_count", "chat_response"],
  },
};

// Bind function arguments to the model.
// All subsequent invoke calls will use the bound parameters.
// "functions.parameters" must be formatted as JSON Schema.
// Omit "function_call" if you want the model to choose a function to call.
const model = new ChatOpenAI({
  modelName: "gpt-4",
}).bind({
  functions: [extractionFunctionSchema],
  function_call: { name: "extractor" },
});

const result = await model.invoke([new HumanMessage("What a beautiful day!")]);

console.log(result);
/*
AIMessage {
  content: '',
  name: undefined,
  additional_kwargs: {
    function_call: {
      name: 'extractor',
      arguments: '{\n' +
        '  "tone": "positive",\n' +
        '  "word_count": 4,\n' +
        '  "chat_response": "It certainly is a beautiful day!"\n' +
        '}'
    }
  }
}
*/

// Alternatively, you can pass function call arguments as an additional argument as a one-off:
/*
const model = new ChatOpenAI({
  modelName: "gpt-4",
});
const result = await model.call(
  [new HumanMessage("What a beautiful day!")],
  {
    functions: [extractionFunctionSchema],
    function_call: { name: "extractor" },
  }
);
*/
```

API Reference: ChatOpenAI from langchain/chat_models/openai, HumanMessage from langchain/schema
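The arguments in `additional_kwargs.function_call` come back as a JSON string, so a common next step is to parse them into a plain object. The following is a minimal sketch, not part of the original example, assuming the `result` message returned by the invoke call above:

```typescript
// Hypothetical follow-up: parse the function call arguments returned above.
const functionCall = result.additional_kwargs.function_call;

if (functionCall) {
  // The model returns arguments as a JSON string matching the provided schema.
  // Note that JSON.parse can throw if the model produced malformed JSON.
  const parsed = JSON.parse(functionCall.arguments);
  console.log(parsed.tone); // e.g. "positive"
  console.log(parsed.word_count); // e.g. 4
  console.log(parsed.chat_response); // e.g. "It certainly is a beautiful day!"
}
```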
Usage with Zod

An alternative way to declare function schema is to use the Zod schema library with the zod-to-json-schema utility package to translate it:

npm: npm install zod zod-to-json-schema
Yarn: yarn add zod zod-to-json-schema
pnpm: pnpm add zod zod-to-json-schema

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const extractionFunctionZodSchema = z.object({
  tone: z
    .enum(["positive", "negative"])
    .describe("The overall tone of the input"),
  entity: z.string().describe("The entity mentioned in the input"),
  word_count: z.number().describe("The number of words in the input"),
  chat_response: z.string().describe("A response to the human's input"),
  final_punctuation: z
    .optional(z.string())
    .describe("The final punctuation mark in the input, if any."),
});

// Bind function arguments to the model.
// "functions.parameters" must be formatted as JSON Schema.
// We translate the above Zod schema into JSON Schema using the "zodToJsonSchema" package.
// Omit "function_call" if you want the model to choose a function to call.
const model = new ChatOpenAI({
  modelName: "gpt-4",
}).bind({
  functions: [
    {
      name: "extractor",
      description: "Extracts fields from the input.",
      parameters: zodToJsonSchema(extractionFunctionZodSchema),
    },
  ],
  function_call: { name: "extractor" },
});

const result = await model.invoke([new HumanMessage("What a beautiful day!")]);

console.log(result);
/*
AIMessage {
  content: '',
  name: undefined,
  additional_kwargs: {
    function_call: {
      name: 'extractor',
      arguments: '{\n' +
        '  "tone": "positive",\n' +
        '  "entity": "day",\n' +
        '  "word_count": 4,\n' +
        '  "chat_response": "It certainly is a gorgeous day!",\n' +
        '  "final_punctuation": "!"\n' +
        '}'
    }
  }
}
*/
```

API Reference: ChatOpenAI from langchain/chat_models/openai, HumanMessage from langchain/schema
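Since the function parameters came from a Zod schema, you can also run the parsed arguments back through that schema to validate them at runtime. A minimal sketch, assuming the `result` and `extractionFunctionZodSchema` from the example above:

```typescript
// Hypothetical follow-up: validate the model's arguments with the original Zod schema.
const functionCall = result.additional_kwargs.function_call;

if (functionCall) {
  // .parse() throws if the arguments don't match the schema;
  // use .safeParse() instead if you prefer a non-throwing result object.
  const validated = extractionFunctionZodSchema.parse(
    JSON.parse(functionCall.arguments)
  );
  console.log(validated.entity); // e.g. "day"
  console.log(validated.final_punctuation); // e.g. "!"
}
```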
LLMChain
You can use the existing LLMChain in a very similar way to before - provide a prompt and a model.
4e9727215e95-517 | import { ChatOpenAI } from "langchain/chat_models/openai";import { LLMChain } from "langchain/chains";import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}. ";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate)const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt])const chat = new ChatOpenAI({ temperature: 0,});const chain = new LLMChain({ llm: chat, prompt: chatPrompt,});const result = await chain.call({ input_language: "English", output_language: "French", text: "I love programming",});// { text: "J'adore programmer" }
Prompts

Prompts for Chat models are built around messages, instead of just plain text.

You can make use of templating by using a ChatPromptTemplate from one or more MessagePromptTemplates, then using ChatPromptTemplate's formatPrompt method.

For convenience, there is also a fromTemplate method exposed on the template. If you were to use this template, this is what it would look like:

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { LLMChain } from "langchain/chains";
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
} from "langchain/prompts";

const template =
  "You are a helpful assistant that translates {input_language} to {output_language}.";
const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);
const humanTemplate = "{text}";
const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);

const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  systemMessagePrompt,
  humanMessagePrompt,
]);

const chat = new ChatOpenAI({
  temperature: 0,
});

const chain = new LLMChain({
  llm: chat,
  prompt: chatPrompt,
});

const result = await chain.call({
  input_language: "English",
  output_language: "French",
  text: "I love programming",
});
// { text: "J'adore programmer" }
```
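If you just want to see the messages the template produces, without calling a model, you can format it directly. A minimal sketch, assuming the `chatPrompt` defined above and the `formatMessages` method available on chat prompt templates:

```typescript
// Hypothetical inspection step: render the template into concrete chat messages.
const messages = await chatPrompt.formatMessages({
  input_language: "English",
  output_language: "French",
  text: "I love programming",
});
console.log(messages);
// e.g. [ SystemMessage { ... }, HumanMessage { text: "I love programming" } ]
```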
Streaming

Some Chat models provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated.
4e9727215e95-528 | import { ChatOpenAI } from "langchain/chat_models/openai";import { HumanMessage } from "langchain/schema";const chat = new ChatOpenAI({ maxTokens: 25, streaming: true,});const response = await chat.call([new HumanMessage("Tell me a joke. ")], { callbacks: [ { handleLLMNewToken(token: string) { console.log({ token }); }, }, ],});console.log(response);// { token: '' }// { token: '\n\n' }// { token: 'Why' }// { token: ' don' }// { token: "'t" }// { token: ' scientists' }// { token: ' trust' }// { token: ' atoms' }// { token: '?\n\n' }// { token: 'Because' }// { token: ' they' }// { token: ' make' }// { token: ' up' }// { token: ' everything' }// { token: '.' }// { token: '' }// AIMessage {// text: "\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything. "// }
Subscribing to events

Especially when using an agent, there can be a lot of back-and-forth going on behind the scenes as a Chat Model processes a prompt. For agents, the response object contains an intermediateSteps object that you can print to see an overview of the steps it took to get there. If that's not enough and you want to see every exchange with the Chat Model, you can pass callbacks to the Chat Model for custom logging (or anything else you want to do) as the model goes through the steps:
4e9727215e95-538 | import { HumanMessage, LLMResult } from "langchain/schema";import { ChatOpenAI } from "langchain/chat_models/openai";import { Serialized } from "langchain/load/serializable";// We can pass in a list of CallbackHandlers to the LLM constructor to get callbacks for various events.const model = new ChatOpenAI({ callbacks: [ { handleLLMStart: async (llm: Serialized, prompts: string[]) => { console.log(JSON.stringify(llm, null, 2)); console.log(JSON.stringify(prompts, null, 2)); }, handleLLMEnd: async (output: LLMResult) => { console.log(JSON.stringify(output, null, 2)); }, handleLLMError: async (err: Error) => { console.error(err); }, }, ],});await model.call([ new HumanMessage( "What is a good name for a company that makes colorful socks?" ),]);/*{ "name": "openai"}[ "Human: What is a good name for a company that makes colorful socks? "]{ "generations": [ [ { "text": "Rainbow Soles", "message": { "text": "Rainbow Soles" } } ] ], "llmOutput": { "tokenUsage": { "completionTokens": 4, "promptTokens": 21, "totalTokens": 25 } }}*/
API Reference:HumanMessage from langchain/schemaLLMResult from langchain/schemaChatOpenAI from langchain/chat_models/openaiSerialized from langchain/load/serializable
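Callbacks don't have to be attached in the constructor; as in the streaming example earlier, you can also pass them for a single call. A minimal sketch:

```typescript
import { LLMResult } from "langchain/schema";
import { HumanMessage } from "langchain/schema";
import { ChatOpenAI } from "langchain/chat_models/openai";

const model = new ChatOpenAI({});

// Pass callbacks only for this call instead of binding them to the model.
const response = await model.call(
  [
    new HumanMessage(
      "What is a good name for a company that makes colorful socks?"
    ),
  ],
  {
    callbacks: [
      {
        handleLLMEnd: async (output: LLMResult) => {
          console.log(JSON.stringify(output, null, 2));
        },
      },
    ],
  }
);
```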
Adding a timeout

By default, LangChain will wait indefinitely for a response from the model provider. If you want to add a timeout, you can pass a timeout option, in milliseconds, when you call the model. For example, for OpenAI:
4e9727215e95-542 | import { ChatOpenAI } from "langchain/chat_models/openai";import { HumanMessage } from "langchain/schema";const chat = new ChatOpenAI({ temperature: 1 });const response = await chat.call( [ new HumanMessage( "What is a good name for a company that makes colorful socks?" ), ], { timeout: 1000 } // 1s timeout);console.log(response);// AIMessage { text: '\n\nRainbow Sox Co.' }
ChatAnthropic
LangChain supports Anthropic's Claude family of chat models. You can initialize an instance like this:
```typescript
import { ChatAnthropic } from "langchain/chat_models/anthropic";

const model = new ChatAnthropic({
  temperature: 0.9,
  anthropicApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.ANTHROPIC_API_KEY
});
```
API Reference: ChatAnthropic from langchain/chat_models/anthropic
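Once initialized, the model is called like any other chat model in LangChain. A minimal sketch, assuming the `model` instance above and a valid API key (the response text is illustrative):

```typescript
import { HumanMessage } from "langchain/schema";

const response = await model.call([
  new HumanMessage(
    "What is a good name for a company that makes colorful socks?"
  ),
]);
console.log(response);
// e.g. AIMessage { text: "Here are some name ideas for a colorful sock company: ..." }
```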
Azure ChatOpenAI
ModulesModel I/OLanguage modelsChat modelsIntegrationsAzure OpenAIAzure ChatOpenAIYou can also use the ChatOpenAI class to access OpenAI instances hosted on Azure.For example, if your Azure instance is hosted under https://{MY_INSTANCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}, you |
4e9727215e95-549 | could initialize your instance like this:import { ChatOpenAI } from "langchain/chat_models/openai";const model = new ChatOpenAI({ temperature: 0.9, azureOpenAIApiKey: "SOME_SECRET_VALUE", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY azureOpenAIApiVersion: "YOUR-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION azureOpenAIApiInstanceName: "{MY_INSTANCE_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME});API Reference:ChatOpenAI from langchain/chat_models/openaiIf your instance is hosted under a domain other than the default openai.azure.com, you'll need to use the alternate AZURE_OPENAI_BASE_PATH environemnt variable. |
4e9727215e95-550 | For example, here's how you would connect to the domain https://westeurope.api.microsoft.com/openai/deployments/{DEPLOYMENT_NAME}:import { ChatOpenAI } from "langchain/chat_models/openai";const model = new ChatOpenAI({ temperature: 0.9, azureOpenAIApiKey: "SOME_SECRET_VALUE", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY azureOpenAIApiVersion: "YOUR-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME azureOpenAIBasePath: "https://westeurope.api.microsoft.com/openai/deployments", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH});API Reference:ChatOpenAI from langchain/chat_models/openaiPreviousAnthropicNextBaidu Wenxin
Azure ChatOpenAIYou can also use the ChatOpenAI class to access OpenAI instances hosted on Azure.For example, if your Azure instance is hosted under https://{MY_INSTANCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}, you |
4e9727215e95-551 | could initialize your instance like this:import { ChatOpenAI } from "langchain/chat_models/openai";const model = new ChatOpenAI({ temperature: 0.9, azureOpenAIApiKey: "SOME_SECRET_VALUE", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY azureOpenAIApiVersion: "YOUR-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION azureOpenAIApiInstanceName: "{MY_INSTANCE_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME});API Reference:ChatOpenAI from langchain/chat_models/openaiIf your instance is hosted under a domain other than the default openai.azure.com, you'll need to use the alternate AZURE_OPENAI_BASE_PATH environment variable.
4e9727215e95-552 | For example, here's how you would connect to the domain https://westeurope.api.microsoft.com/openai/deployments/{DEPLOYMENT_NAME}:import { ChatOpenAI } from "langchain/chat_models/openai";const model = new ChatOpenAI({ temperature: 0.9, azureOpenAIApiKey: "SOME_SECRET_VALUE", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY azureOpenAIApiVersion: "YOUR-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME azureOpenAIBasePath: "https://westeurope.api.microsoft.com/openai/deployments", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH});API Reference:ChatOpenAI from langchain/chat_models/openai
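Once the Azure-specific fields (or their environment variables) are in place, the model is used exactly like a regular ChatOpenAI instance. A minimal sketch, assuming the AZURE_OPENAI_API_* environment variables shown above are already exported so that no keys are passed in code; the prompt is illustrative:
import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";
// In Node.js the Azure key, API version, instance name and deployment name are
// read from the AZURE_OPENAI_* environment variables described above.
const model = new ChatOpenAI({ temperature: 0 });
const res = await model.call([new HumanMessage("Say hello from Azure!")]);
console.log(res.content);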
Page Title: ChatBaiduWenxin | 🦜️🔗 Langchain
Paragraphs: |
4e9727215e95-559 | ChatBaiduWenxinLangChain.js supports Baidu's ERNIE-bot family of models. Here's an example:import { ChatBaiduWenxin } from "langchain/chat_models/baiduwenxin";import { HumanMessage } from "langchain/schema";// Default model is ERNIE-Bot-turboconst ernieTurbo = new ChatBaiduWenxin({ baiduApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.BAIDU_API_KEY baiduSecretKey: "YOUR-SECRET-KEY", // In Node.js defaults to process.env.BAIDU_SECRET_KEY});// Use ERNIE-Botconst ernie = new ChatBaiduWenxin({ modelName: "ERNIE-Bot", temperature: 1, // Only ERNIE-Bot supports temperature baiduApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.BAIDU_API_KEY baiduSecretKey: "YOUR-SECRET-KEY", // In Node.js defaults to process.env.BAIDU_SECRET_KEY});const messages = [new HumanMessage("Hello")];let res = await ernieTurbo.call(messages);/*AIChatMessage { text: 'Hello! How may I assist you today? ', name: undefined, additional_kwargs: {} }}*/res = await ernie.call(messages);/*AIChatMessage { text: 'Hello! How may I assist you today? ', name: undefined, additional_kwargs: {} }}*/API Reference:ChatBaiduWenxin from langchain/chat_models/baiduwenxinHumanMessage from langchain/schema
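Because call accepts a list of messages, earlier turns can be passed back in to continue a conversation. A small hedged sketch building on the example above; the follow-up question and the idea of replaying prior turns are illustrative, not taken from the page:
import { ChatBaiduWenxin } from "langchain/chat_models/baiduwenxin";
import { HumanMessage } from "langchain/schema";
// Keys default to process.env.BAIDU_API_KEY / process.env.BAIDU_SECRET_KEY in Node.js.
const chat = new ChatBaiduWenxin({});
const firstTurn = await chat.call([new HumanMessage("Hello")]);
// Replay the earlier turns so the model sees the conversation so far.
const secondTurn = await chat.call([
  new HumanMessage("Hello"),
  firstTurn,
  new HumanMessage("Can you repeat your last answer more briefly?"),
]);
console.log(secondTurn.text);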
Page Title: ChatGooglePaLM | 🦜️🔗 Langchain
Paragraphs: |
ChatGooglePaLMThe Google PaLM API can be integrated by first
installing the required packages with npm, Yarn, or pnpm:
npm install google-auth-library @google-ai/generativelanguage
yarn add google-auth-library @google-ai/generativelanguage
pnpm add google-auth-library @google-ai/generativelanguage
Create an API key from Google MakerSuite. You can then set
4e9727215e95-566 | the key as GOOGLE_PALM_API_KEY environment variable or pass it as apiKey parameter while instantiating
the model.import { ChatGooglePaLM } from "langchain/chat_models/googlepalm";import { AIMessage, HumanMessage, SystemMessage } from "langchain/schema";export const run = async () => { const model = new ChatGooglePaLM({ apiKey: "<YOUR API KEY>", // or set it in environment variable as `GOOGLE_PALM_API_KEY` temperature: 0.7, // OPTIONAL modelName: "models/chat-bison-001", // OPTIONAL topK: 40, // OPTIONAL topP: 3, // OPTIONAL examples: [ // OPTIONAL { input: new HumanMessage("What is your favorite sock color? "), output: new AIMessage("My favorite sock color be arrrr-ange! "), }, ], }); // ask questions const questions = [ new SystemMessage( "You are a funny assistant that answers in pirate language." ), new HumanMessage("What is your favorite food? "), ]; // You can also use the model as part of a chain const res = await model.call(questions); console.log({ res });};API Reference:ChatGooglePaLM from langchain/chat_models/googlepalmAIMessage from langchain/schemaHumanMessage from langchain/schemaSystemMessage from langchain/schema |
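As noted above, the apiKey parameter can be dropped when GOOGLE_PALM_API_KEY is set in the environment. A minimal hedged sketch of that variant; the prompt text is illustrative:
import { ChatGooglePaLM } from "langchain/chat_models/googlepalm";
import { HumanMessage } from "langchain/schema";
// Assumes process.env.GOOGLE_PALM_API_KEY is set, so no apiKey is passed here.
const model = new ChatGooglePaLM({ temperature: 0.7 });
const res = await model.call([new HumanMessage("Suggest a name for a sock store.")]);
console.log(res);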
Page Title: ChatGoogleVertexAI | 🦜️🔗 Langchain
Paragraphs: |
ChatGoogleVertexAIThe Vertex AI implementation is meant to be used in Node.js and not
directly from a browser, since it requires a service account to use.Before running this code, you should make sure the Vertex AI API is
enabled for the relevant project and that you've authenticated to
Google Cloud using one of these methods:
- You are logged into an account (using gcloud auth application-default login) permitted to that project.
- You are running on a machine using a service account that is permitted to the project.
- You have downloaded the credentials for a service account that is permitted to the project and set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of this file.
Install the required package with npm, Yarn, or pnpm:
npm install google-auth-library
yarn add google-auth-library
pnpm add google-auth-library
The ChatGoogleVertexAI class works just like other chat-based LLMs,
4e9727215e95-574 | with a few exceptions:
- The first SystemMessage passed in is mapped to the "context" parameter that the PaLM model expects. No other SystemMessages are allowed.
- After the first SystemMessage, there must be an odd number of messages, representing a conversation between a human and the model.
- Human messages must alternate with AI messages.
import { ChatGoogleVertexAI } from "langchain/chat_models/googlevertexai";const model = new ChatGoogleVertexAI({ temperature: 0.7,});
API Reference:ChatGoogleVertexAI from langchain/chat_models/googlevertexai
There is also an optional examples constructor parameter that can help the model understand what an appropriate response
looks like.import { ChatGoogleVertexAI } from "langchain/chat_models/googlevertexai";import { AIMessage, HumanMessage, SystemMessage } from "langchain/schema";export const run = async () => { const examples = [ { input: new HumanMessage("What is your favorite sock color? "), output: new AIMessage("My favorite sock color be arrrr-ange! "), }, ]; const model = new ChatGoogleVertexAI({ temperature: 0.7, examples, }); const questions = [ new SystemMessage( "You are a funny assistant that answers in pirate language." ), new HumanMessage("What is your favorite food? "), ]; // You can also use the model as part of a chain const res = await model.call(questions); console.log({ res });};API Reference:ChatGoogleVertexAI from langchain/chat_models/googlevertexaiAIMessage from langchain/schemaHumanMessage from langchain/schemaSystemMessage from langchain/schema
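Putting the message-ordering rules above together, a valid multi-turn call looks roughly like the sketch below. The conversation content is invented for illustration; the shape (one leading SystemMessage, then an odd number of alternating human and AI messages) is what matters:
import { ChatGoogleVertexAI } from "langchain/chat_models/googlevertexai";
import { AIMessage, HumanMessage, SystemMessage } from "langchain/schema";
const model = new ChatGoogleVertexAI({ temperature: 0.7 });
// One SystemMessage (becomes the "context"), then human/AI turns alternating,
// ending on the human turn the model should answer: 3 messages, an odd number.
const res = await model.call([
  new SystemMessage("You are a concise assistant."),
  new HumanMessage("What is the capital of France?"),
  new AIMessage("Paris."),
  new HumanMessage("And of Italy?"),
]);
console.log(res);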
Page Title: ChatOllama | 🦜️🔗 Langchain
Paragraphs: |
4e9727215e95-584 | ChatOllamaOllama allows you to run open-source large language models, such as Llama 2, locally.Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage.This example goes over how to use LangChain to interact with an Ollama-run Llama 2 7b instance as a chat model. |
4e9727215e95-585 | For a complete list of supported models and model variants, see the Ollama model library.SetupFollow these instructions to set up and run a local Ollama instance.Usageimport { ChatOllama } from "langchain/chat_models/ollama";import { StringOutputParser } from "langchain/schema/output_parser";const model = new ChatOllama({ baseUrl: "http://localhost:11434", // Default value model: "llama2", // Default value});const stream = await model .pipe(new StringOutputParser()) .stream(`Translate "I love programming" into German.`);const chunks = [];for await (const chunk of stream) { chunks.push(chunk);}console.log(chunks.join(""));/* Thank you for your question! I'm happy to help. However, I must point out that the phrase "I love programming" is not grammatically correct in German. The word "love" does not have a direct translation in German, and it would be more appropriate to say "I enjoy programming" or "I am passionate about programming." In German, you can express your enthusiasm for something like this: * Ich möchte Programmieren (I want to program) * Ich mag Programmieren (I like to program) * Ich bin passioniert über Programmieren (I am passionate about programming) I hope this helps! Let me know if you have any other questions. */API Reference:ChatOllama from langchain/chat_models/ollamaStringOutputParser from langchain/schema/output_parser
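Streaming through a StringOutputParser, as above, is optional; the same model can also be called once for a complete response. A minimal hedged sketch, assuming a local Ollama server is running with the llama2 model pulled; the prompt is illustrative:
import { ChatOllama } from "langchain/chat_models/ollama";
import { HumanMessage } from "langchain/schema";
const model = new ChatOllama({
  baseUrl: "http://localhost:11434", // Default value
  model: "llama2", // Default value
});
// Single, non-streaming call that resolves to one chat message.
const res = await model.call([
  new HumanMessage(`Translate "I love programming" into German.`),
]);
console.log(res.content);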
Page Title: ChatOpenAI | 🦜️🔗 Langchain
Paragraphs: |
ChatOpenAIYou can use OpenAI's chat models as follows:import { ChatOpenAI } from "langchain/chat_models/openai";import { HumanMessage } from "langchain/schema";import { SerpAPI } from "langchain/tools";const model = new ChatOpenAI({ temperature: 0.9, openAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY});// You can also pass tools or functions to the model, learn more here// https://platform.openai.com/docs/guides/gpt/function-callingconst modelForFunctionCalling = new ChatOpenAI({ modelName: "gpt-4-0613", temperature: 0,});await modelForFunctionCalling.predictMessages( [new HumanMessage("What is the weather in New York? ")], { tools: [new SerpAPI()] } // Tools will be automatically formatted as functions in the OpenAI format);/*AIMessage { text: '', name: undefined, additional_kwargs: { function_call: { name: 'search', arguments: '{\n "input": "current weather in New York"\n}' } }}*/await modelForFunctionCalling.predictMessages( [new HumanMessage("What is the weather in New York? ")], { functions: [ { name: "get_current_weather", description: "Get the current weather in a given location", parameters: { type: "object", properties: { location: { type: "string", description: "The city and state, e.g. |
4e9727215e95-594 | San Francisco, CA", }, unit: { type: "string", enum: ["celsius", "fahrenheit"] }, }, required: ["location"], }, }, ], // You can set the `function_call` arg to force the model to use a function function_call: { name: "get_current_weather", }, });/*AIMessage { text: '', name: undefined, additional_kwargs: { function_call: { name: 'get_current_weather', arguments: '{\n "location": "New York"\n}' } }}*/API Reference:ChatOpenAI from langchain/chat_models/openaiHumanMessage from langchain/schemaSerpAPI from langchain/toolsIf you're part of an organization, you can set process.env.OPENAI_ORGANIZATION with your OpenAI organization id, or pass it in as organization when
initializing the model.
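For the organization setting mentioned above, here is a small hedged sketch of passing it at construction time; the id shown is a placeholder, and setting process.env.OPENAI_ORGANIZATION instead works the same way according to the page:
import { ChatOpenAI } from "langchain/chat_models/openai";
// "org-YOUR-ORGANIZATION-ID" is a placeholder organization id, not a real value.
const model = new ChatOpenAI({
  temperature: 0,
  organization: "org-YOUR-ORGANIZATION-ID",
});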
Page Title: PromptLayerChatOpenAI | 🦜️🔗 Langchain
4e9727215e95-597 | Paragraphs:
PromptLayerChatOpenAI: You can pass the optional returnPromptLayerId boolean to get a promptLayerRequestId back, as shown below. Here is an example of getting the PromptLayerChatOpenAI request ID:import { PromptLayerChatOpenAI } from "langchain/chat_models/openai";import { SystemMessage } from "langchain/schema";const chat = new PromptLayerChatOpenAI({ returnPromptLayerId: true,});const respA = await chat.generate([ [ new SystemMessage( "You are a helpful assistant that translates English to French." ), ],]);console.log(JSON.stringify(respA, null, 3));/* { "generations": [ [ { "text": "Bonjour! Je suis un assistant utile qui peut vous aider à traduire de l'anglais vers le français. Que puis-je faire pour vous aujourd'hui?", "message": { "type": "ai", "data": { "content": "Bonjour! Je suis un assistant utile qui peut vous aider à traduire de l'anglais vers le français. Que puis-je faire pour vous aujourd'hui?"
4e9727215e95-598 | } }, "generationInfo": { "promptLayerRequestId": 2300682 } } ] ], "llmOutput": { "tokenUsage": { "completionTokens": 35, "promptTokens": 19, "totalTokens": 54 } } }*/
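To do something with the returned id, you can read it back off the response object. The property path below is a sketch that simply mirrors the JSON printed above (generations[0][0].generationInfo), not a guaranteed API shape; adjust it if your version differs.

// Sketch: pull the PromptLayer request id out of the generate() response above.
// The path mirrors the printed JSON structure.
const promptLayerRequestId =
  respA.generations[0]?.[0]?.generationInfo?.promptLayerRequestId;

if (promptLayerRequestId !== undefined) {
  console.log(`PromptLayer request id: ${promptLayerRequestId}`);
  // e.g. use this id afterwards to attach metadata or scores in PromptLayer.
}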