Google Vertex AI
The Vertex AI implementation is meant to be used in Node.js and not directly in a browser, since it requires a service account to use.

Before running this code, make sure the Vertex AI API is enabled for the relevant project in your Google Cloud dashboard and that you have authenticated to Google Cloud using one of these methods:

- You are logged into an account permitted to use that project (using `gcloud auth application-default login`).
- You are running on a machine using a service account that is permitted to the project.
- You have downloaded the credentials for a service account that is permitted to the project and set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of this file.
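As a concrete illustration of the first and third options, here is a minimal shell sketch for a development machine (the key-file path is a placeholder):

```bash
# Option 1: authenticate with your own account via Application Default Credentials
gcloud auth application-default login

# Option 3: point the SDK at a service-account key file instead
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
```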
Install the google-auth-library package with npm, Yarn, or pnpm:

```bash
npm install google-auth-library
# or
yarn add google-auth-library
# or
pnpm add google-auth-library
```
```typescript
import { GoogleVertexAI } from "langchain/llms/googlevertexai";

/*
 * Before running this, you should make sure you have created a
 * Google Cloud Project that is permitted to the Vertex AI API.
 *
 * You will also need permission to access this project / API.
 * Typically, this is done in one of three ways:
 * - You are logged into an account permitted to that project.
 * - You are running this on a machine using a service account permitted to
 *   the project.
 * - The `GOOGLE_APPLICATION_CREDENTIALS` environment variable is set to the
 *   path of a credentials file for a service account permitted to the project.
 */
export const run = async () => {
  const model = new GoogleVertexAI({
    temperature: 0.7,
  });
  const res = await model.call(
    "What would be a good company name for a company that makes colorful socks?"
  );
  console.log({ res });
};
```
API Reference: GoogleVertexAI from langchain/llms/googlevertexai
Google also provides separate "Codey" models for code generation.

The "code-gecko" model is useful for code completion:
```typescript
import { GoogleVertexAI } from "langchain/llms/googlevertexai";

/*
 * As above, make sure the Vertex AI API is enabled for your Google Cloud
 * project and that you have authenticated with appropriate permissions.
 */
const model = new GoogleVertexAI({
  model: "code-gecko",
});
const res = await model.call("for (let co=0;");
console.log({ res });
```
While the "code-bison" model is better at larger code generation based on
a text prompt: |
4e9727215e95-413 | a text prompt:
```typescript
import { GoogleVertexAI } from "langchain/llms/googlevertexai";

/*
 * As above, make sure the Vertex AI API is enabled for your Google Cloud
 * project and that you have authenticated with appropriate permissions.
 */
const model = new GoogleVertexAI({
  model: "code-bison",
  maxOutputTokens: 2048,
});
const res = await model.call("A JavaScript function that counts from 1 to 10.");
console.log({ res });
```
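Like any other LangChain LLM, these Vertex AI models can also be composed with prompt templates and chains. The following is a minimal sketch (not from the original page) using the standard PromptTemplate and LLMChain helpers; the template text and input values are placeholders:

```typescript
import { GoogleVertexAI } from "langchain/llms/googlevertexai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";

// Reuse the code-generation model inside a simple chain.
const model = new GoogleVertexAI({ model: "code-bison", maxOutputTokens: 2048 });

const prompt = PromptTemplate.fromTemplate(
  "Write a {language} function that {task}."
);

const chain = new LLMChain({ llm: model, prompt });

const result = await chain.call({
  language: "TypeScript",
  task: "counts from 1 to 10",
});
console.log(result.text);
```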
HuggingFaceInference
Here's an example of calling a HuggingFaceInference model as an LLM:
Install the Hugging Face inference client with npm, Yarn, or pnpm:

```bash
npm install @huggingface/inference@1
# or
yarn add @huggingface/inference@1
# or
pnpm add @huggingface/inference@1
```
```typescript
import { HuggingFaceInference } from "langchain/llms/hf";

const model = new HuggingFaceInference({
  model: "gpt2",
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.HUGGINGFACEHUB_API_KEY
});
const res = await model.call("1 + 1 =");
console.log({ res });
```
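As the comment above notes, the API key can also come from the environment instead of being hard-coded. A minimal sketch, assuming you run the script from a shell where the variable is exported (the key value is a placeholder):

```bash
export HUGGINGFACEHUB_API_KEY="YOUR-API-KEY"
```

With the variable set, the apiKey option can be omitted from the constructor.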
Ollama
Ollama allows you to run open-source large language models, such as Llama 2, locally.
Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage.
This example goes over how to use LangChain to interact with an Ollama-run Llama 2 7b instance.
For a complete list of supported models and model variants, see the Ollama model library.
Setup

Follow these instructions to set up and run a local Ollama instance.
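As a concrete illustration of that setup (a sketch, assuming the Ollama CLI is already installed), you pull a model once and the local server then exposes it on port 11434. The custom Modelfile shown is a hypothetical example and is only needed if you want to package your own variant:

```bash
# Download the Llama 2 weights and start the local server.
ollama pull llama2
ollama serve

# Optional: define a custom variant via a Modelfile (hypothetical example).
cat > Modelfile <<'EOF'
FROM llama2
PARAMETER temperature 0.7
SYSTEM "You are a concise assistant."
EOF
ollama create my-llama2 -f Modelfile
```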
Usage

```typescript
import { Ollama } from "langchain/llms/ollama";

const ollama = new Ollama({
  baseUrl: "http://localhost:11434", // Default value
  model: "llama2", // Default value
});

const stream = await ollama.stream(
  `Translate "I love programming" into German.`
);

const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
}

console.log(chunks.join(""));

/*
  I'm glad to help! "I love programming" can be translated to German as
  "Ich liebe Programmieren."

  It's important to note that the translation of "I love" in German is
  "ich liebe," which is a more formal and polite way of saying "I love."
  In informal situations, people might use "mag ich" or "möchte ich" instead.

  Additionally, the word "Programmieren" is the correct term for "programming"
  in German. It's a combination of two words: "Programm" and "-ieren," which
  means "to do something." So, the full translation of "I love programming"
  would be "Ich liebe Programmieren."
*/
```
API Reference: Ollama from langchain/llms/ollama
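If you don't need token-by-token streaming, the same instance can be invoked like any other LangChain LLM. A minimal sketch reusing the `ollama` object from the example above:

```typescript
// Single-shot call instead of streaming the response.
const res = await ollama.call(`Translate "I love programming" into German.`);
console.log({ res });
```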
OpenAI
Here's how you can initialize an OpenAI LLM instance:
```typescript
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({
  modelName: "text-davinci-003", // Defaults to "text-davinci-003" if no model provided.
  temperature: 0.9,
  openAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
});
const res = await model.call(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```
If you're part of an organization, you can set process.env.OPENAI_ORGANIZATION to your OpenAI organization ID, or pass it in as organization when initializing the model, as shown below.
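A minimal sketch of the second option; the organization ID shown is a placeholder:

```typescript
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({
  temperature: 0.9,
  openAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
  organization: "YOUR-ORGANIZATION-ID", // In Node.js defaults to process.env.OPENAI_ORGANIZATION
});
```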
PromptLayer OpenAI
4e9727215e95-442 | LangChain integrates with PromptLayer for logging and debugging prompts and responses. To add support for PromptLayer:
```typescript
import { PromptLayerOpenAI } from "langchain/llms/openai";

const model = new PromptLayerOpenAI({
  temperature: 0.9,
  openAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
  promptLayerApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.PROMPTLAYER_API_KEY
});
const res = await model.call(
  "What would be a good company name for a company that makes colorful socks?"
);
```
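Alternatively, per step 2 above, the key can be supplied through the environment rather than the constructor. A minimal sketch (the key value is a placeholder):

```bash
export PROMPTLAYER_API_KEY="YOUR-API-KEY"
```

With the variable set, promptLayerApiKey can be omitted when constructing the model.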
Azure PromptLayerOpenAI

LangChain also integrates with PromptLayer for Azure-hosted OpenAI instances:
```typescript
import { PromptLayerOpenAI } from "langchain/llms/openai";

const model = new PromptLayerOpenAI({
  temperature: 0.9,
  azureOpenAIApiKey: "YOUR-AOAI-API-KEY", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
  azureOpenAIApiInstanceName: "YOUR-AOAI-INSTANCE-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
  azureOpenAIApiDeploymentName: "YOUR-AOAI-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
  azureOpenAIApiCompletionsDeploymentName: "YOUR-AOAI-COMPLETIONS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME
  azureOpenAIApiEmbeddingsDeploymentName: "YOUR-AOAI-EMBEDDINGS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
  azureOpenAIApiVersion: "YOUR-AOAI-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
  azureOpenAIBasePath: "YOUR-AZURE-OPENAI-BASE-PATH", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH
  promptLayerApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.PROMPTLAYER_API_KEY
});
const res = await model.call(
  "What would be a good company name for a company that makes colorful socks?"
);
```
The request and the response will be logged in the PromptLayer dashboard.
Note: In streaming mode, PromptLayer will not log the response.
RaycastAI
Note: This is a community-built integration and is not officially supported by Raycast.
You can utilize LangChain's RaycastAI class within the Raycast environment to enhance your Raycast extension with LangChain's capabilities.
The RaycastAI class is only available in the Raycast environment and only to Raycast Pro users as of August 2023. You may check how to create an extension for Raycast here. |
There is a rate limit of approximately 10 requests per minute for each Raycast Pro user. If you exceed this limit, you will receive an error. You can set your desired requests-per-minute limit by passing rateLimitPerMinute to the RaycastAI constructor as shown in the example, as this rate limit may change in the future.
import { RaycastAI } from "langchain/llms/raycast";
import { showHUD } from "@raycast/api";
import { Tool } from "langchain/tools";
import { initializeAgentExecutorWithOptions } from "langchain/agents";

const model = new RaycastAI({
  rateLimitPerMinute: 10, // It is 10 by default so you can omit this line
  model: "gpt-3.5-turbo",
  creativity: 0, // `creativity` is a term used by Raycast which is equivalent to `temperature` in some other LLMs
});

const tools: Tool[] = [
  // Add your tools here
];

export default async function main() {
  // Initialize the agent executor with the RaycastAI model
  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "chat-conversational-react-description",
  });

  const input = `Describe my today's schedule as Gabriel Garcia Marquez would describe it`;

  const answer = await executor.call({ input });

  await showHUD(answer.output);
}
API Reference: RaycastAI from langchain/llms/raycast, Tool from langchain/tools, initializeAgentExecutorWithOptions from langchain/agents
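If you don't need tools or an agent, a minimal sketch (assuming the same RaycastAI constructor options as above; the prompt is only an illustration) is to call the model directly, since it implements the standard LangChain LLM interface:

import { RaycastAI } from "langchain/llms/raycast";
import { showHUD } from "@raycast/api";

export default async function main() {
  const model = new RaycastAI({ model: "gpt-3.5-turbo", creativity: 0 });

  // Direct completion call, no tools or agent involved.
  const res = await model.call("Summarize today's weather in one upbeat sentence.");

  await showHUD(res);
}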
Page Title: Replicate | 🦜️🔗 Langchain
Here's an example of calling a Replicate model as an LLM:
npm install replicate
yarn add replicate
pnpm add replicate
import { Replicate } from "langchain/llms/replicate";

const model = new Replicate({
  model:
    "a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
});

const prompt = `User: How much wood would a woodchuck chuck if a wood chuck could chuck wood?
Assistant:`;

const res = await model.call(prompt);

console.log({ res });
/*
  {
    res: "I'm happy to help! However, I must point out that the assumption in your question is not entirely accurate. " +
      "Woodchucks, also known as groundhogs, do not actually chuck wood. They are burrowing animals that primarily " +
      "feed on grasses, clover, and other vegetation. They do not have the physical ability to chuck wood.\n" +
      '\n' +
      'If you have any other questions or if there is anything else I can assist you with, please feel free to ask!'
  }
*/
API Reference: Replicate from langchain/llms/replicate
You can run other models through Replicate by changing the model parameter.
You can find a full list of models on Replicate's website.
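As a rough sketch of swapping models (the identifier below is a placeholder rather than a real model version; substitute one from Replicate's catalog), only the model string needs to change:

import { Replicate } from "langchain/llms/replicate";

// Placeholder identifier: replace "owner/model-name" and the version hash
// with a real entry from Replicate's model catalog.
const otherModel = new Replicate({
  model: "owner/model-name:<version-hash>",
});

const res = await otherModel.call("Write a two-line poem about woodchucks.");
console.log(res);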
Page Title: Chat models | 🦜️🔗 Langchain
Chat models are a variation on language models.
While chat models use language models under the hood, the interface they expose is a bit different.
Rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
The following sections of documentation are provided:
How-to guides: Walkthroughs of core functionality, like streaming, creating chat prompts, etc. |
Integrations: How to use different chat model providers (OpenAI, Anthropic, etc).
Setup
To start we'll need to install the official OpenAI package:

npm install -S openai
yarn add openai
pnpm add openai

Accessing the API requires an API key, which you can get by creating an account and heading here. Once we have a key we'll want to set it as an environment variable by running:

export OPENAI_API_KEY="..."
If you'd prefer not to set an environment variable you can pass the key in directly via the openAIApiKey parameter when initializing the ChatOpenAI class:
import { ChatOpenAI } from "langchain/chat_models/openai";

const chat = new ChatOpenAI({
  openAIApiKey: "YOUR_KEY_HERE",
});
Otherwise, you can initialize it with an empty object:
import { ChatOpenAI } from "langchain/chat_models/openai";

const chat = new ChatOpenAI({});
Messages
The chat model interface is based around messages rather than raw text.
The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, FunctionMessage, and ChatMessage -- ChatMessage takes in an arbitrary role parameter, as sketched below. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.
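As a minimal sketch (the "expert" role below is arbitrary and purely illustrative), the common message types plus a ChatMessage with a custom role can be constructed like this:

import {
  AIMessage,
  ChatMessage,
  HumanMessage,
  SystemMessage,
} from "langchain/schema";

// The common message types only need content...
const system = new SystemMessage("You are a terse assistant.");
const human = new HumanMessage("Name a sock color.");
const ai = new AIMessage("Chartreuse.");

// ...while ChatMessage also carries an arbitrary role string.
const custom = new ChatMessage({ content: "Consider wool blends.", role: "expert" });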
call: Messages in -> message out
You can get chat completions by passing one or more messages to the chat model. The response will be a message.
import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";

const chat = new ChatOpenAI();

// Pass in a list of messages to `call` to start a conversation.
// In this simple example, we only pass in one message.
const response = await chat.call([
  new HumanMessage(
    "What is a good name for a company that makes colorful socks?"
  ),
]);

console.log(response);
// AIMessage { text: '\n\nRainbow Sox Co.' }
API Reference: ChatOpenAI from langchain/chat_models/openai, HumanMessage from langchain/schema
OpenAI's chat model also supports multiple messages as input. See here for more information. Here is an example of sending a system and user message to the chat model: |
// `SystemMessage` is used below, so import it from "langchain/schema" alongside HumanMessage.
import { SystemMessage } from "langchain/schema";

const response2 = await chat.call([
  new SystemMessage(
    "You are a helpful assistant that translates English to French."
  ),
  new HumanMessage("Translate: I love programming."),
]);

console.log(response2);
// AIMessage { text: "J'aime programmer." }
generate: Batch calls, richer outputs
You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter.
const response3 = await chat.generate([
  [
    new SystemMessage(
      "You are a helpful assistant that translates English to French."
    ),
    new HumanMessage(
      "Translate this sentence from English to French. I love programming."
    ),
  ],
  [
    new SystemMessage(
      "You are a helpful assistant that translates English to French."
    ),
    new HumanMessage(
      "Translate this sentence from English to French. I love artificial intelligence."
    ),
  ],
]);

console.log(response3);
/*
  {
    generations: [
      [
        {
          text: "J'aime programmer.",
          message: AIMessage { text: "J'aime programmer." },
        }
      ],
      [
        {
          text: "J'aime l'intelligence artificielle.",
          message: AIMessage { text: "J'aime l'intelligence artificielle." }
        }
      ]
    ]
  }
*/
You can recover things like token usage from this LLMResult:
console.log(response3.llmOutput);
/*
  {
    tokenUsage: { completionTokens: 20, promptTokens: 69, totalTokens: 89 }
  }
*/
Cancelling requests
You can cancel a request by passing a signal option when you call the model. For example, for OpenAI:

import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";

const model = new ChatOpenAI({ temperature: 1 });

const controller = new AbortController();
// Call `controller.abort()` somewhere to cancel the request.

const res = await model.call(
  [
    new HumanMessage(
      "What is a good name for a company that makes colorful socks?"
    ),
  ],
  { signal: controller.signal }
);

console.log(res);
/* '\n\nSocktastic Colors' */

API Reference: ChatOpenAI from langchain/chat_models/openai, HumanMessage from langchain/schema
Note, this will only cancel the outgoing request if the underlying provider exposes that option. LangChain will cancel the underlying request if possible, otherwise it will cancel the processing of the response.
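A common follow-up is to tie the controller to a timer. The sketch below is illustrative (the 5-second cutoff is arbitrary) and reuses the model and the HumanMessage import from the example above:

// Abort the in-flight request if it takes longer than 5 seconds.
const timeoutController = new AbortController();
const timer = setTimeout(() => timeoutController.abort(), 5000);

try {
  const res = await model.call(
    [new HumanMessage("Write a very long story about colorful socks.")],
    { signal: timeoutController.signal }
  );
  console.log(res);
} catch (e) {
  console.error("Request was cancelled or failed:", e);
} finally {
  clearTimeout(timer);
}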
Dealing with API Errors
If the model provider returns an error from their API, by default LangChain will retry up to 6 times on an exponential backoff. This enables error recovery without any additional effort from you. If you want to change this behavior, you can pass a maxRetries option when you instantiate the model. For example:

import { ChatOpenAI } from "langchain/chat_models/openai";

const model = new ChatOpenAI({ maxRetries: 10 });
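If every retry fails, the error is ultimately surfaced to your code, so you may still want a try/catch around the call. A minimal sketch, using the same maxRetries option:

import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";

const retryingModel = new ChatOpenAI({ maxRetries: 10 });

try {
  const res = await retryingModel.call([new HumanMessage("Hello!")]);
  console.log(res);
} catch (e) {
  // Reached only after all retries have been exhausted.
  console.error("Request ultimately failed:", e);
}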
Page Title: Dealing with rate limits | 🦜️🔗 Langchain
Some providers have rate limits. If you exceed the rate limit, you'll get an error. To help you deal with this, LangChain provides a maxConcurrency option when instantiating a Chat Model. This option allows you to specify the maximum number of concurrent requests you want to make to the provider. If you exceed this number, LangChain will automatically queue up your requests to be sent as previous requests complete.
For example, if you set maxConcurrency: 5, then LangChain will only send 5 requests to the provider at a time. If you send 10 requests, the first 5 will be sent immediately, and the next 5 will be queued up. Once one of the first 5 requests completes, the next request in the queue will be sent.
To use this feature, simply pass maxConcurrency: <number> when you instantiate the LLM. For example:

import { ChatOpenAI } from "langchain/chat_models/openai";

const model = new ChatOpenAI({ maxConcurrency: 5 });
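As a rough sketch of the queueing behavior described above (the ten prompts are made up for illustration), you can fire a batch of calls with Promise.all and let the maxConcurrency setting cap how many are in flight at once:

import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";

const model = new ChatOpenAI({ maxConcurrency: 5 });

// Ten requests, but at most five are sent to the provider at once;
// the rest wait in LangChain's internal queue.
const topics = [
  "socks", "hats", "scarves", "gloves", "boots",
  "belts", "ties", "coats", "shirts", "shoes",
];

const results = await Promise.all(
  topics.map((topic) =>
    model.call([new HumanMessage(`Suggest a company name for colorful ${topic}.`)])
  )
);

console.log(results.length); // 10 responses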
Page Title: OpenAI Function calling | 🦜️🔗 Langchain
Function calling is a useful way to get structured output from an LLM for a wide range of purposes. By providing schemas for "functions", the LLM will choose one and attempt to output a response matching that schema.
Though the name implies that the LLM is actually running code and calling a function, it is more accurate to say that the LLM is populating parameters that match the schema for the arguments a hypothetical function would take. We can use these structured responses for whatever we'd like!
Function calling serves as a building block for several other popular features in LangChain, including the OpenAI Functions agent and structured output chain. In addition to these more specific use cases, you can also attach function parameters directly to the model and call it, as shown below.
Usage
OpenAI requires parameter schemas in the format below, where parameters must be JSON Schema. Specifying the function_call parameter will force the model to return a response using the specified function.
This is useful if you have multiple schemas you'd like the model to pick from.

import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";

const extractionFunctionSchema = {
  name: "extractor",
  description: "Extracts fields from the input.",
  parameters: {
    type: "object",
    properties: {
      tone: {
        type: "string",
        enum: ["positive", "negative"],
        description: "The overall tone of the input",
      },
      word_count: {
        type: "number",
        description: "The number of words in the input",
      },
      chat_response: {
        type: "string",
        description: "A response to the human's input",
      },
    },
    required: ["tone", "word_count", "chat_response"],
  },
};

// Bind function arguments to the model.
// All subsequent invoke calls will use the bound parameters.
// "functions.parameters" must be formatted as JSON Schema.
// Omit "function_call" if you want the model to choose a function to call.
const model = new ChatOpenAI({
  modelName: "gpt-4",
}).bind({
  functions: [extractionFunctionSchema],
  function_call: { name: "extractor" },
});

const result = await model.invoke([new HumanMessage("What a beautiful day!")]);

console.log(result);
/*
  AIMessage {
    content: '',
    name: undefined,
    additional_kwargs: {
      function_call: {
        name: 'extractor',
        arguments: '{\n' +
          '  "tone": "positive",\n' +
          '  "word_count": 4,\n' +
          '  "chat_response": "It certainly is a beautiful day!"\n' +
          '}'
      }
    }
  }
*/

// Alternatively, you can pass function call arguments as an additional argument as a one-off:
/*
const model = new ChatOpenAI({
  modelName: "gpt-4",
});
const result = await model.call(
  [new HumanMessage("What a beautiful day!")],
  {
    functions: [extractionFunctionSchema],
    function_call: { name: "extractor" },
  }
);
*/

API Reference: ChatOpenAI from langchain/chat_models/openai, HumanMessage from langchain/schema

Usage with Zod
An alternative way to declare function schema is to use the Zod schema library with the zod-to-json-schema utility package to translate it:

npm install zod
npm install zod-to-json-schema
yarn add zod
yarn add zod-to-json-schema
pnpm add zod
pnpm add zod-to-json-schema

import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const extractionFunctionZodSchema = z.object({
  tone: z
    .enum(["positive", "negative"])
    .describe("The overall tone of the input"),
  entity: z.string().describe("The entity mentioned in the input"),
  word_count: z.number().describe("The number of words in the input"),
  chat_response: z.string().describe("A response to the human's input"),
  final_punctuation: z
    .optional(z.string())
    .describe("The final punctuation mark in the input, if any."),
});

// Bind function arguments to the model.
// "functions.parameters" must be formatted as JSON Schema.
// We translate the above Zod schema into JSON schema using the "zodToJsonSchema" package.
// Omit "function_call" if you want the model to choose a function to call.
const model = new ChatOpenAI({
  modelName: "gpt-4",
}).bind({
  functions: [
    {
      name: "extractor",
      description: "Extracts fields from the input.",
      parameters: zodToJsonSchema(extractionFunctionZodSchema),
    },
  ],
  function_call: { name: "extractor" },
});

const result = await model.invoke([new HumanMessage("What a beautiful day!")]);

console.log(result);
/*
  AIMessage {
    content: '',
    name: undefined,
    additional_kwargs: {
      function_call: {
        name: 'extractor',
        arguments: '{\n' +
          '  "tone": "positive",\n' +
          '  "entity": "day",\n' +
          '  "word_count": 4,\n' +
          '  "chat_response": "It certainly is a gorgeous day!",\n' +
          '  "final_punctuation": "!"\n' +
          '}'
      }
    }
  }
*/

API Reference: ChatOpenAI from langchain/chat_models/openai, HumanMessage from langchain/schema