You can also access provider-specific information that is returned. This information is NOT standardized across providers.
console.log(llmResult.llmOutput);
/*
  {
    tokenUsage: { completionTokens: 46, promptTokens: 8, totalTokens: 54 }
  }
*/
Here's an example with additional parameters, which sets max_tokens to -1 so that the maximum token length for the chosen model is calculated and included in the request automatically:
import { OpenAI } from "langchain/llms/openai";

export const run = async () => {
  const model = new OpenAI({
    // customize the OpenAI model that's used, `text-davinci-003` is the default
    modelName: "text-ada-001",
    // `max_tokens` supports a magic -1 param where the max token length for the specified modelName
    // is calculated and included in the request to OpenAI as the `max_tokens` param
    maxTokens: -1,
    // use `modelKwargs` to pass params directly to the openai call
    // note that they use snake_case instead of camelCase
    modelKwargs: {
      user: "me",
    },
    // for additional logging for debugging purposes
    verbose: true,
  });
  const resA = await model.call(
    "What would be a good company name a company that makes colorful socks?"
  );
  console.log({ resA });
  // { resA: '\n\nSocktastic Colors' }
};
API Reference: OpenAI from langchain/llms/openai
This section is for users who want a deeper technical understanding of how LangChain works. If you are just getting started, you can skip this section.
Both LLMs and Chat Models are built on top of the BaseLanguageModel class. This class provides a common interface for all models, and allows us to easily swap out models in chains without changing the rest of the code.
The BaseLanguageModel class has two abstract methods: generatePrompt and getNumTokens, which are implemented by BaseChatModel and BaseLLM respectively.
BaseLLM is a subclass of BaseLanguageModel that provides a common interface for LLMs while BaseChatModel is a subclass of BaseLanguageModel that provides a common interface for chat models.
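Because both subclasses share this interface, utilities such as getNumTokens can be called on any model instance, for example to estimate how many tokens a prompt will consume before sending it. A minimal sketch, assuming an OpenAI API key is configured in the environment (the exact count depends on the model's tokenizer):

import { OpenAI } from "langchain/llms/openai";

// Any BaseLanguageModel subclass (LLM or chat model) exposes `getNumTokens`.
const model = new OpenAI({ temperature: 0 });

const numTokens = await model.getNumTokens(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ numTokens }); // the number of tokens this prompt will consume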
Cancelling requests
You can cancel a request by passing a signal option when you call the model. For example, for OpenAI:
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({ temperature: 1 });
const controller = new AbortController();

// Call `controller.abort()` somewhere to cancel the request.
const res = await model.call(
  "What would be a good company name a company that makes colorful socks?",
  { signal: controller.signal }
);
console.log(res);
/*
  '\n\nSocktastic Colors'
*/

API Reference: OpenAI from langchain/llms/openai
Note that this will only cancel the outgoing request if the underlying provider exposes that option. LangChain will cancel the underlying request if possible; otherwise, it will cancel the processing of the response.
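A common pattern is to cancel a request that takes too long. The sketch below is a minimal illustration, assuming an OpenAI API key is configured; the 3-second cutoff and the error handling are illustrative choices, not part of the LangChain API:

import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({ temperature: 1 });
const controller = new AbortController();

// Abort the request if it has not completed within 3 seconds (illustrative value).
const timeout = setTimeout(() => controller.abort(), 3000);

try {
  const res = await model.call("Write a long story about colorful socks.", {
    signal: controller.signal,
  });
  console.log(res);
} catch (err) {
  // An aborted call surfaces as a rejected promise.
  console.error("Request was cancelled or failed:", err);
} finally {
  clearTimeout(timeout);
}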
Dealing with API Errors
If the model provider returns an error from their API, by default LangChain will retry up to 6 times with exponential backoff. This enables error recovery without any additional effort from you. If you want to change this behavior, you can pass a maxRetries option when you instantiate the model. For example:
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({ maxRetries: 10 });
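Once the configured retries are exhausted, the underlying error is thrown to your code like any other rejected promise, so you can handle it explicitly. A minimal sketch, assuming an OpenAI API key is configured (the maxRetries value and messages are illustrative):

import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({ maxRetries: 2 });

try {
  const res = await model.call("Tell me a joke.");
  console.log(res);
} catch (err) {
  // Reached only after every retry attempt has failed.
  console.error("Model call failed after retries:", err);
}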
Dealing with Rate Limits
Some LLM providers have rate limits. If you exceed the rate limit, you'll get an error. To help you deal with this, LangChain provides a maxConcurrency option when instantiating an LLM. This option allows you to specify the maximum number of concurrent requests you want to make to the LLM provider. If you exceed this number, LangChain will automatically queue up your requests to be sent as previous requests complete.
For example, if you set maxConcurrency: 5, then LangChain will only send 5 requests to the LLM provider at a time. If you send 10 requests, the first 5 will be sent immediately, and the next 5 will be queued up. Once one of the first 5 requests completes, the next request in the queue will be sent.
To use this feature, simply pass maxConcurrency: <number> when you instantiate the LLM. For example:
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({ maxConcurrency: 5 });
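To see the queuing behavior described above in action, you can fire more requests than the concurrency limit allows; a rough sketch, assuming an OpenAI API key is configured (the prompts are placeholders):

import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({ maxConcurrency: 5 });

// Fire 10 calls at once; LangChain sends the first 5 immediately and
// queues the rest until earlier requests complete.
const prompts = Array.from({ length: 10 }, (_, i) => `Say the number ${i}.`);
const results = await Promise.all(prompts.map((prompt) => model.call(prompt)));
console.log(results.length); // 10 -- every request eventually completes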
Caching
LangChain provides an optional caching layer for LLMs. This is useful for two reasons:
It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times.
It can speed up your application by reducing the number of API calls you make to the LLM provider.
import { OpenAI } from "langchain/llms/openai";

// To make the caching really obvious, let's use a slower model.
const model = new OpenAI({
  modelName: "text-davinci-002",
  cache: true,
  n: 2,
  bestOf: 2,
});
In Memory Cache

The default cache is stored in-memory. This means that if you restart your application, the cache will be cleared.
// The first time, it is not yet in cache, so it should take longer
const res = await model.predict("Tell me a joke");
console.log(res);
/*
  CPU times: user 35.9 ms, sys: 28.6 ms, total: 64.6 ms
  Wall time: 4.83 s

  "\n\nWhy did the chicken cross the road?\n\nTo get to the other side."
*/
// The second time it is, so it goes faster
const res2 = await model.predict("Tell me a joke");
console.log(res2);
/*
  CPU times: user 238 µs, sys: 143 µs, total: 381 µs
  Wall time: 1.76 ms

  "\n\nWhy did the chicken cross the road?\n\nTo get to the other side."
*/
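If you want to check the cache yourself, you can time the same prompt twice; a minimal sketch reusing the model above (exact timings will vary):

// First call hits the API; the second should return almost immediately from the cache.
console.time("first call");
await model.predict("Tell me a joke");
console.timeEnd("first call"); // typically a few seconds

console.time("second call");
await model.predict("Tell me a joke");
console.timeEnd("second call"); // typically a few milliseconds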
Caching with Momento

LangChain also provides a Momento-based cache. Momento is a distributed, serverless cache that requires zero setup or infrastructure maintenance. To use it, you'll need to install the @gomomento/sdk package:
npm install @gomomento/sdk
Next, you'll need to sign up and create an API key. Once you've done that, pass a cache option when you instantiate the LLM like this:
import { OpenAI } from "langchain/llms/openai";
import { MomentoCache } from "langchain/cache/momento";
import {
  CacheClient,
  Configurations,
  CredentialProvider,
} from "@gomomento/sdk";

// See https://github.com/momentohq/client-sdk-javascript for connection options
const client = new CacheClient({
  configuration: Configurations.Laptop.v1(),
  credentialProvider: CredentialProvider.fromEnvironmentVariable({
    environmentVariableName: "MOMENTO_AUTH_TOKEN",
  }),
  defaultTtlSeconds: 60 * 60 * 24,
});
const cache = await MomentoCache.fromProps({
  client,
  cacheName: "langchain",
});

const model = new OpenAI({ cache });
API Reference: OpenAI from langchain/llms/openai, MomentoCache from langchain/cache/momento
Caching with Redis

LangChain also provides a Redis-based cache. This is useful if you want to share the cache across multiple processes or servers. To use it, you'll need to install the ioredis package:
npm install ioredis
Then, you can pass a cache option when you instantiate the LLM. For example:
import { OpenAI } from "langchain/llms/openai";
import { RedisCache } from "langchain/cache/ioredis";
import { Redis } from "ioredis";

// See https://github.com/redis/ioredis for connection options
const client = new Redis({});

const cache = new RedisCache(client);
const model = new OpenAI({ cache });
Caching with Upstash Redis

LangChain also provides an Upstash Redis-based cache. Like the Redis-based cache, this cache is useful if you want to share the cache across multiple processes or servers. The Upstash Redis client uses HTTP and supports edge environments. To use it, you'll need to install the @upstash/redis package:
npm install @upstash/redis
You'll also need an Upstash account and a Redis database to connect to. Once you've done that, retrieve your REST URL and REST token. Then, pass a cache option when you instantiate the LLM. For example:
import { OpenAI } from "langchain/llms/openai";
import { UpstashRedisCache } from "langchain/cache/upstash_redis";

// See https://docs.upstash.com/redis/howto/connectwithupstashredis#quick-start for connection options
const cache = new UpstashRedisCache({
  config: {
    url: "UPSTASH_REDIS_REST_URL",
    token: "UPSTASH_REDIS_REST_TOKEN",
  },
});

const model = new OpenAI({ cache });
API Reference: OpenAI from langchain/llms/openai, UpstashRedisCache from langchain/cache/upstash_redis
You can also directly pass in a previously created @upstash/redis client instance:
import { Redis } from "@upstash/redis";
import https from "https";

import { OpenAI } from "langchain/llms/openai";
import { UpstashRedisCache } from "langchain/cache/upstash_redis";

// const client = new Redis({
//   url: process.env.UPSTASH_REDIS_REST_URL!,
//   token: process.env.UPSTASH_REDIS_REST_TOKEN!,
//   agent: new https.Agent({ keepAlive: true }),
// });
// Or simply call Redis.fromEnv() to automatically load the UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN environment variables.
const client = Redis.fromEnv({
  agent: new https.Agent({ keepAlive: true }),
});

const cache = new UpstashRedisCache({ client });
const model = new OpenAI({ cache });

API Reference: OpenAI from langchain/llms/openai, UpstashRedisCache from langchain/cache/upstash_redis
Streaming
Some LLMs provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated.
To utilize streaming, use a CallbackHandler like so:
import { OpenAI } from "langchain/llms/openai";

// To enable streaming, we pass in `streaming: true` to the LLM constructor.
// Additionally, we pass in a handler for the `handleLLMNewToken` event.
const model = new OpenAI({
  maxTokens: 25,
  streaming: true,
});

const response = await model.call("Tell me a joke.", {
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        console.log({ token });
      },
    },
  ],
});
console.log(response);
/*
  { token: '\n' }
  { token: '\n' }
  { token: 'Q' }
  { token: ':' }
  { token: ' Why' }
  { token: ' did' }
  { token: ' the' }
  { token: ' chicken' }
  { token: ' cross' }
  { token: ' the' }
  { token: ' playground' }
  { token: '?' }
  { token: '\n' }
  { token: 'A' }
  { token: ':' }
  { token: ' To' }
  { token: ' get' }
  { token: ' to' }
  { token: ' the' }
  { token: ' other' }
  { token: ' slide' }
  { token: '.' }

  Q: Why did the chicken cross the playground?
  A: To get to the other slide.
*/

API Reference: OpenAI from langchain/llms/openai
We still have access to the final LLMResult if using generate. However, token_usage is not currently supported for streaming.
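For example, a rough sketch of combining streaming with generate, assuming generate accepts the same options object shown above for call:

import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({ maxTokens: 25, streaming: true });

// Tokens arrive through the callback while the aggregated result is still
// returned at the end; token usage is not populated when streaming.
const llmResult = await model.generate(["Tell me a joke."], {
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        process.stdout.write(token);
      },
    },
  ],
});
console.log(llmResult.generations[0][0].text);
console.log(llmResult.llmOutput); // token usage fields may be empty when streaming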
Subscribing to events
Especially when using an agent, there can be a lot of back-and-forth going on behind the scenes as an LLM processes a prompt. For agents, the response object contains an intermediateSteps object that you can print to see an overview of the steps it took to get there. If that's not enough and you want to see every exchange with the LLM, you can pass callbacks to the LLM for custom logging (or anything else you want to do) as the model goes through the steps:
For more info on the events available see the Callbacks section of the docs.
import { LLMResult } from "langchain/schema";
import { OpenAI } from "langchain/llms/openai";
import { Serialized } from "langchain/load/serializable";

// We can pass in a list of CallbackHandlers to the LLM constructor to get callbacks for various events.
const model = new OpenAI({
  callbacks: [
    {
      handleLLMStart: async (llm: Serialized, prompts: string[]) => {
        console.log(JSON.stringify(llm, null, 2));
        console.log(JSON.stringify(prompts, null, 2));
      },
      handleLLMEnd: async (output: LLMResult) => {
        console.log(JSON.stringify(output, null, 2));
      },
      handleLLMError: async (err: Error) => {
        console.error(err);
      },
    },
  ],
});

await model.call(
  "What would be a good company name a company that makes colorful socks?"
);
// {
//   "name": "openai"
// }
// [
//   "What would be a good company name a company that makes colorful socks?"
// ]
// {
//   "generations": [
//     [
//       {
//         "text": "\n\nSocktastic Splashes.",
//         "generationInfo": {
//           "finishReason": "stop",
//           "logprobs": null
//         }
//       }
//     ]
//   ],
//   "llmOutput": {
//     "tokenUsage": {
//       "completionTokens": 9,
//       "promptTokens": 14,
//       "totalTokens": 23
//     }
//   }
// }
API Reference: LLMResult from langchain/schema, OpenAI from langchain/llms/openai, Serialized from langchain/load/serializable
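In addition to handleLLMStart, handleLLMEnd, and handleLLMError, providers that stream tokens can also emit a handleLLMNewToken event for every token generated. Here's a minimal sketch, assuming your installed LangChain.js version supports streaming for the OpenAI LLM:

import { OpenAI } from "langchain/llms/openai";

// Sketch only: assumes `streaming: true` is supported by your version of the
// OpenAI integration; the handler fires once per streamed token.
const model = new OpenAI({
  streaming: true,
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        // Write each token to stdout as it arrives.
        process.stdout.write(token);
      },
    },
  ],
});

await model.call("Tell me a short joke about socks.");

Callbacks passed to the constructor like this apply to every call the model makes; for request-scoped logging, check whether your version also accepts callbacks as a call-time option.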
Page Title: Adding a timeout | 🦜️🔗 Langchain
Paragraphs:
By default, LangChain will wait indefinitely for a response from the model provider. If you want to add a timeout, you can pass a timeout option, in milliseconds, when you call the model. For example, for OpenAI:
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({ temperature: 1 });

const resA = await model.call(
  "What would be a good company name a company that makes colorful socks?",
  { timeout: 1000 } // 1s timeout
);

console.log({ resA });
// { resA: '\n\nSocktastic Colors' }
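A request that hits the timeout rejects instead of resolving, so in practice you'll usually want to catch the resulting error. Here's a minimal sketch; the exact error thrown for a timed-out request may vary between LangChain.js versions and providers:

import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({ temperature: 1 });

try {
  const res = await model.call(
    "What would be a good company name a company that makes colorful socks?",
    { timeout: 1000 } // 1s timeout
  );
  console.log({ res });
} catch (e) {
  // Treat any rejection here as a failed or timed-out request;
  // the specific error type depends on your version and provider.
  console.error("Request failed or timed out:", e);
}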
Page Title: AI21 | 🦜️🔗 Langchain
Paragraphs:
You can get started with AI21 Labs' Jurassic family of models, as well as see a full list of available foundational models, by signing up for an API key on their website.
Here's an example of initializing an instance in LangChain.js:
import { AI21 } from "langchain/llms/ai21";

const model = new AI21({
  ai21ApiKey: "YOUR_AI21_API_KEY", // Or set as process.env.AI21_API_KEY
});

const res = await model.call(`Translate "I love programming" into German.`);

console.log({ res });
/*
  { res: "\nIch liebe das Programmieren." }
*/
API Reference: AI21 from langchain/llms/ai21
Page Title: AlephAlpha | 🦜️🔗 Langchain
Paragraphs:
LangChain.js supports AlephAlpha's Luminous family of models. You'll need to sign up for an API key on their website.
Here's an example:
import { AlephAlpha } from "langchain/llms/aleph_alpha";

const model = new AlephAlpha({
  aleph_alpha_api_key: "YOUR_ALEPH_ALPHA_API_KEY", // Or set as process.env.ALEPH_ALPHA_API_KEY
});

const res = await model.call(`Is cereal soup?`);

console.log({ res });
/*
  { res: "\nIs soup a cereal? I don’t think so, but it is delicious." }
*/
API Reference: AlephAlpha from langchain/llms/aleph_alpha
Page Title: AWS SageMakerEndpoint | 🦜️🔗 Langchain
Paragraphs:
LangChain.js supports integration with AWS SageMaker-hosted endpoints. Check Amazon SageMaker JumpStart for a list of available models, and how to deploy your own.
npm install @aws-sdk/client-sagemaker-runtime
yarn add @aws-sdk/client-sagemaker-runtime
pnpm add @aws-sdk/client-sagemaker-runtime
import {
  SageMakerLLMContentHandler,
  SageMakerEndpoint,
} from "langchain/llms/sagemaker_endpoint";

// Custom for whatever model you'll be using
class HuggingFaceTextGenerationGPT2ContentHandler
  implements SageMakerLLMContentHandler
{
  contentType = "application/json";

  accepts = "application/json";

  async transformInput(prompt: string, modelKwargs: Record<string, unknown>) {
    const inputString = JSON.stringify({
      text_inputs: prompt,
      ...modelKwargs,
    });
    return Buffer.from(inputString);
  }

  async transformOutput(output: Uint8Array) {
    const responseJson = JSON.parse(Buffer.from(output).toString("utf-8"));
    return responseJson.generated_texts[0];
  }
}

const contentHandler = new HuggingFaceTextGenerationGPT2ContentHandler();

const model = new SageMakerEndpoint({
  endpointName: "jumpstart-example-huggingface-textgener-2023-05-16-22-35-45-660", // Your endpoint name here
  modelKwargs: { temperature: 1e-10 },
  contentHandler,
  clientOptions: {
    region: "YOUR AWS ENDPOINT REGION",
    credentials: {
      accessKeyId: "YOUR AWS ACCESS ID",
      secretAccessKey: "YOUR AWS SECRET ACCESS KEY",
    },
  },
});

const res = await model.call("Hello, my name is ");

console.log({ res });
/*
  { res: "_____. I am a student at the University of California, Berkeley. I am a member of the American Association of University Professors." }
*/
API Reference: SageMakerLLMContentHandler from langchain/llms/sagemaker_endpoint, SageMakerEndpoint from langchain/llms/sagemaker_endpoint
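The transformInput and transformOutput methods must match whatever request and response schema your particular endpoint uses. As an illustration only, here's a hypothetical handler for an endpoint that expects an { inputs, parameters } request body and responds with an array like [{ "generated_text": "..." }]; adjust the field names to whatever your deployed model actually produces:

import { SageMakerLLMContentHandler } from "langchain/llms/sagemaker_endpoint";

// Hypothetical handler: the request/response shape below is an assumption
// about a particular text-generation container, not a LangChain requirement.
class TextGenerationContentHandler implements SageMakerLLMContentHandler {
  contentType = "application/json";

  accepts = "application/json";

  async transformInput(prompt: string, modelKwargs: Record<string, unknown>) {
    // Wrap the prompt and generation parameters in the shape the endpoint expects.
    return Buffer.from(
      JSON.stringify({ inputs: prompt, parameters: modelKwargs })
    );
  }

  async transformOutput(output: Uint8Array) {
    // Unwrap the first generation from the response array.
    const responseJson = JSON.parse(Buffer.from(output).toString("utf-8"));
    return responseJson[0].generated_text;
  }
}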
Page Title: Azure OpenAI | 🦜️🔗 Langchain
Paragraphs:
You can also use the OpenAI class to call OpenAI models hosted on Azure.
For example, if your Azure instance is hosted under https://{MY_INSTANCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}, you could initialize your instance like this:
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({
  temperature: 0.9,
  azureOpenAIApiKey: "YOUR-API-KEY",
  azureOpenAIApiVersion: "YOUR-API-VERSION",
  azureOpenAIApiInstanceName: "{MY_INSTANCE_NAME}",
  azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}",
});

const res = await model.call(
  "What would be a good company name a company that makes colorful socks?"
);

console.log({ res });
If your instance is hosted under a domain other than the default openai.azure.com, you'll need to use the alternate AZURE_OPENAI_BASE_PATH environment variable.
For example, here's how you would connect to the domain https://westeurope.api.microsoft.com/openai/deployments/{DEPLOYMENT_NAME}:
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({
  temperature: 0.9,
  azureOpenAIApiKey: "YOUR-API-KEY",
  azureOpenAIApiVersion: "YOUR-API-VERSION",
  azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}",
  azureOpenAIBasePath:
    "https://westeurope.api.microsoft.com/openai/deployments", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH
});

const res = await model.call(
  "What would be a good company name a company that makes colorful socks?"
);

console.log({ res });
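Rather than hard-coding the Azure settings, you can usually supply the same values through environment variables and construct the model with no Azure-specific options. The variable names below are assumptions based on how LangChain.js commonly maps these parameters; confirm them against your installed version:

// Sketch only: assumes these environment variables are read by your version
// of the OpenAI integration before the process starts:
// AZURE_OPENAI_API_KEY, AZURE_OPENAI_API_VERSION,
// AZURE_OPENAI_API_INSTANCE_NAME, AZURE_OPENAI_API_DEPLOYMENT_NAME
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({ temperature: 0.9 });

const res = await model.call(
  "What would be a good company name a company that makes colorful socks?"
);
console.log({ res });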
Page Title: Bedrock | 🦜️🔗 Langchain
Paragraphs:
Amazon Bedrock is a fully managed service that makes Foundation Models (FMs) from leading AI startups and Amazon available via an API. You can choose from a wide range of FMs to find the model that is best suited for your use case.
You'll need to install a few official AWS packages as peer dependencies:
npm install @aws-crypto/sha256-js @aws-sdk/credential-provider-node @aws-sdk/protocol-http @aws-sdk/signature-v4
yarn add @aws-crypto/sha256-js @aws-sdk/credential-provider-node @aws-sdk/protocol-http @aws-sdk/signature-v4
pnpm add @aws-crypto/sha256-js @aws-sdk/credential-provider-node @aws-sdk/protocol-http @aws-sdk/signature-v4
import { Bedrock } from "langchain/llms/bedrock";

async function test() {
  // If no credentials are provided, the default credentials from
  // @aws-sdk/credential-provider-node will be used.
  const model = new Bedrock({
    model: "ai21",
    region: "us-west-2",
    // credentials: {
    //   accessKeyId: "YOUR_AWS_ACCESS_KEY",
    //   secretAccessKey: "YOUR_SECRET_ACCESS_KEY"
    // }
  });

  const res = await model.call("Tell me a joke");
  console.log(res);
}

test();
API Reference: Bedrock from langchain/llms/bedrock
Page Title: Cohere | 🦜️🔗 Langchain
Paragraphs:
LangChain.js supports Cohere LLMs. Here's an example:
npm install cohere-ai
yarn add cohere-ai
pnpm add cohere-ai
import { Cohere } from "langchain/llms/cohere";

const model = new Cohere({
  maxTokens: 20,
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.COHERE_API_KEY
});

const res = await model.call(
  "What would be a good company name a company that makes colorful socks?"
);

console.log({ res });
Page Title: Google PaLM | 🦜️🔗 Langchain
Paragraphs:
The Google PaLM API can be integrated by first installing the required packages:
npm install google-auth-library @google-ai/generativelanguage
yarn add google-auth-library @google-ai/generativelanguage
pnpm add google-auth-library @google-ai/generativelanguage
Create an API key from Google MakerSuite. You can then set the key as the GOOGLE_PALM_API_KEY environment variable or pass it as the apiKey parameter while instantiating the model.
import { GooglePaLM } from "langchain/llms/googlepalm";

export const run = async () => {
  const model = new GooglePaLM({
    apiKey: "<YOUR API KEY>", // or set it in environment variable as `GOOGLE_PALM_API_KEY`
    // other params
    temperature: 1, // OPTIONAL
    modelName: "models/text-bison-001", // OPTIONAL
    maxOutputTokens: 1024, // OPTIONAL
    topK: 40, // OPTIONAL
    topP: 3, // OPTIONAL
    safetySettings: [
      // OPTIONAL
      {
        category: "HARM_CATEGORY_DANGEROUS",
        threshold: "BLOCK_MEDIUM_AND_ABOVE",
      },
    ],
    stopSequences: ["stop"], // OPTIONAL
  });

  const res = await model.call(
    "What would be a good company name for a company that makes colorful socks?"
  );

  console.log({ res });
};
API Reference: GooglePaLM from langchain/llms/googlepalm
Page Title: Google Vertex AI | 🦜️🔗 Langchain
Paragraphs:
The Vertex AI implementation is meant to be used in Node.js and not directly in a browser, since it requires a service account to use.

Before running this code, you should make sure the Vertex AI API is enabled for the relevant project in your Google Cloud dashboard and that you've authenticated to Google Cloud using one of these methods:

You are logged into an account (using gcloud auth application-default login) permitted to that project.
You are running on a machine using a service account that is permitted to the project.
You have downloaded the credentials for a service account that is permitted to the project and set the GOOGLE_APPLICATION_CREDENTIALS environment variable.
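The original example for this page is truncated above. As a rough sketch only, initialization typically looks something like the following; the import path, class name, and options are assumptions modeled on the other integrations in this document, so verify them against your installed LangChain.js version:

// Sketch only: assumes the GoogleVertexAI class and import path below exist in
// your LangChain.js version, and that you have already authenticated to Google
// Cloud using one of the methods listed above.
import { GoogleVertexAI } from "langchain/llms/googlevertexai";

const model = new GoogleVertexAI({
  temperature: 0.7, // hypothetical parameter value for illustration
});

const res = await model.call(
  "What would be a good company name a company that makes colorful socks?"
);
console.log({ res });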