https://js.langchain.com/v0.1/docs/modules/data_connection/text_embedding/caching_embeddings/
Caching
=======
Embeddings can be stored or temporarily cached to avoid needing to recompute them.
Caching embeddings can be done using a `CacheBackedEmbeddings` instance.
The cache backed embedder is a wrapper around an embedder that caches embeddings in a key-value store.
The text is hashed and the hash is used as the key in the cache.
The main supported way to initialize a `CacheBackedEmbeddings` instance is the `fromBytesStore` static method. It takes the following parameters:
* `underlying_embedder`: The embedder to use for embedding.
* `document_embedding_cache`: The cache to use for storing document embeddings.
* `namespace`: (optional, defaults to "") The namespace to use for document cache. This namespace is used to avoid collisions with other caches. For example, set it to the name of the embedding model used.
**Attention:** Be sure to set the `namespace` parameter to avoid collisions when the same text is embedded using different embedding models.
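To make the namespacing concrete, here is a hypothetical sketch of how a cache key could be derived from the namespace and the hashed text. SHA-256 is used here purely for illustration; the hash function LangChain actually uses may differ.

```typescript
import { createHash } from "node:crypto";

// Illustrative only: the text is hashed, and the namespace is prepended so
// that the same text embedded by two different models produces two different
// cache keys. Not LangChain's actual implementation.
function cacheKey(namespace: string, text: string): string {
  const hash = createHash("sha256").update(text).digest("hex");
  return `${namespace}${hash}`;
}

const key = cacheKey("text-embedding-ada-002", "hello world");
// -> "text-embedding-ada-002" followed by a 64-character hex digest
```

Note how omitting the namespace would make `cacheKey("", text)` identical for every model, which is exactly the collision the warning above describes.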
Usage, in-memory
----------------
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/openai @langchain/community`
* Yarn: `yarn add @langchain/openai @langchain/community`
* pnpm: `pnpm add @langchain/openai @langchain/community`
Here's a basic example with an in-memory cache. This type of cache is primarily useful for unit tests or prototyping. Do not use this cache if you need to store embeddings for an extended period of time:
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { CacheBackedEmbeddings } from "langchain/embeddings/cache_backed";
import { InMemoryStore } from "langchain/storage/in_memory";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { TextLoader } from "langchain/document_loaders/fs/text";

const underlyingEmbeddings = new OpenAIEmbeddings();

const inMemoryStore = new InMemoryStore();

const cacheBackedEmbeddings = CacheBackedEmbeddings.fromBytesStore(
  underlyingEmbeddings,
  inMemoryStore,
  {
    namespace: underlyingEmbeddings.modelName,
  }
);

const loader = new TextLoader("./state_of_the_union.txt");
const rawDocuments = await loader.load();
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 0,
});
const documents = await splitter.splitDocuments(rawDocuments);

// No keys logged yet since the cache is empty
for await (const key of inMemoryStore.yieldKeys()) {
  console.log(key);
}

let time = Date.now();
const vectorstore = await FaissStore.fromDocuments(
  documents,
  cacheBackedEmbeddings
);
console.log(`Initial creation time: ${Date.now() - time}ms`);
/*
  Initial creation time: 1905ms
*/

// The second time is much faster since the embeddings for the input docs
// have already been added to the cache
time = Date.now();
const vectorstore2 = await FaissStore.fromDocuments(
  documents,
  cacheBackedEmbeddings
);
console.log(`Cached creation time: ${Date.now() - time}ms`);
/*
  Cached creation time: 8ms
*/

// Many keys logged with hashed values
const keys = [];
for await (const key of inMemoryStore.yieldKeys()) {
  keys.push(key);
}
console.log(keys.slice(0, 5));
/*
  [
    'text-embedding-ada-002ea9b59e760e64bec6ee9097b5a06b0d91cb3ab64',
    'text-embedding-ada-0023b424f5ed1271a6f5601add17c1b58b7c992772e',
    'text-embedding-ada-002fec5d021611e1527297c5e8f485876ea82dcb111',
    'text-embedding-ada-00262f72e0c2d711c6b861714ee624b28af639fdb13',
    'text-embedding-ada-00262d58882330038a4e6e25ea69a938f4391541874'
  ]
*/
```
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [CacheBackedEmbeddings](https://api.js.langchain.com/classes/langchain_embeddings_cache_backed.CacheBackedEmbeddings.html) from `langchain/embeddings/cache_backed`
* [InMemoryStore](https://api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `langchain/storage/in_memory`
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
* [FaissStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
Usage, Convex
-------------
Here's an example using [Convex](https://convex.dev/) as the cache.
### Create project
Get a working [Convex](https://docs.convex.dev/) project set up, for example by using:
```bash
npm create convex@latest
```
### Add database accessors
Add query and mutation helpers to `convex/langchain/db.ts`:
```typescript
// convex/langchain/db.ts
export * from "langchain/util/convex";
```
### Configure your schema
Set up your schema (for indexing):
```typescript
// convex/schema.ts
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  cache: defineTable({
    key: v.string(),
    value: v.any(),
  }).index("byKey", ["key"]),
});
```
### Example
```typescript
"use node";

import { TextLoader } from "langchain/document_loaders/fs/text";
import { CacheBackedEmbeddings } from "langchain/embeddings/cache_backed";
import { OpenAIEmbeddings } from "@langchain/openai";
import { ConvexKVStore } from "@langchain/community/storage/convex";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { ConvexVectorStore } from "@langchain/community/vectorstores/convex";
import { action } from "./_generated/server.js";

export const ask = action({
  args: {},
  handler: async (ctx) => {
    const underlyingEmbeddings = new OpenAIEmbeddings();
    const cacheBackedEmbeddings = CacheBackedEmbeddings.fromBytesStore(
      underlyingEmbeddings,
      new ConvexKVStore({ ctx }),
      {
        namespace: underlyingEmbeddings.modelName,
      }
    );
    const loader = new TextLoader("./state_of_the_union.txt");
    const rawDocuments = await loader.load();
    const splitter = new RecursiveCharacterTextSplitter({
      chunkSize: 1000,
      chunkOverlap: 0,
    });
    const documents = await splitter.splitDocuments(rawDocuments);

    let time = Date.now();
    const vectorstore = await ConvexVectorStore.fromDocuments(
      documents,
      cacheBackedEmbeddings,
      { ctx }
    );
    console.log(`Initial creation time: ${Date.now() - time}ms`);
    /*
      Initial creation time: 1808ms
    */

    // The second time is much faster since the embeddings for the input docs
    // have already been added to the cache
    time = Date.now();
    const vectorstore2 = await ConvexVectorStore.fromDocuments(
      documents,
      cacheBackedEmbeddings,
      { ctx }
    );
    console.log(`Cached creation time: ${Date.now() - time}ms`);
    /*
      Cached creation time: 33ms
    */
  },
});
```
#### API Reference:
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [CacheBackedEmbeddings](https://api.js.langchain.com/classes/langchain_embeddings_cache_backed.CacheBackedEmbeddings.html) from `langchain/embeddings/cache_backed`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ConvexKVStore](https://api.js.langchain.com/classes/langchain_community_storage_convex.ConvexKVStore.html) from `@langchain/community/storage/convex`
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
* [ConvexVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_convex.ConvexVectorStore.html) from `@langchain/community/vectorstores/convex`
Usage, Redis
------------
Here's an example with a Redis cache.
You'll first need to install `ioredis` as a peer dependency and pass in an initialized client:
* npm: `npm install ioredis`
* Yarn: `yarn add ioredis`
* pnpm: `pnpm add ioredis`
```typescript
import { Redis } from "ioredis";
import { OpenAIEmbeddings } from "@langchain/openai";
import { CacheBackedEmbeddings } from "langchain/embeddings/cache_backed";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { RedisByteStore } from "@langchain/community/storage/ioredis";

const underlyingEmbeddings = new OpenAIEmbeddings();

// Requires a Redis instance running at http://localhost:6379.
// See https://github.com/redis/ioredis for full config options.
const redisClient = new Redis();
const redisStore = new RedisByteStore({
  client: redisClient,
});

const cacheBackedEmbeddings = CacheBackedEmbeddings.fromBytesStore(
  underlyingEmbeddings,
  redisStore,
  {
    namespace: underlyingEmbeddings.modelName,
  }
);

const loader = new TextLoader("./state_of_the_union.txt");
const rawDocuments = await loader.load();
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 0,
});
const documents = await splitter.splitDocuments(rawDocuments);

let time = Date.now();
const vectorstore = await FaissStore.fromDocuments(
  documents,
  cacheBackedEmbeddings
);
console.log(`Initial creation time: ${Date.now() - time}ms`);
/*
  Initial creation time: 1808ms
*/

// The second time is much faster since the embeddings for the input docs
// have already been added to the cache
time = Date.now();
const vectorstore2 = await FaissStore.fromDocuments(
  documents,
  cacheBackedEmbeddings
);
console.log(`Cached creation time: ${Date.now() - time}ms`);
/*
  Cached creation time: 33ms
*/

// Many keys logged with hashed values
const keys = [];
for await (const key of redisStore.yieldKeys()) {
  keys.push(key);
}
console.log(keys.slice(0, 5));
/*
  [
    'text-embedding-ada-002fa9ac80e1bf226b7b4dfc03ea743289a65a727b2',
    'text-embedding-ada-0027dbf9c4b36e12fe1768300f145f4640342daaf22',
    'text-embedding-ada-002ea9b59e760e64bec6ee9097b5a06b0d91cb3ab64',
    'text-embedding-ada-002fec5d021611e1527297c5e8f485876ea82dcb111',
    'text-embedding-ada-002c00f818c345da13fed9f2697b4b689338143c8c7'
  ]
*/
```
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [CacheBackedEmbeddings](https://api.js.langchain.com/classes/langchain_embeddings_cache_backed.CacheBackedEmbeddings.html) from `langchain/embeddings/cache_backed`
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
* [FaissStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [RedisByteStore](https://api.js.langchain.com/classes/langchain_community_storage_ioredis.RedisByteStore.html) from `@langchain/community/storage/ioredis`
Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/data_connection/text_embedding/rate_limits/
Dealing with rate limits
========================
Some providers have rate limits. If you exceed the rate limit, you'll get an error. To help you deal with this, LangChain provides a `maxConcurrency` option when instantiating an Embeddings model. This option allows you to specify the maximum number of concurrent requests you want to make to the provider. If you exceed this number, LangChain will automatically queue up your requests to be sent as previous requests complete.
For example, if you set `maxConcurrency: 5`, then LangChain will only send 5 requests to the provider at a time. If you send 10 requests, the first 5 will be sent immediately, and the next 5 will be queued up. Once one of the first 5 requests completes, the next request in the queue will be sent.
To use this feature, simply pass `maxConcurrency: <number>` when you instantiate the model. For example:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const model = new OpenAIEmbeddings({ maxConcurrency: 5 });
```
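The queueing behavior described above can be sketched with a small promise pool. This is an illustrative stand-in for what LangChain handles internally when you pass `maxConcurrency`, not its actual implementation:

```typescript
// Illustrative promise pool: never more than `maxConcurrency` tasks in
// flight at once. Workers pull the next unstarted task as soon as their
// current one completes, so queued tasks start as earlier ones finish.
async function runWithConcurrency<T>(
  tasks: Array<() => Promise<T>>,
  maxConcurrency: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  const worker = async () => {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  };
  await Promise.all(
    Array.from({ length: Math.min(maxConcurrency, tasks.length) }, worker)
  );
  return results;
}

// Ten fake "requests"; with a limit of 5, at most five run concurrently.
let inFlight = 0;
let peak = 0;
const tasks = Array.from({ length: 10 }, (_, i) => async () => {
  inFlight++;
  peak = Math.max(peak, inFlight);
  await new Promise((resolve) => setTimeout(resolve, 10));
  inFlight--;
  return i;
});
const out = await runWithConcurrency(tasks, 5);
console.log(peak); // never exceeds 5
```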
https://js.langchain.com/v0.1/docs/integrations/llms/bedrock/
Bedrock
=======
> [Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that makes Foundation Models (FMs) from leading AI startups and Amazon available via an API. You can choose from a wide range of FMs to find the model that is best suited for your use case.
Setup
-----
You'll need to install a few official AWS packages as peer dependencies:
* npm: `npm install @aws-crypto/sha256-js @aws-sdk/credential-provider-node @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types`
* Yarn: `yarn add @aws-crypto/sha256-js @aws-sdk/credential-provider-node @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types`
* pnpm: `pnpm add @aws-crypto/sha256-js @aws-sdk/credential-provider-node @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types`
You can also use Bedrock in web environments such as Edge functions or Cloudflare Workers by omitting the `@aws-sdk/credential-provider-node` dependency and using the `web` entrypoint:
* npm: `npm install @aws-crypto/sha256-js @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types`
* Yarn: `yarn add @aws-crypto/sha256-js @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types`
* pnpm: `pnpm add @aws-crypto/sha256-js @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types`
Usage
-----
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/community`
* Yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`
Note that some models require specific prompting techniques. For example, Anthropic's Claude-v2 model will throw an error if the prompt does not start with `Human:`.
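As a concrete illustration of that requirement, here is a hypothetical helper that wraps raw input in the `Human:`/`Assistant:` turn format Anthropic's Claude models expect. This is a sketch only, not a LangChain utility, and the `Bedrock` class's own prompt handling may differ:

```typescript
// Hypothetical helper: wraps raw input in the conversational format that
// Anthropic Claude models expect. Illustration of the prompting requirement
// above; not part of LangChain.
function toClaudePrompt(userInput: string): string {
  return `\n\nHuman: ${userInput}\n\nAssistant:`;
}

const prompt = toClaudePrompt("Tell me a joke");
// After trimming leading whitespace, `prompt` starts with "Human:" as required.
```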
```typescript
import { Bedrock } from "@langchain/community/llms/bedrock";

// Or, from web environments:
// import { Bedrock } from "@langchain/community/llms/bedrock/web";

// If no credentials are provided, the default credentials from
// @aws-sdk/credential-provider-node will be used.
const model = new Bedrock({
  model: "ai21.j2-grande-instruct", // You can also do e.g. "anthropic.claude-v2"
  region: "us-east-1",
  // endpointUrl: "custom.amazonaws.com",
  // credentials: {
  //   accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
  //   secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  // },
  // modelKwargs: {},
});

const res = await model.invoke("Tell me a joke");
console.log(res);
/*
  Why was the math book unhappy?
  Because it had too many problems!
*/
```
#### API Reference:
* [Bedrock](https://api.js.langchain.com/classes/langchain_community_llms_bedrock.Bedrock.html) from `@langchain/community/llms/bedrock`
* [Stores](/v0.1/docs/integrations/stores/)
AWS SageMakerEndpoint
=====================
LangChain.js supports integration with AWS SageMaker-hosted endpoints. Check [Amazon SageMaker JumpStart](https://aws.amazon.com/sagemaker/jumpstart/) for a list of available models and instructions on deploying your own.
Setup[](#setup "Direct link to Setup")
---------------------------------------
You'll need to install the official SageMaker SDK as a peer dependency:
```bash
# npm
npm install @aws-sdk/client-sagemaker-runtime

# Yarn
yarn add @aws-sdk/client-sagemaker-runtime

# pnpm
pnpm add @aws-sdk/client-sagemaker-runtime
```
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install @langchain/community

# Yarn
yarn add @langchain/community

# pnpm
pnpm add @langchain/community
```
Usage[](#usage "Direct link to Usage")
---------------------------------------
```typescript
import {
  SageMakerEndpoint,
  SageMakerLLMContentHandler,
} from "@langchain/community/llms/sagemaker_endpoint";

interface ResponseJsonInterface {
  generation: {
    content: string;
  };
}

// Custom content handler for whatever model you'll be using
class LLama213BHandler implements SageMakerLLMContentHandler {
  contentType = "application/json";

  accepts = "application/json";

  async transformInput(
    prompt: string,
    modelKwargs: Record<string, unknown>
  ): Promise<Uint8Array> {
    const payload = {
      inputs: [[{ role: "user", content: prompt }]],
      parameters: modelKwargs,
    };
    const stringifiedPayload = JSON.stringify(payload);
    return new TextEncoder().encode(stringifiedPayload);
  }

  async transformOutput(output: Uint8Array): Promise<string> {
    const response_json = JSON.parse(
      new TextDecoder("utf-8").decode(output)
    ) as ResponseJsonInterface[];
    const content = response_json[0]?.generation.content ?? "";
    return content;
  }
}

const contentHandler = new LLama213BHandler();

const model = new SageMakerEndpoint({
  endpointName: "aws-llama-2-13b-chat",
  modelKwargs: {
    temperature: 0.5,
    max_new_tokens: 700,
    top_p: 0.9,
  },
  endpointKwargs: {
    CustomAttributes: "accept_eula=true",
  },
  contentHandler,
  clientOptions: {
    region: "YOUR AWS ENDPOINT REGION",
    credentials: {
      accessKeyId: "YOUR AWS ACCESS ID",
      secretAccessKey: "YOUR AWS SECRET ACCESS KEY",
    },
  },
});

const res = await model.invoke(
  "Hello, my name is John Doe, tell me a joke about llamas"
);

console.log(res);

/*
  [
    {
      content: "Hello, John Doe! Here's a llama joke for you: Why did the llama become a gardener? Because it was great at llama-scaping!"
    }
  ]
*/
```
#### API Reference:
* [SageMakerEndpoint](https://api.js.langchain.com/classes/langchain_community_llms_sagemaker_endpoint.SageMakerEndpoint.html) from `@langchain/community/llms/sagemaker_endpoint`
* [SageMakerLLMContentHandler](https://api.js.langchain.com/types/langchain_community_llms_sagemaker_endpoint.SageMakerLLMContentHandler.html) from `@langchain/community/llms/sagemaker_endpoint`
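The content handler in the example above is just JSON serialization over `Uint8Array`s. If you want to sanity-check your own handler's request/response (de)serialization before deploying, you can exercise it locally with plain TypeScript and no AWS calls. The helpers below are an illustrative sketch mirroring the Llama 2 payload shape assumed in the example, not part of the library:

```typescript
// Sketch: verify the (de)serialization a SageMaker content handler performs,
// without invoking an endpoint. Payload/response shapes mirror the Llama 2
// example above and are illustrative assumptions.
const encodeInput = (
  prompt: string,
  modelKwargs: Record<string, unknown>
): Uint8Array =>
  new TextEncoder().encode(
    JSON.stringify({
      inputs: [[{ role: "user", content: prompt }]],
      parameters: modelKwargs,
    })
  );

const decodeOutput = (output: Uint8Array): string => {
  const parsed = JSON.parse(new TextDecoder("utf-8").decode(output)) as Array<{
    generation: { content: string };
  }>;
  return parsed[0]?.generation.content ?? "";
};

// Simulate an endpoint returning a canned response:
const fakeResponse = new TextEncoder().encode(
  JSON.stringify([{ generation: { content: "Hi there!" } }])
);
console.log(decodeOutput(fakeResponse)); // "Hi there!"
```

Testing the round trip this way catches malformed payloads early, since a mismatch between `transformInput` and the model's expected schema otherwise only surfaces as an opaque endpoint error.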
* * *
#### Help us out by providing feedback on this documentation page:
[
Previous
AlephAlpha
](/v0.1/docs/integrations/llms/aleph_alpha/)[
Next
Azure OpenAI
](/v0.1/docs/integrations/llms/azure/)
* [Setup](#setup)
* [Usage](#usage)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/integrations/text_embedding/bedrock/
Bedrock
=======
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-service.html) is a fully managed service that makes base models from Amazon and third-party model providers accessible through an API.
At the time of writing, Bedrock supports a single text embedding model, Titan Embeddings G1 - Text (amazon.titan-embed-text-v1). It supports text retrieval, semantic similarity, and clustering. The maximum input is 8K tokens and the output vector length is 1536.
Setup[](#setup "Direct link to Setup")
---------------------------------------
To use this embedding model, ensure you have the Bedrock runtime client installed in your project:
```bash
# npm
npm i @aws-sdk/client-bedrock-runtime@^3.422.0

# Yarn
yarn add @aws-sdk/client-bedrock-runtime@^3.422.0

# pnpm
pnpm add @aws-sdk/client-bedrock-runtime@^3.422.0
```
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install @langchain/community

# Yarn
yarn add @langchain/community

# pnpm
pnpm add @langchain/community
```
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
Usage[](#usage "Direct link to Usage")
---------------------------------------
The `BedrockEmbeddings` class uses the AWS Bedrock API to generate embeddings for a given text. It strips new line characters from the text as recommended.
```typescript
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { BedrockEmbeddings } from "@langchain/community/embeddings/bedrock";

const embeddings = new BedrockEmbeddings({
  region: process.env.BEDROCK_AWS_REGION!,
  credentials: {
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  },
  model: "amazon.titan-embed-text-v1", // Default value
});

const res = await embeddings.embedQuery(
  "What would be a good company name for a company that makes colorful socks?"
);

console.log({ res });
```
#### API Reference:
* [BedrockEmbeddings](https://api.js.langchain.com/classes/langchain_community_embeddings_bedrock.BedrockEmbeddings.html) from `@langchain/community/embeddings/bedrock`
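The newline stripping mentioned above is a simple preprocessing step. If you want to apply the same normalization yourself, for example when building cache keys for embeddings, a minimal sketch looks like this (the one-liner below is our illustration of the idea, not the library's internal code):

```typescript
// Replace newlines with spaces before embedding, as is commonly recommended
// for embedding inputs. Illustrative sketch, not copied from BedrockEmbeddings.
const stripNewLines = (text: string): string => text.replace(/\n/g, " ");

console.log(stripNewLines("colorful\nsocks")); // "colorful socks"
```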
Configuring the Bedrock Runtime Client[](#configuring-the-bedrock-runtime-client "Direct link to Configuring the Bedrock Runtime Client")
------------------------------------------------------------------------------------------------------------------------------------------
You can pass in your own instance of the `BedrockRuntimeClient` if you want to customize options like `credentials`, `region`, `retryPolicy`, etc.
```typescript
import { BedrockRuntimeClient } from "@aws-sdk/client-bedrock-runtime";
import { BedrockEmbeddings } from "@langchain/community/embeddings/bedrock";

const client = new BedrockRuntimeClient({
  region: "us-east-1",
  credentials: getCredentials(), // Supply your own AWS credentials provider here
});

const embeddings = new BedrockEmbeddings({
  client,
});
```
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/s3/
S3 File
=======
Compatibility
Only available on Node.js.
This covers how to load document objects from an S3 file object.
Setup[](#setup "Direct link to Setup")
---------------------------------------
To run this loader you'll need to have Unstructured already set up and ready to use at an available URL endpoint. It can also be configured to run locally.
See the docs [here](https://js.langchain.com/docs/modules/indexes/document_loaders/examples/file_loaders/unstructured) for information on how to do that.
You'll also need to install the official AWS SDK:
```bash
# npm
npm install @aws-sdk/client-s3

# Yarn
yarn add @aws-sdk/client-s3

# pnpm
pnpm add @aws-sdk/client-s3
```
Usage[](#usage "Direct link to Usage")
---------------------------------------
Once Unstructured is configured, you can use the S3 loader to load files and then convert them into a Document.
You can optionally provide an `s3Config` parameter to specify your bucket region, access key, and secret access key. If these are not provided, you will need to have them available in your environment (e.g., by running `aws configure`).
```typescript
import { S3Loader } from "langchain/document_loaders/web/s3";

const loader = new S3Loader({
  bucket: "my-document-bucket-123",
  key: "AccountingOverview.pdf",
  s3Config: {
    region: "us-east-1",
    credentials: {
      accessKeyId: "AKIAIOSFODNN7EXAMPLE",
      secretAccessKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    },
  },
  unstructuredAPIURL: "http://localhost:8000/general/v0/general",
  unstructuredAPIKey: "", // this will soon be required
});

const docs = await loader.load();

console.log(docs);
```
#### API Reference:
* [S3Loader](https://api.js.langchain.com/classes/langchain_document_loaders_web_s3.S3Loader.html) from `langchain/document_loaders/web/s3`
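Each item returned by `loader.load()` is a LangChain `Document`, which is essentially a `{ pageContent, metadata }` pair. If you just want to see the shape you'll be working with downstream, here is a dependency-free sketch (the field names match the standard `Document` shape; the sample values are made up):

```typescript
// Dependency-free sketch of the Document shape returned by loader.load().
// Sample content and metadata values are invented for illustration.
interface Doc {
  pageContent: string;
  metadata: Record<string, unknown>;
}

const docs: Doc[] = [
  {
    pageContent: "Q1 revenue grew 12%...",
    metadata: { source: "s3://my-document-bucket-123/AccountingOverview.pdf" },
  },
];

// Typical downstream step: concatenate page contents before summarization.
const fullText = docs.map((d) => d.pageContent).join("\n\n");
console.log(fullText.length > 0); // true
```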
https://js.langchain.com/v0.1/docs/integrations/llms/azure/
Azure OpenAI
============
[Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) is a cloud service to help you quickly develop generative AI experiences with a diverse set of prebuilt and curated models from OpenAI, Meta and beyond.
LangChain.js supports integration with [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) using either the dedicated [Azure OpenAI SDK](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai) or the [OpenAI SDK](https://github.com/openai/openai-node).
You can learn more about Azure OpenAI and how it differs from the OpenAI API on [this page](https://learn.microsoft.com/azure/ai-services/openai/overview). If you don't have an Azure account, you can [create a free account](https://azure.microsoft.com/free/) to get started.
Using the Azure OpenAI SDK[](#using-the-azure-openai-sdk "Direct link to Using the Azure OpenAI SDK")
------------------------------------------------------------------------------------------------------
You'll first need to install the [`@langchain/azure-openai`](https://www.npmjs.com/package/@langchain/azure-openai) package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install -S @langchain/azure-openai

# Yarn
yarn add @langchain/azure-openai

# pnpm
pnpm add @langchain/azure-openai
```
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
You'll also need to have an Azure OpenAI instance deployed. You can deploy a version on Azure Portal following [this guide](https://learn.microsoft.com/azure/ai-services/openai/how-to/create-resource?pivots=web-portal).
Once you have your instance running, make sure you have the endpoint and key. You can find them in the Azure Portal, under the "Keys and Endpoint" section of your instance.
You can then define the following environment variables to use the service:
```bash
AZURE_OPENAI_API_ENDPOINT=<YOUR_ENDPOINT>
AZURE_OPENAI_API_KEY=<YOUR_KEY>
AZURE_OPENAI_API_DEPLOYMENT_NAME=<YOUR_DEPLOYMENT_NAME>
```
Alternatively, you can pass the values directly to the `AzureOpenAI` constructor:
```typescript
import { AzureOpenAI } from "@langchain/azure-openai";

const model = new AzureOpenAI({
  azureOpenAIEndpoint: "<your_endpoint>",
  apiKey: "<your_key>",
  azureOpenAIApiDeploymentName: "<your_deployment_name>",
});
```
If you're using Azure Managed Identity, you can also pass the credentials directly to the constructor:

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { AzureOpenAI } from "@langchain/azure-openai";

const credentials = new DefaultAzureCredential();

const model = new AzureOpenAI({
  credentials,
  azureOpenAIEndpoint: "<your_endpoint>",
  azureOpenAIApiDeploymentName: "<your_deployment_name>",
  model: "<your_model>",
});
```
### LLM usage example[](#llm-usage-example "Direct link to LLM usage example")
```typescript
import { AzureOpenAI } from "@langchain/azure-openai";

export const run = async () => {
  const model = new AzureOpenAI({
    model: "gpt-4",
    temperature: 0.7,
    maxTokens: 1000,
    maxRetries: 5,
  });
  const res = await model.invoke(
    "Question: What would be a good company name for a company that makes colorful socks?\nAnswer:"
  );
  console.log({ res });
};
```
#### API Reference:
* [AzureOpenAI](https://api.js.langchain.com/classes/langchain_azure_openai.AzureOpenAI.html) from `@langchain/azure-openai`
### Chat usage example[](#chat-usage-example "Direct link to Chat usage example")
```typescript
import { AzureChatOpenAI } from "@langchain/azure-openai";

export const run = async () => {
  const model = new AzureChatOpenAI({
    model: "gpt-4",
    prefixMessages: [
      {
        role: "system",
        content: "You are a helpful assistant that answers in pirate language",
      },
    ],
    maxTokens: 50,
  });
  const res = await model.invoke(
    "What would be a good company name for a company that makes colorful socks?"
  );
  console.log({ res });
};
```
#### API Reference:
* [AzureChatOpenAI](https://api.js.langchain.com/classes/langchain_azure_openai.AzureChatOpenAI.html) from `@langchain/azure-openai`
Using OpenAI SDK[](#using-openai-sdk "Direct link to Using OpenAI SDK")
------------------------------------------------------------------------
You can also use the `OpenAI` class to call OpenAI models hosted on Azure.
For example, if your Azure instance is hosted under `https://{MY_INSTANCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}`, you could initialize your instance like this:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install @langchain/openai

# Yarn
yarn add @langchain/openai

# pnpm
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
  temperature: 0.9,
  apiKey: "YOUR-API-KEY",
  azureOpenAIApiVersion: "YOUR-API-VERSION",
  azureOpenAIApiInstanceName: "{MY_INSTANCE_NAME}",
  azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}",
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);

console.log({ res });
```
If your instance is hosted under a domain other than the default `openai.azure.com`, you'll need to use the alternate `AZURE_OPENAI_BASE_PATH` environment variable. For example, here's how you would connect to the domain `https://westeurope.api.microsoft.com/openai/deployments/{DEPLOYMENT_NAME}`:
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
  temperature: 0.9,
  apiKey: "YOUR-API-KEY",
  azureOpenAIApiVersion: "YOUR-API-VERSION",
  azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}",
  azureOpenAIBasePath:
    "https://westeurope.api.microsoft.com/openai/deployments", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);

console.log({ res });
```
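The two configurations above differ only in how the deployment URL is assembled: either from an instance name under the default `openai.azure.com` domain, or from an explicit base path. As a mental model, the helper below sketches that assembly (it is our illustration of the URL pattern, not a library API):

```typescript
// Illustrative sketch of how the Azure OpenAI deployment URL is assembled
// from the constructor fields shown above. Not part of the library.
const deploymentUrl = (opts: {
  instanceName?: string;
  basePath?: string;
  deploymentName: string;
}): string => {
  const base =
    opts.basePath ??
    `https://${opts.instanceName}.openai.azure.com/openai/deployments`;
  return `${base}/${opts.deploymentName}`;
};

console.log(deploymentUrl({ instanceName: "my-instance", deploymentName: "gpt-4" }));
// https://my-instance.openai.azure.com/openai/deployments/gpt-4
```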
* * *
#### Help us out by providing feedback on this documentation page:
[
Previous
AWS SageMakerEndpoint
](/v0.1/docs/integrations/llms/aws_sagemaker/)[
Next
Bedrock
](/v0.1/docs/integrations/llms/bedrock/)
* [Using the Azure OpenAI SDK](#using-the-azure-openai-sdk)
* [LLM usage example](#llm-usage-example)
* [Chat usage example](#chat-usage-example)
* [Using OpenAI SDK](#using-openai-sdk)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/integrations/chat_memory/dynamodb/
DynamoDB-Backed Chat Memory
===========================
For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for a DynamoDB instance.
Setup
-----
First, install the AWS DynamoDB client in your project:
* npm
* Yarn
* pnpm
npm install @aws-sdk/client-dynamodb
yarn add @aws-sdk/client-dynamodb
pnpm add @aws-sdk/client-dynamodb
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
Next, sign into your AWS account and create a DynamoDB table. Name the table `langchain`, and name your partition key `id`. Make sure the partition key is a string. You can leave the sort key and the other settings alone.
You'll also need to retrieve an AWS access key and secret key for a role or user that has access to the table and add them to your environment variables.
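If you'd rather script the table creation, the same setup can be done with the AWS CLI (assuming your credentials are already configured; on-demand billing here is just one option):

```shell
aws dynamodb create-table \
  --table-name langchain \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```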
Usage
-----
```typescript
import { BufferMemory } from "langchain/memory";
import { DynamoDBChatMessageHistory } from "@langchain/community/stores/message/dynamodb";
import { ChatOpenAI } from "@langchain/openai";
import { ConversationChain } from "langchain/chains";

const memory = new BufferMemory({
  chatHistory: new DynamoDBChatMessageHistory({
    tableName: "langchain",
    partitionKey: "id",
    sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation
    config: {
      region: "us-east-2",
      credentials: {
        accessKeyId: "<your AWS access key id>",
        secretAccessKey: "<your AWS secret access key>",
      },
    },
  }),
});

const model = new ChatOpenAI();
const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.invoke({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.invoke({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/
```
#### API Reference:
* [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory`
* [DynamoDBChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_dynamodb.DynamoDBChatMessageHistory.html) from `@langchain/community/stores/message/dynamodb`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains`
https://js.langchain.com/v0.1/docs/integrations/text_embedding/azure_openai/
Azure OpenAI
============
[Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) is a cloud service to help you quickly develop generative AI experiences with a diverse set of prebuilt and curated models from OpenAI, Meta and beyond.
LangChain.js supports integration with [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) using either the dedicated [Azure OpenAI SDK](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai) or the [OpenAI SDK](https://github.com/openai/openai-node).
You can learn more about Azure OpenAI and how it differs from the OpenAI API on [this page](https://learn.microsoft.com/azure/ai-services/openai/overview). If you don't have an Azure account, you can [create a free account](https://azure.microsoft.com/free/) to get started.
Using Azure OpenAI SDK
----------------------
You'll first need to install the [`@langchain/azure-openai`](https://www.npmjs.com/package/@langchain/azure-openai) package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install -S @langchain/azure-openai
yarn add @langchain/azure-openai
pnpm add @langchain/azure-openai
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
You'll also need to have an Azure OpenAI instance deployed. You can deploy a version on Azure Portal following [this guide](https://learn.microsoft.com/azure/ai-services/openai/how-to/create-resource?pivots=web-portal).
Once you have your instance running, make sure you have the endpoint and key. You can find them in the Azure Portal, under the "Keys and Endpoint" section of your instance.
You can then define the following environment variables to use the service:
```
AZURE_OPENAI_API_ENDPOINT=<YOUR_ENDPOINT>
AZURE_OPENAI_API_KEY=<YOUR_KEY>
AZURE_OPENAI_API_EMBEDDING_DEPLOYMENT_NAME=<YOUR_EMBEDDING_DEPLOYMENT_NAME>
```
Alternatively, you can pass the values directly to the `AzureOpenAI` constructor:
```typescript
import { AzureOpenAI } from "@langchain/azure-openai";

const model = new AzureOpenAI({
  azureOpenAIEndpoint: "<your_endpoint>",
  apiKey: "<your_key>",
  azureOpenAIApiDeploymentName: "<your_embedding_deployment_name>",
});
```
If you're using Azure Managed Identity, you can also pass the credentials directly to the constructor:
```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { AzureOpenAI } from "@langchain/azure-openai";

const credentials = new DefaultAzureCredential();

const model = new AzureOpenAI({
  credentials,
  azureOpenAIEndpoint: "<your_endpoint>",
  azureOpenAIApiDeploymentName: "<your_embedding_deployment_name>",
});
```
### Usage example
```typescript
import { AzureOpenAIEmbeddings } from "@langchain/azure-openai";

const model = new AzureOpenAIEmbeddings();
const res = await model.embedQuery(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```
#### API Reference:
* [AzureOpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_azure_openai.AzureOpenAIEmbeddings.html) from `@langchain/azure-openai`
Using OpenAI SDK
----------------
The `OpenAIEmbeddings` class can also use the OpenAI API on Azure to generate embeddings for a given text. By default it strips new line characters from the text, as recommended by OpenAI, but you can disable this by passing `stripNewLines: false` to the constructor.
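The effect of that default is roughly the following (a sketch of the behavior, not the library's actual implementation — `stripNewLines` here is a hypothetical stand-in):

```typescript
// Replaces newline characters with spaces, mirroring the preprocessing
// that OpenAIEmbeddings applies by default before embedding text.
function stripNewLines(text: string): string {
  return text.replace(/\n/g, " ");
}

console.log(stripNewLines("colorful\nsocks"));
// colorful socks
```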
For example, if your Azure instance is hosted under `https://{MY_INSTANCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}`, you could initialize your instance like this:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({
  azureOpenAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
  azureOpenAIApiVersion: "YOUR-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
  azureOpenAIApiInstanceName: "{MY_INSTANCE_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
  azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
});
```
If you'd like to initialize using environment variable defaults, `process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME` will be checked first, then `process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME`. This can be useful if you're using these embeddings alongside another Azure OpenAI model.
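That fallback order can be pictured with a small hypothetical helper (`env` stands in for `process.env`; this illustrates the documented behavior, not library code):

```typescript
// Resolves the embeddings deployment name, preferring the
// embeddings-specific variable over the generic one.
function resolveEmbeddingsDeployment(
  env: Record<string, string | undefined>
): string | undefined {
  return (
    env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME ??
    env.AZURE_OPENAI_API_DEPLOYMENT_NAME
  );
}
```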
If your instance is hosted under a domain other than the default `openai.azure.com`, you'll need to use the alternate `AZURE_OPENAI_BASE_PATH` environment variable. For example, here's how you would connect to the domain `https://westeurope.api.microsoft.com/openai/deployments/{DEPLOYMENT_NAME}`:
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({
  azureOpenAIApiKey: "YOUR-API-KEY",
  azureOpenAIApiVersion: "YOUR-API-VERSION",
  azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}",
  azureOpenAIBasePath: "https://westeurope.api.microsoft.com/openai/deployments", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH
});
```
https://js.langchain.com/v0.1/docs/integrations/chat/azure/
Azure ChatOpenAI
================
[Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) is a cloud service to help you quickly develop generative AI experiences with a diverse set of prebuilt and curated models from OpenAI, Meta and beyond.
LangChain.js supports integration with [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) using either the dedicated [Azure OpenAI SDK](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai) or the [OpenAI SDK](https://github.com/openai/openai-node).
You can learn more about Azure OpenAI and how it differs from the OpenAI API on [this page](https://learn.microsoft.com/azure/ai-services/openai/overview). If you don't have an Azure account, you can [create a free account](https://azure.microsoft.com/free/) to get started.
Using the OpenAI SDK
--------------------
You can use the `ChatOpenAI` class to access OpenAI instances hosted on Azure.
For example, if your Azure instance is hosted under `https://{MY_INSTANCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}`, you could initialize your instance like this:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  temperature: 0.9,
  azureOpenAIApiKey: "SOME_SECRET_VALUE", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
  azureOpenAIApiVersion: "YOUR-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
  azureOpenAIApiInstanceName: "{MY_INSTANCE_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
  azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
});
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
If your instance is hosted under a domain other than the default `openai.azure.com`, you'll need to use the alternate `AZURE_OPENAI_BASE_PATH` environment variable. For example, here's how you would connect to the domain `https://westeurope.api.microsoft.com/openai/deployments/{DEPLOYMENT_NAME}`:
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  temperature: 0.9,
  azureOpenAIApiKey: "SOME_SECRET_VALUE", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
  azureOpenAIApiVersion: "YOUR-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
  azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
  azureOpenAIBasePath: "https://westeurope.api.microsoft.com/openai/deployments", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH
});
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
Using the Azure OpenAI SDK
--------------------------
You'll first need to install the [`@langchain/azure-openai`](https://www.npmjs.com/package/@langchain/azure-openai) package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install -S @langchain/azure-openai
yarn add @langchain/azure-openai
pnpm add @langchain/azure-openai
You'll also need to have an Azure OpenAI instance deployed. You can deploy a version on Azure Portal following [this guide](https://learn.microsoft.com/azure/ai-services/openai/how-to/create-resource?pivots=web-portal).
Once you have your instance running, make sure you have the endpoint and key. You can find them in the Azure Portal, under the "Keys and Endpoint" section of your instance.
You can then define the following environment variables to use the service:
```
AZURE_OPENAI_API_ENDPOINT=<YOUR_ENDPOINT>
AZURE_OPENAI_API_KEY=<YOUR_KEY>
AZURE_OPENAI_API_EMBEDDING_DEPLOYMENT_NAME=<YOUR_EMBEDDING_DEPLOYMENT_NAME>
```
Alternatively, you can pass the values directly to the `AzureOpenAI` constructor:
```typescript
import { AzureChatOpenAI } from "@langchain/azure-openai";

const model = new AzureChatOpenAI({
  azureOpenAIEndpoint: "<your_endpoint>",
  apiKey: "<your_key>",
  azureOpenAIApiDeploymentName: "<your_deployment_name>",
  model: "<your_model>",
});
```
If you're using Azure Managed Identity, you can also pass the credentials directly to the constructor:
```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { AzureChatOpenAI } from "@langchain/azure-openai";

const credentials = new DefaultAzureCredential();

const model = new AzureChatOpenAI({
  credentials,
  azureOpenAIEndpoint: "<your_endpoint>",
  azureOpenAIApiDeploymentName: "<your_deployment_name>",
  model: "<your_model>",
});
```
### Usage example
```typescript
import { AzureChatOpenAI } from "@langchain/azure-openai";

const model = new AzureChatOpenAI({
  model: "gpt-4",
  prefixMessages: [
    {
      role: "system",
      content: "You are a helpful assistant that answers in pirate language",
    },
  ],
  maxTokens: 50,
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```
#### API Reference:
* [AzureChatOpenAI](https://api.js.langchain.com/classes/langchain_azure_openai.AzureChatOpenAI.html) from `@langchain/azure-openai`
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/
Azure Blob Storage Container
============================
Compatibility: Only available on Node.js.
This covers how to load a container on Azure Blob Storage into LangChain documents.
Setup
-----
To run this loader, you'll need to have Unstructured already set up and ready to use at an available URL endpoint. It can also be configured to run locally.
See the docs [here](https://js.langchain.com/docs/modules/indexes/document_loaders/examples/file_loaders/unstructured) for information on how to do that.
You'll also need to install the official Azure Storage Blob client library:
```bash
# npm
npm install @azure/storage-blob

# Yarn
yarn add @azure/storage-blob

# pnpm
pnpm add @azure/storage-blob
```
Usage
-----
Once Unstructured is configured, you can use the Azure Blob Storage Container loader to load files and then convert them into a Document.
```typescript
import { AzureBlobStorageContainerLoader } from "langchain/document_loaders/web/azure_blob_storage_container";

const loader = new AzureBlobStorageContainerLoader({
  azureConfig: {
    connectionString: "",
    container: "container_name",
  },
  unstructuredConfig: {
    apiUrl: "http://localhost:8000/general/v0/general",
    apiKey: "", // this will soon be required
  },
});

const docs = await loader.load();

console.log(docs);
```
#### API Reference:
* [AzureBlobStorageContainerLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_azure_blob_storage_container.AzureBlobStorageContainerLoader.html) from `langchain/document_loaders/web/azure_blob_storage_container`
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/
Azure Blob Storage File
=======================
Compatibility: Only available on Node.js.
This covers how to load an Azure File into LangChain documents.
Setup
-----
To use this loader, you'll need to have Unstructured already set up and ready to use at an available URL endpoint. It can also be configured to run locally.
See the docs [here](https://js.langchain.com/docs/modules/indexes/document_loaders/examples/file_loaders/unstructured) for information on how to do that.
You'll also need to install the official Azure Storage Blob client library:
```bash
# npm
npm install @azure/storage-blob

# Yarn
yarn add @azure/storage-blob

# pnpm
pnpm add @azure/storage-blob
```
Usage
-----
Once Unstructured is configured, you can use the Azure Blob Storage File loader to load files and then convert them into a Document.
```typescript
import { AzureBlobStorageFileLoader } from "langchain/document_loaders/web/azure_blob_storage_file";

const loader = new AzureBlobStorageFileLoader({
  azureConfig: {
    connectionString: "",
    container: "container_name",
    blobName: "example.txt",
  },
  unstructuredConfig: {
    apiUrl: "http://localhost:8000/general/v0/general",
    apiKey: "", // this will soon be required
  },
});

const docs = await loader.load();

console.log(docs);
```
#### API Reference:
* [AzureBlobStorageFileLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_azure_blob_storage_file.AzureBlobStorageFileLoader.html) from `langchain/document_loaders/web/azure_blob_storage_file`
https://js.langchain.com/v0.1/docs/integrations/llms/ai21/
AI21
====
You can get started with AI21 Labs' Jurassic family of models, and see a full list of their available foundation models, by signing up for an API key [on their website](https://www.ai21.com/).
Here's an example of initializing an instance in LangChain.js:
Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install @langchain/community

# Yarn
yarn add @langchain/community

# pnpm
pnpm add @langchain/community
```
```typescript
import { AI21 } from "@langchain/community/llms/ai21";

const model = new AI21({
  ai21ApiKey: "YOUR_AI21_API_KEY", // Or set as process.env.AI21_API_KEY
});

const res = await model.invoke(`Translate "I love programming" into German.`);

console.log({ res });

/*
  { res: "\nIch liebe das Programmieren." }
*/
```
#### API Reference:
* [AI21](https://api.js.langchain.com/classes/langchain_community_llms_ai21.AI21.html) from `@langchain/community/llms/ai21`
https://js.langchain.com/v0.1/docs/integrations/llms/openai/
OpenAI
======
Here's how you can initialize an `OpenAI` LLM instance:
Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install @langchain/openai

# Yarn
yarn add @langchain/openai

# pnpm
pnpm add @langchain/openai
```
Tip: We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
  model: "gpt-3.5-turbo-instruct", // Defaults to "gpt-3.5-turbo-instruct" if no model provided.
  temperature: 0.9,
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);

console.log({ res });
```
If you're part of an organization, you can set `process.env.OPENAI_ORGANIZATION` to your OpenAI organization id, or pass it in as `organization` when initializing the model.
Custom URLs
-----------
You can customize the base URL the SDK sends requests to by passing a `configuration` parameter like this:
```typescript
const model = new OpenAI({
  temperature: 0.9,
  configuration: {
    baseURL: "https://your_custom_url.com",
  },
});
```
You can also pass other `ClientOptions` parameters accepted by the official SDK.
If you are hosting on Azure OpenAI, see the [dedicated page instead](/v0.1/docs/integrations/llms/azure/).
https://js.langchain.com/v0.1/docs/integrations/llms/cloudflare_workersai/
Cloudflare Workers AI
=====================
Info: Workers AI is currently in open beta and is not recommended for production data and traffic; limits and access are subject to change.
Workers AI allows you to run machine learning models, on the Cloudflare network, from your own code.
Usage
-----
You'll first need to install the LangChain Cloudflare integration package:
Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install @langchain/cloudflare

# Yarn
yarn add @langchain/cloudflare

# pnpm
pnpm add @langchain/cloudflare
```
```typescript
import { CloudflareWorkersAI } from "@langchain/cloudflare";

const model = new CloudflareWorkersAI({
  model: "@cf/meta/llama-2-7b-chat-int8", // Default value
  cloudflareAccountId: process.env.CLOUDFLARE_ACCOUNT_ID,
  cloudflareApiToken: process.env.CLOUDFLARE_API_TOKEN,
  // Pass a custom base URL to use Cloudflare AI Gateway
  // baseUrl: `https://gateway.ai.cloudflare.com/v1/{YOUR_ACCOUNT_ID}/{GATEWAY_NAME}/workers-ai/`,
});

const response = await model.invoke(
  `Translate "I love programming" into German.`
);

console.log(response);

/*
  Here are a few options:

  1. "Ich liebe Programmieren" - This is the most common way to say "I love programming" in German. "Liebe" means "love" in German, and "Programmieren" means "programming".
  2. "Programmieren macht mir Spaß" - This means "Programming makes me happy". This is a more casual way to express your love for programming in German.
  3. "Ich bin ein großer Fan von Programmieren" - This means "I'm a big fan of programming". This is a more formal way to express your love for programming in German.
  4. "Programmieren ist mein Hobby" - This means "Programming is my hobby". This is a more casual way to express your love for programming in German.
  5. "Ich liebe es, Programme zu schreiben" - This means "I love writing programs". This is a more formal way to express your love for programming in German.
*/

const stream = await model.stream(
  `Translate "I love programming" into German.`
);

for await (const chunk of stream) {
  console.log(chunk);
}

/*
  Here
  are
  a
  few
  options
  ...
*/
```
#### API Reference:
* [CloudflareWorkersAI](https://api.js.langchain.com/classes/langchain_cloudflare.CloudflareWorkersAI.html) from `@langchain/cloudflare`
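As noted in the commented-out `baseUrl` above, requests can be routed through Cloudflare AI Gateway. A minimal sketch of assembling that base URL — the helper function below is our own illustration, not part of the LangChain API, and the values passed to it are placeholders:

```typescript
// Illustrative helper (not part of LangChain): builds the Workers AI
// base URL for Cloudflare AI Gateway from an account ID and gateway name,
// following the path scheme shown in the commented-out `baseUrl` above.
function workersAiGatewayUrl(accountId: string, gatewayName: string): string {
  return `https://gateway.ai.cloudflare.com/v1/${accountId}/${gatewayName}/workers-ai/`;
}

// Placeholder values for illustration only.
console.log(workersAiGatewayUrl("my-account-id", "my-gateway"));
// → https://gateway.ai.cloudflare.com/v1/my-account-id/my-gateway/workers-ai/
```

The resulting string can then be passed as the `baseUrl` option shown above.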
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/llms/aleph_alpha/
AlephAlpha
==========
LangChain.js supports AlephAlpha's Luminous family of models. You'll need to sign up for an API key [on their website](https://www.aleph-alpha.com/).
Here's an example:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/community`
* Yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`
```typescript
import { AlephAlpha } from "@langchain/community/llms/aleph_alpha";

const model = new AlephAlpha({
  aleph_alpha_api_key: "YOUR_ALEPH_ALPHA_API_KEY", // Or set as process.env.ALEPH_ALPHA_API_KEY
});

const res = await model.invoke(`Is cereal soup?`);
console.log({ res });
/*
  { res: "\nIs soup a cereal? I don’t think so, but it is delicious." }
*/
```
#### API Reference:
* [AlephAlpha](https://api.js.langchain.com/classes/langchain_community_llms_aleph_alpha.AlephAlpha.html) from `@langchain/community/llms/aleph_alpha`
https://js.langchain.com/v0.1/docs/integrations/llms/cohere/
Cohere
======
LangChain.js supports Cohere LLMs. You'll first need to install the [`@langchain/cohere`](https://www.npmjs.com/package/@langchain/cohere) package. Here's an example:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/cohere`
* Yarn: `yarn add @langchain/cohere`
* pnpm: `pnpm add @langchain/cohere`
```typescript
import { Cohere } from "@langchain/cohere";

const model = new Cohere({
  maxTokens: 20,
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.COHERE_API_KEY
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```
#### API Reference:
* [Cohere](https://api.js.langchain.com/classes/langchain_cohere.Cohere.html) from `@langchain/cohere`
https://js.langchain.com/v0.1/docs/integrations/llms/fake/
Fake LLM
========
LangChain provides a fake LLM for testing purposes. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.
Usage
-----
```typescript
import { FakeListLLM } from "langchain/llms/fake";

/**
 * The FakeListLLM can be used to simulate ordered predefined responses.
 */
const llm = new FakeListLLM({
  responses: ["I'll callback later.", "You 'console' them!"],
});

const firstResponse = await llm.invoke("You want to hear a JavaScript joke?");
const secondResponse = await llm.invoke(
  "How do you cheer up a JavaScript developer?"
);
console.log({ firstResponse });
console.log({ secondResponse });

/**
 * The FakeListLLM can also be used to simulate streamed responses.
 */
const stream = await llm.stream("You want to hear a JavaScript joke?");
const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
}
console.log(chunks.join(""));

/**
 * The FakeListLLM can also be used to simulate delays in either
 * synchronous or streamed responses.
 */
const slowLLM = new FakeListLLM({
  responses: ["Because Oct 31 equals Dec 25", "You 'console' them!"],
  sleep: 1000,
});

const slowResponse = await slowLLM.invoke(
  "Why do programmers always mix up Halloween and Christmas?"
);
console.log({ slowResponse });

const slowStream = await slowLLM.stream(
  "How do you cheer up a JavaScript developer?"
);
const slowChunks = [];
for await (const chunk of slowStream) {
  slowChunks.push(chunk);
}
console.log(slowChunks.join(""));
```
#### API Reference:
* [FakeListLLM](https://api.js.langchain.com/classes/langchain_llms_fake.FakeListLLM.html) from `langchain/llms/fake`
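Conceptually, a fake list LLM just hands back its configured responses in order. The toy class below is our own sketch of that idea — not the LangChain implementation, whose wrap-around and streaming details may differ:

```typescript
// Toy stand-in for FakeListLLM's core behavior (illustration only):
// return canned responses in order, cycling back once the list is exhausted.
class TinyFakeLLM {
  private i = 0;

  constructor(private responses: string[]) {}

  async invoke(_prompt: string): Promise<string> {
    const response = this.responses[this.i];
    this.i = (this.i + 1) % this.responses.length;
    return response;
  }
}

const toy = new TinyFakeLLM(["I'll callback later.", "You 'console' them!"]);
console.log(await toy.invoke("first prompt")); // "I'll callback later."
console.log(await toy.invoke("second prompt")); // "You 'console' them!"
console.log(await toy.invoke("third prompt")); // wraps around: "I'll callback later."
```

Because the prompt is ignored, tests can assert on deterministic output regardless of what they send in.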
https://js.langchain.com/v0.1/docs/integrations/llms/fireworks/
Fireworks
=========
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/community`
* Yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`
You can use models provided by Fireworks AI as follows:
```typescript
import { Fireworks } from "@langchain/community/llms/fireworks";

const model = new Fireworks({
  temperature: 0.9,
  // In Node.js defaults to process.env.FIREWORKS_API_KEY
  apiKey: "YOUR-API-KEY",
});
```
#### API Reference:
* [Fireworks](https://api.js.langchain.com/classes/langchain_community_llms_fireworks.Fireworks.html) from `@langchain/community/llms/fireworks`
Behind the scenes, Fireworks AI uses the OpenAI SDK and OpenAI compatible API, with some caveats:
* Certain properties are not supported by the Fireworks API, see [here](https://readme.fireworks.ai/docs/openai-compatibility#api-compatibility).
* Generation using multiple prompts is not supported.
https://js.langchain.com/v0.1/docs/integrations/llms/friendli/
Friendli
========
> [Friendli](https://friendli.ai/) enhances AI application performance and optimizes cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads.
This tutorial guides you through integrating `Friendli` with LangChain.
Setup
-----
Ensure the `@langchain/community` package is installed.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/community`
* Yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`
Sign in to [Friendli Suite](https://suite.friendli.ai/) to create a Personal Access Token, and set it as the `FRIENDLI_TOKEN` environment variable. Optionally, set your team ID as the `FRIENDLI_TEAM` environment variable.
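For example, in a Unix shell the two environment variables can be exported before starting your app — the values below are placeholders, not real credentials:

```shell
# Placeholder values — substitute your own Friendli credentials.
export FRIENDLI_TOKEN="your-personal-access-token"
# Optional: only needed if you are calling the API as part of a team.
export FRIENDLI_TEAM="your-team-id"
```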
You can initialize a Friendli model by selecting the model you want to use. The default model is `mixtral-8x7b-instruct-v0-1`. You can check the available models at [docs.friendli.ai](https://docs.friendli.ai/guides/serverless_endpoints/pricing#text-generation-models).
Usage
-----
```typescript
import { Friendli } from "@langchain/community/llms/friendli";

const model = new Friendli({
  model: "mixtral-8x7b-instruct-v0-1", // Default value
  friendliToken: process.env.FRIENDLI_TOKEN,
  friendliTeam: process.env.FRIENDLI_TEAM,
  maxTokens: 18,
  temperature: 0.75,
  topP: 0.25,
  frequencyPenalty: 0,
  stop: [],
});

const response = await model.invoke(
  "Check the Grammar: She dont like to eat vegetables, but she loves fruits."
);
console.log(response);
/*
  Correct: She doesn't like to eat vegetables, but she loves fruits
*/

const stream = await model.stream(
  "Check the Grammar: She dont like to eat vegetables, but she loves fruits."
);
for await (const chunk of stream) {
  console.log(chunk);
}
/*
  Correct: She doesn...she loves fruits
*/
```
#### API Reference:
* [Friendli](https://api.js.langchain.com/classes/langchain_community_llms_friendli.Friendli.html) from `@langchain/community/llms/friendli`
https://js.langchain.com/v0.1/docs/integrations/llms/google_vertex_ai/ | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
Google Vertex AI
================
LangChain.js supports two different authentication methods based on whether you're running in a Node.js environment or a web environment.
Setup[](#setup "Direct link to Setup")
---------------------------------------
### Node.js[](#nodejs "Direct link to Node.js")
To call Vertex AI models in Node, you'll need to install the `@langchain/google-vertexai` package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
You should make sure the Vertex AI API is enabled for the relevant project and that you've authenticated to Google Cloud using one of these methods:
* You are logged into an account (using `gcloud auth application-default login`) that is permitted to use that project, **or**
* You are running on a machine using a service account that is permitted to use the project, **or**
* You have downloaded the credentials for a service account that is permitted to use the project and set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of this file, **or**
* You have set the `GOOGLE_API_KEY` environment variable to the API key for the project.
### Web[](#web "Direct link to Web")
To call Vertex AI models in web environments (like Edge functions), you'll need to install the `@langchain/google-vertexai-web` package:
* npm
* Yarn
* pnpm
npm install @langchain/google-vertexai-web
yarn add @langchain/google-vertexai-web
pnpm add @langchain/google-vertexai-web
Then, you'll need to add your service account credentials directly as a `GOOGLE_VERTEX_AI_WEB_CREDENTIALS` environment variable:
GOOGLE_VERTEX_AI_WEB_CREDENTIALS={"type":"service_account","project_id":"YOUR_PROJECT-12345",...}
You can also pass your credentials directly in code like this:
import { VertexAI } from "@langchain/google-vertexai";
// Or uncomment this line if you're using the web version:
// import { VertexAI } from "@langchain/google-vertexai-web";

const model = new VertexAI({
  authOptions: {
    credentials: {"type":"service_account","project_id":"YOUR_PROJECT-12345",...},
  },
});
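Since the `GOOGLE_VERTEX_AI_WEB_CREDENTIALS` value is just a JSON string, one way to bridge the two approaches is to parse the environment variable yourself and pass the result as `authOptions.credentials`. A minimal sketch (the helper name is illustrative, not part of the library):

```javascript
// Parse the service-account JSON stored in GOOGLE_VERTEX_AI_WEB_CREDENTIALS.
// Returns undefined when unset so the library's default auth flow can take over.
function parseWebCredentials(env = process.env) {
  const raw = env.GOOGLE_VERTEX_AI_WEB_CREDENTIALS;
  if (!raw) return undefined;
  const credentials = JSON.parse(raw);
  if (credentials.type !== "service_account") {
    throw new Error("Expected a service_account credentials object");
  }
  return credentials;
}
```

The parsed object could then be supplied as `new VertexAI({ authOptions: { credentials } })`.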
Usage[](#usage "Direct link to Usage")
---------------------------------------
The entire family of `gemini` models is available by specifying the `modelName` parameter.
import { VertexAI } from "@langchain/google-vertexai";
// Or, if using the web entrypoint:
// import { VertexAI } from "@langchain/google-vertexai-web";

const model = new VertexAI({
  temperature: 0.7,
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);

console.log({ res });

/*
{
  res: '* Hue Hues\n' +
    '* Sock Spectrum\n' +
    '* Kaleidosocks\n' +
    '* Threads of Joy\n' +
    '* Vibrant Threads\n' +
    '* Rainbow Soles\n' +
    '* Colorful Canvases\n' +
    '* Prismatic Pedals\n' +
    '* Sock Canvas\n' +
    '* Color Collective'
}
*/
#### API Reference:
* [VertexAI](https://api.js.langchain.com/classes/langchain_google_vertexai.VertexAI.html) from `@langchain/google-vertexai`
### Streaming[](#streaming "Direct link to Streaming")
Streaming in multiple chunks is supported for faster responses:
import { VertexAI } from "@langchain/google-vertexai";
// Or, if using the web entrypoint:
// import { VertexAI } from "@langchain/google-vertexai-web";

const model = new VertexAI({
  temperature: 0.7,
});

const stream = await model.stream(
  "What would be a good company name for a company that makes colorful socks?"
);

for await (const chunk of stream) {
  console.log("\n---------\nChunk:\n---------\n", chunk);
}

/*
---------
Chunk:
---------
 * Kaleidoscope Toes
* Huephoria
* Soleful Spectrum
*
---------
Chunk:
---------
 Colorwave Hosiery
* Chromatic Threads
* Rainbow Rhapsody
* Vibrant Soles
* Toe-tally Colorful
* Socktacular Hues
*
---------
Chunk:
---------
 Threads of Joy
---------
Chunk:
---------
*/
#### API Reference:
* [VertexAI](https://api.js.langchain.com/classes/langchain_google_vertexai.VertexAI.html) from `@langchain/google-vertexai`
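Each streamed chunk is a plain string, so if you also want the full completion at the end you can accumulate the chunks as they arrive. A small sketch of that pattern, shown here with a stand-in async generator rather than a live model call:

```javascript
// Accumulate chunks from any async-iterable stream into one string,
// optionally invoking a callback per chunk for incremental display.
async function collectStream(stream, onChunk) {
  let full = "";
  for await (const chunk of stream) {
    full += chunk;
    if (onChunk) onChunk(chunk);
  }
  return full;
}

// Stand-in for `await model.stream(...)` so the sketch is self-contained.
async function* fakeStream() {
  yield "Hello, ";
  yield "world!";
}
```

With a real model you would pass the result of `model.stream(...)` in place of `fakeStream()`.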
* * *
https://js.langchain.com/v0.1/docs/integrations/llms/huggingface_inference/
HuggingFaceInference
====================
Here's an example of calling a HuggingFaceInference model as an LLM:
* npm
* Yarn
* pnpm
npm install @huggingface/inference@2
yarn add @huggingface/inference@2
pnpm add @huggingface/inference@2
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
import { HuggingFaceInference } from "langchain/llms/hf";

const model = new HuggingFaceInference({
  model: "gpt2",
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.HUGGINGFACEHUB_API_KEY
});

const res = await model.invoke("1 + 1 =");

console.log({ res });
* * *
https://js.langchain.com/v0.1/docs/integrations/llms/gradient_ai/
Gradient AI
===========
LangChain.js supports integration with Gradient AI. Check out [Gradient AI](https://docs.gradient.ai/docs) for a list of available models.
Setup[](#setup "Direct link to Setup")
---------------------------------------
You'll need to install the official Gradient Node SDK as a peer dependency:
* npm
* Yarn
* pnpm
npm i @gradientai/nodejs-sdk
yarn add @gradientai/nodejs-sdk
pnpm add @gradientai/nodejs-sdk
You will need to set the following environment variables for using the Gradient AI API.
1. `GRADIENT_ACCESS_TOKEN`
2. `GRADIENT_WORKSPACE_ID`
Alternatively, these can be set during `GradientLLM` class instantiation as `gradientAccessKey` and `workspaceId` respectively. For example:

const model = new GradientLLM({
  gradientAccessKey: "My secret Access Token",
  workspaceId: "My secret workspace id",
});
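One way to support both configuration styles in your own code is to resolve explicit constructor options first and fall back to the environment variables. A sketch of that lookup (the helper name is hypothetical, not part of the SDK):

```javascript
// Resolve Gradient AI credentials: explicit options win, then env vars.
// Failing early surfaces a missing token at construction time, not on first call.
function resolveGradientConfig(options = {}, env = process.env) {
  const gradientAccessKey =
    options.gradientAccessKey ?? env.GRADIENT_ACCESS_TOKEN;
  const workspaceId = options.workspaceId ?? env.GRADIENT_WORKSPACE_ID;
  if (!gradientAccessKey || !workspaceId) {
    throw new Error(
      "Set GRADIENT_ACCESS_TOKEN and GRADIENT_WORKSPACE_ID or pass them explicitly"
    );
  }
  return { gradientAccessKey, workspaceId };
}
```

The resolved object can then be spread into the `GradientLLM` constructor options.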
Usage[](#usage "Direct link to Usage")
---------------------------------------
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
### Using Gradient's Base Models[](#using-gradients-base-models "Direct link to Using Gradient's Base Models")
import { GradientLLM } from "@langchain/community/llms/gradient_ai";

// Note that inferenceParameters are optional
const model = new GradientLLM({
  modelSlug: "llama2-7b-chat",
  inferenceParameters: {
    maxGeneratedTokenCount: 20,
    temperature: 0,
  },
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);

console.log({ res });
#### API Reference:
* [GradientLLM](https://api.js.langchain.com/classes/langchain_community_llms_gradient_ai.GradientLLM.html) from `@langchain/community/llms/gradient_ai`
### Using your own fine-tuned Adapters[](#using-your-own-fine-tuned-adapters "Direct link to Using your own fine-tuned Adapters")
To use your own custom adapter, simply set `adapterId` during setup.
import { GradientLLM } from "@langchain/community/llms/gradient_ai";

// Note that inferenceParameters are optional
const model = new GradientLLM({
  adapterId: process.env.GRADIENT_ADAPTER_ID,
  inferenceParameters: {
    maxGeneratedTokenCount: 20,
    temperature: 0,
  },
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);

console.log({ res });
#### API Reference:
* [GradientLLM](https://api.js.langchain.com/classes/langchain_community_llms_gradient_ai.GradientLLM.html) from `@langchain/community/llms/gradient_ai`
* * *
https://js.langchain.com/v0.1/docs/integrations/llms/llama_cpp/
Llama CPP
=========
Compatibility
Only available on Node.js.
This module is based on the [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) Node.js bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp), allowing you to work with a locally running LLM. You can use a much smaller quantized model capable of running on a laptop, which is ideal for testing and sketching out ideas without running up a bill!
Setup[](#setup "Direct link to Setup")
---------------------------------------
You'll need to install the [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) module to communicate with your local model.
* npm
* Yarn
* pnpm
npm install -S node-llama-cpp
yarn add node-llama-cpp
pnpm add node-llama-cpp
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
You will also need a local Llama 2 model (or a model supported by [node-llama-cpp](https://github.com/withcatai/node-llama-cpp)). You will need to pass the path to this model to the LlamaCpp module as a part of the parameters (see example).
Out of the box, `node-llama-cpp` is tuned for running on macOS with support for the Metal GPU of Apple M-series processors. If you need to turn this off, or need support for the CUDA architecture, refer to the documentation at [node-llama-cpp](https://withcatai.github.io/node-llama-cpp/).
A note to LangChain.js contributors: if you want to run the tests associated with this module you will need to put the path to your local model in the environment variable `LLAMA_PATH`.
Guide to installing Llama2[](#guide-to-installing-llama2 "Direct link to Guide to installing Llama2")
------------------------------------------------------------------------------------------------------
Getting a local Llama 2 model running on your machine is a prerequisite, so this is a quick guide to getting and building Llama 2 7B (the smallest) and then quantizing it so that it will run comfortably on a laptop. To do this you will need `python3` on your machine (3.11 is recommended), as well as `gcc` and `make`, so that `llama.cpp` can be built.
### Getting the Llama2 models[](#getting-the-llama2-models "Direct link to Getting the Llama2 models")
To get a copy of Llama 2 you need to visit [Meta AI](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and request access to their models. Once Meta AI grants you access, you will receive an email containing a unique URL to access the files; this will be needed in the next steps. Now create a directory to work in, for example:
mkdir llama2
cd llama2
Now we need to get the Meta AI `llama` repo in place so we can download the model.
git clone https://github.com/facebookresearch/llama.git
Once we have this in place, we can change into this directory and run the downloader script to get the model we will be working with. Note: from here on it's assumed that the model in use is `llama-2-7b`; if you select a different model, don't forget to change the references to the model accordingly.
cd llama
/bin/bash ./download.sh
This script will ask you for the URL that Meta AI sent to you (see above); you will also select the model to download, in this case `llama-2-7b`. Once this step has completed successfully (this can take some time; the `llama-2-7b` model is around 13.5Gb), there should be a new `llama-2-7b` directory containing the model and other files.
### Converting and quantizing the model[](#converting-and-quantizing-the-model "Direct link to Converting and quantizing the model")
In this step we need to use `llama.cpp` so we need to download that repo.
cd ..
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
Now we need to build the `llama.cpp` tools and set up our `python` environment. In these steps it's assumed that your install of python can be run using `python3` and that the virtual environment can be called `llama2`, adjust accordingly for your own situation.
make
python3 -m venv llama2
source llama2/bin/activate
After activating your llama2 environment you should see `(llama2)` prefixing your command prompt, to let you know this is the active environment. Note: if you need to come back later to build another model or re-quantize the model, don't forget to activate the environment again; also, if you update `llama.cpp`, you will need to rebuild the tools and possibly install new or updated dependencies! Now that we have an active python environment, we need to install the python dependencies.
python3 -m pip install -r requirements.txt
Having done this, we can start converting and quantizing the Llama2 model ready for use locally via `llama.cpp`. First, we need to convert the model, prior to the conversion let's create a directory to store it in.
mkdir models/7B
python3 convert.py --outfile models/7B/gguf-llama2-f16.bin --outtype f16 ../../llama2/llama/llama-2-7b --vocab-dir ../../llama2/llama/llama-2-7b
This should create a converted model called `gguf-llama2-f16.bin` in the directory we just created. Note that this is just a converted model, so it is still around 13.5Gb in size; in the next step we will quantize it down to around 4Gb.
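The size drop from quantization is easy to sanity-check: f16 stores 2 bytes per weight, while `q4_0` packs each block of 32 weights into 18 bytes (16 bytes of 4-bit values plus a 2-byte scale), roughly 4.5 bits per weight. A quick back-of-the-envelope calculation (the 6.74B parameter count is the published Llama 2 7B figure; block sizes are approximate):

```javascript
// Approximate on-disk sizes for Llama 2 7B (~6.74e9 weights).
const weights = 6.74e9;
const f16Bytes = weights * 2; // 2 bytes per weight
const q4Bytes = weights * (18 / 32); // 18 bytes per 32-weight q4_0 block
const toGB = (bytes) => bytes / 1e9;
console.log(`f16:  ~${toGB(f16Bytes).toFixed(1)} GB`); // ~13.5 GB
console.log(`q4_0: ~${toGB(q4Bytes).toFixed(1)} GB`); // ~3.8 GB
```

This matches the observed file sizes: roughly 13.5Gb before quantization and around 4Gb after.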
./quantize ./models/7B/gguf-llama2-f16.bin ./models/7B/gguf-llama2-q4_0.bin q4_0
Running this should result in a new model being created in the `models/7B` directory, this one called `gguf-llama2-q4_0.bin`; this is the model we can use with LangChain. You can validate that this model is working by testing it using the `llama.cpp` tools.
./main -m ./models/7B/gguf-llama2-q4_0.bin -n 1024 --repeat_penalty 1.0 --color -i -r "User:" -f ./prompts/chat-with-bob.txt
Running this command fires up the model for a chat session. BTW if you are running out of disk space this small model is the only one we need, so you can backup and/or delete the original and converted 13.5Gb models.
Usage
---------------------------------------
```typescript
import { LlamaCpp } from "@langchain/community/llms/llama_cpp";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";
const question = "Where do Llamas come from?";

const model = new LlamaCpp({ modelPath: llamaPath });

console.log(`You: ${question}`);
const response = await model.invoke(question);
console.log(`AI : ${response}`);
```
#### API Reference:
* [LlamaCpp](https://api.js.langchain.com/classes/langchain_community_llms_llama_cpp.LlamaCpp.html) from `@langchain/community/llms/llama_cpp`
Streaming
---------------------------------------------------
```typescript
import { LlamaCpp } from "@langchain/community/llms/llama_cpp";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const model = new LlamaCpp({ modelPath: llamaPath, temperature: 0.7 });

const prompt = "Tell me a short story about a happy Llama.";

const stream = await model.stream(prompt);

for await (const chunk of stream) {
  console.log(chunk);
}

/*
  Once upon a time, in the rolling hills of Peru ...
*/
```
#### API Reference:
* [LlamaCpp](https://api.js.langchain.com/classes/langchain_community_llms_llama_cpp.LlamaCpp.html) from `@langchain/community/llms/llama_cpp`
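The `for await` loop above is ordinary async iteration over the returned stream. A self-contained sketch of the same consumption pattern, with a hypothetical `mockStream` generator standing in for `model.stream()`:

```typescript
// A mock async generator standing in for model.stream()
async function* mockStream(): AsyncGenerator<string> {
  for (const chunk of ["Once", " upon", " a", " time"]) yield chunk;
}

// Accumulate streamed chunks into a single string, exactly as you might
// with a real model stream.
async function collect(): Promise<string> {
  const chunks: string[] = [];
  for await (const chunk of mockStream()) {
    chunks.push(chunk);
  }
  return chunks.join("");
}

collect().then((text) => console.log(text)); // Once upon a time
```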
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc.
NIBittensor
===========
LangChain.js offers experimental support for Neural Internet's Bittensor LLM models.
Here's an example:
```typescript
import { NIBittensorLLM } from "langchain/experimental/llms/bittensor";

const model = new NIBittensorLLM();

const res = await model.invoke(`What is Bittensor?`);

console.log({ res });

/*
  { res: "\nBittensor is opensource protocol..." }
*/
```
#### API Reference:
* [NIBittensorLLM](https://api.js.langchain.com/classes/langchain_experimental_llms_bittensor.NIBittensorLLM.html) from `langchain/experimental/llms/bittensor`
Ollama
======
[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2, locally.
Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage.
This example goes over how to use LangChain to interact with an Ollama-run Llama 2 7b instance. For a complete list of supported models and model variants, see the [Ollama model library](https://github.com/jmorganca/ollama#model-library).
Setup
---------------------------------------
Follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance.
Usage
---------------------------------------
> **Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install @langchain/community

# Yarn
yarn add @langchain/community

# pnpm
pnpm add @langchain/community
```
```typescript
import { Ollama } from "@langchain/community/llms/ollama";

const ollama = new Ollama({
  baseUrl: "http://localhost:11434", // Default value
  model: "llama2", // Default value
});

const stream = await ollama.stream(
  `Translate "I love programming" into German.`
);

const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
}

console.log(chunks.join(""));

/*
  I'm glad to help! "I love programming" can be translated to German as
  "Ich liebe Programmieren."

  It's important to note that the translation of "I love" in German is
  "ich liebe," which is a more formal and polite way of saying "I love."
  In informal situations, people might use "mag ich" or "möchte ich" instead.

  Additionally, the word "Programmieren" is the correct term for
  "programming" in German. It's a combination of two words: "Programm" and
  "-ieren," which means "to do something." So, the full translation of
  "I love programming" would be "Ich liebe Programmieren.
*/
```
#### API Reference:
* [Ollama](https://api.js.langchain.com/classes/langchain_community_llms_ollama.Ollama.html) from `@langchain/community/llms/ollama`
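Under the hood, the Ollama server streams newline-delimited JSON over HTTP. As a rough sketch of that wire format (the field names follow the Ollama REST API; the sample payload below is invented), reassembling a completion is just joining the `response` fields:

```typescript
// Sketch: reassemble a completion from Ollama-style NDJSON.
// Assumption (from the Ollama REST API): each line of the response body is
// a JSON object like {"response":"...","done":false}; concatenating the
// "response" fields yields the full completion.
const reassemble = (body: string): string =>
  body
    .trim()
    .split("\n")
    .map((line) => JSON.parse(line).response as string)
    .join("");

// Invented sample payload, three streamed chunks:
const sample =
  '{"response":"Ich","done":false}\n' +
  '{"response":" liebe","done":false}\n' +
  '{"response":" Programmieren.","done":true}';

console.log(reassemble(sample)); // Ich liebe Programmieren.
```

The `Ollama` class handles this parsing for you; the sketch is only meant to demystify what the stream carries.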
Multimodal models
---------------------------------------------------------------------------
Ollama supports open source multimodal models like [LLaVA](https://ollama.ai/library/llava) in versions 0.1.15 and up. You can bind base64 encoded image data to multimodal-capable models to use as context like this:
```typescript
import { Ollama } from "@langchain/community/llms/ollama";
import * as fs from "node:fs/promises";

const imageData = await fs.readFile("./hotdog.jpg");

const model = new Ollama({
  model: "llava",
  baseUrl: "http://127.0.0.1:11434",
}).bind({
  images: [imageData.toString("base64")],
});

const res = await model.invoke("What's in this image?");

console.log({ res });

/*
  {
    res: ' The image displays a hot dog sitting on top of a bun, which is placed directly on the table. The hot dog has a striped pattern on it and looks ready to be eaten.'
  }
*/
```
#### API Reference:
* [Ollama](https://api.js.langchain.com/classes/langchain_community_llms_ollama.Ollama.html) from `@langchain/community/llms/ollama`
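The `images` option above expects base64-encoded bytes. A minimal sketch of that encoding step with Node's `Buffer`, using a few stand-in bytes (the JPEG magic number) rather than a real file:

```typescript
// Sketch of the base64 encoding step the multimodal example relies on.
// A real call would read image bytes from disk; here four JPEG magic
// bytes stand in for actual image data.
const fakeImageBytes = Buffer.from([0xff, 0xd8, 0xff, 0xe0]);
const encoded = fakeImageBytes.toString("base64");

console.log(encoded); // /9j/4A==
```

Real JPEG data encoded this way always starts with `/9j/`, which is a handy way to sanity-check that you passed the right string.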
PromptLayer OpenAI
==================
LangChain integrates with PromptLayer for logging and debugging prompts and responses. To add support for PromptLayer:
1. Create a PromptLayer account here: [https://promptlayer.com](https://promptlayer.com).
2. Create an API token and pass it either as `promptLayerApiKey` argument in the `PromptLayerOpenAI` constructor or in the `PROMPTLAYER_API_KEY` environment variable.
```typescript
import { PromptLayerOpenAI } from "langchain/llms/openai";

const model = new PromptLayerOpenAI({
  temperature: 0.9,
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
  promptLayerApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.PROMPTLAYER_API_KEY
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
```
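The comments in the example above describe a fallback to environment variables when a key is not passed explicitly. A tiny illustrative sketch of that resolution order (the `resolveKey` helper is hypothetical, not part of the library):

```typescript
// Sketch of constructor-argument-first, environment-variable-second key
// resolution, as described by the comments in the example above.
const resolveKey = (explicit?: string): string | undefined =>
  explicit ?? process.env.PROMPTLAYER_API_KEY;

process.env.PROMPTLAYER_API_KEY = "env-key";

console.log(resolveKey("ctor-key")); // ctor-key (explicit argument wins)
console.log(resolveKey()); // env-key (falls back to the environment)
```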
Azure PromptLayerOpenAI
=======================
LangChain also integrates with PromptLayer for Azure-hosted OpenAI instances:
```typescript
import { PromptLayerOpenAI } from "langchain/llms/openai";

const model = new PromptLayerOpenAI({
  temperature: 0.9,
  azureOpenAIApiKey: "YOUR-AOAI-API-KEY", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
  azureOpenAIApiInstanceName: "YOUR-AOAI-INSTANCE-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
  azureOpenAIApiDeploymentName: "YOUR-AOAI-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
  azureOpenAIApiCompletionsDeploymentName:
    "YOUR-AOAI-COMPLETIONS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME
  azureOpenAIApiEmbeddingsDeploymentName:
    "YOUR-AOAI-EMBEDDINGS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
  azureOpenAIApiVersion: "YOUR-AOAI-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
  azureOpenAIBasePath: "YOUR-AZURE-OPENAI-BASE-PATH", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH
  promptLayerApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.PROMPTLAYER_API_KEY
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
```
The request and the response will be logged in the [PromptLayer dashboard](https://promptlayer.com/home).
> **_Note:_** In streaming mode PromptLayer will not log the response.
RaycastAI
=========
> **Note:** This is a community-built integration and is not officially supported by Raycast.
You can utilize LangChain's RaycastAI class within the [Raycast Environment](https://developers.raycast.com/api-reference/ai) to enhance your Raycast extension with LangChain's capabilities.
* The RaycastAI class is only available in the Raycast environment and only to [Raycast Pro](https://www.raycast.com/pro) users as of August 2023. You may check how to create an extension for Raycast [here](https://developers.raycast.com/).
* There is a rate limit of approximately 10 requests per minute for each Raycast Pro user. If you exceed this limit, you will receive an error. You can set your desired requests-per-minute limit by passing `rateLimitPerMinute` to the `RaycastAI` constructor as shown in the example, as this rate limit may change in the future.
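A sliding-window check is one way to reason about a per-minute limit like the one above. A purely illustrative sketch, not part of the `RaycastAI` class:

```typescript
// Sketch: decide whether a new request stays under a per-minute limit by
// counting previous requests sent within the last 60 seconds.
const underLimit = (
  sentAtMs: number[], // timestamps (ms) of previous requests
  nowMs: number,
  perMinute = 10
): boolean =>
  sentAtMs.filter((t) => nowMs - t < 60_000).length < perMinute;

const now = Date.now();

console.log(underLimit([now - 1000, now - 2000], now)); // true
console.log(underLimit(Array(10).fill(now - 500), now)); // false
```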
> **Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install @langchain/community

# Yarn
yarn add @langchain/community

# pnpm
pnpm add @langchain/community
```
```typescript
import { RaycastAI } from "@langchain/community/llms/raycast";
import { showHUD } from "@raycast/api";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { Tool } from "@langchain/core/tools";

const model = new RaycastAI({
  rateLimitPerMinute: 10, // It is 10 by default so you can omit this line
  model: "gpt-3.5-turbo",
  creativity: 0, // `creativity` is a term used by Raycast which is equivalent to `temperature` in some other LLMs
});

const tools: Tool[] = [
  // Add your tools here
];

export default async function main() {
  // Initialize the agent executor with the RaycastAI model
  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "chat-conversational-react-description",
  });

  const input = `Describe my today's schedule as Gabriel Garcia Marquez would describe it`;

  const answer = await executor.invoke({ input });

  await showHUD(answer.output);
}
```
#### API Reference:
* [RaycastAI](https://api.js.langchain.com/classes/langchain_community_llms_raycast.RaycastAI.html) from `@langchain/community/llms/raycast`
* [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents`
* [Tool](https://api.js.langchain.com/classes/langchain_core_tools.Tool.html) from `@langchain/core/tools`
https://js.langchain.com/v0.1/docs/integrations/llms/togetherai/ | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
Together AI
===========
Here's an example of calling a Together AI model as an LLM:
```typescript
import { TogetherAI } from "@langchain/community/llms/togetherai";
import { PromptTemplate } from "@langchain/core/prompts";

const model = new TogetherAI({
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1",
});
const prompt = PromptTemplate.fromTemplate(`System: You are a helpful assistant.
User: {input}.
Assistant:`);
const chain = prompt.pipe(model);
const response = await chain.invoke({
  input: `Tell me a joke about bears`,
});
console.log("response", response);
/**
response  Sure, here's a bear joke for you:

Why do bears hate shoes so much?

Because they like to run around in their bear feet!
 */
```
#### API Reference:
* [TogetherAI](https://api.js.langchain.com/classes/langchain_community_llms_togetherai.TogetherAI.html) from `@langchain/community/llms/togetherai`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/f49160bd-a6cd-4234-96de-b8106a9e08a7/r)
You can run other models through Together by changing the `model` parameter.
You can find a full list of models on [Together's website](https://api.together.xyz/playground).
### Streaming[](#streaming "Direct link to Streaming")
Together AI also supports streaming. This example demonstrates how to use this feature.
```typescript
import { TogetherAI } from "@langchain/community/llms/togetherai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new TogetherAI({
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1",
  streaming: true,
});
const prompt = ChatPromptTemplate.fromMessages([
  ["ai", "You are a helpful assistant."],
  [
    "human",
    `Tell me a joke about bears.
Assistant:`,
  ],
]);
const chain = prompt.pipe(model);
const result = await chain.stream({});
let fullText = "";
for await (const item of result) {
  console.log("stream item:", item);
  fullText += item;
}
console.log(fullText);
/**
stream item:  Sure
stream item: ,
stream item:  here
stream item: '
stream item: s
...one token per chunk...
stream item:  feet
stream item: !
stream item: </s>

 Sure, here's a light-hearted bear joke for you:

Why do bears hate shoes so much?

Because they like to run around in their bear feet!</s>
 */
```
#### API Reference:
* [TogetherAI](https://api.js.langchain.com/classes/langchain_community_llms_togetherai.TogetherAI.html) from `@langchain/community/llms/togetherai`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/26b5716e-6f00-47c1-aa71-1838a1eddbd1/r)
https://js.langchain.com/v0.1/docs/integrations/llms/replicate/
Replicate
=========
Here's an example of calling a Replicate model as an LLM:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install replicate @langchain/community`
* Yarn: `yarn add replicate @langchain/community`
* pnpm: `pnpm add replicate @langchain/community`
```typescript
import { Replicate } from "@langchain/community/llms/replicate";

const model = new Replicate({
  model:
    "a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
});
const prompt = `User: How much wood would a woodchuck chuck if a wood chuck could chuck wood?
Assistant:`;
const res = await model.invoke(prompt);
console.log({ res });
/*
  {
    res: "I'm happy to help! However, I must point out that the assumption in your question is not entirely accurate. " +
      "Woodchucks, also known as groundhogs, do not actually chuck wood. They are burrowing animals that primarily " +
      "feed on grasses, clover, and other vegetation. They do not have the physical ability to chuck wood.\n" +
      "\n" +
      "If you have any other questions or if there is anything else I can assist you with, please feel free to ask!"
  }
*/
```
#### API Reference:
* [Replicate](https://api.js.langchain.com/classes/langchain_community_llms_replicate.Replicate.html) from `@langchain/community/llms/replicate`
You can run other models through Replicate by changing the `model` parameter.
You can find a full list of models on [Replicate's website](https://replicate.com/explore).
https://js.langchain.com/v0.1/docs/integrations/llms/watsonx_ai/
WatsonX AI
==========
LangChain.js supports integration with IBM WatsonX AI. Check out [WatsonX AI](https://www.ibm.com/products/watsonx-ai) for a list of available models.
Setup[](#setup "Direct link to Setup")
---------------------------------------
You will need to set the following environment variables for using the WatsonX AI API.
1. `IBM_CLOUD_API_KEY` which can be generated via [IBM Cloud](https://cloud.ibm.com/iam/apikeys)
2. `WATSONX_PROJECT_ID` which can be found in your [project's manage tab](https://dataplatform.cloud.ibm.com/projects/?context=wx)
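For a local shell session, the two variables above can be exported directly. The values below are placeholders, not real credentials:

```shell
# Placeholder values — substitute your own credentials
export IBM_CLOUD_API_KEY="<your IBM Cloud API key>"
export WATSONX_PROJECT_ID="<your WatsonX project id>"
```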
Alternatively, these can be set during `WatsonxAI` class instantiation as `ibmCloudApiKey` and `projectId` respectively. For example:

```typescript
const model = new WatsonxAI({
  ibmCloudApiKey: "My secret IBM Cloud API Key",
  projectId: "My secret WatsonX AI Project id",
});
```
Usage[](#usage "Direct link to Usage")
---------------------------------------
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/community`
* Yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`
```typescript
import { WatsonxAI } from "@langchain/community/llms/watsonx_ai";

// Note that modelParameters are optional
const model = new WatsonxAI({
  modelId: "meta-llama/llama-2-70b-chat",
  modelParameters: {
    max_new_tokens: 100,
    min_new_tokens: 0,
    stop_sequences: [],
    repetition_penalty: 1,
  },
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```
#### API Reference:
* [WatsonxAI](https://api.js.langchain.com/classes/langchain_community_llms_watsonx_ai.WatsonxAI.html) from `@langchain/community/llms/watsonx_ai`
https://js.langchain.com/v0.1/docs/integrations/llms/yandex/
YandexGPT
=========
LangChain.js supports calling [YandexGPT](https://cloud.yandex.com/en/services/yandexgpt) LLMs.
Setup[](#setup "Direct link to Setup")
---------------------------------------
First, you should [create a service account](https://cloud.yandex.com/en/docs/iam/operations/sa/create) with the `ai.languageModels.user` role.
Next, you have two authentication options:
* [IAM token](https://cloud.yandex.com/en/docs/iam/operations/iam-token/create-for-sa). You can specify the token in a constructor parameter `iam_token` or in an environment variable `YC_IAM_TOKEN`.
* [API key](https://cloud.yandex.com/en/docs/iam/operations/api-key/create). You can specify the key in a constructor parameter `api_key` or in an environment variable `YC_API_KEY`.
Usage[](#usage "Direct link to Usage")
---------------------------------------
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/yandex`
* Yarn: `yarn add @langchain/yandex`
* pnpm: `pnpm add @langchain/yandex`
```typescript
import { YandexGPT } from "@langchain/yandex/llms";

const model = new YandexGPT();
const res = await model.invoke(['Translate "I love programming" into French.']);
console.log({ res });
```
#### API Reference:
* [YandexGPT](https://api.js.langchain.com/classes/langchain_yandex_llms.YandexGPT.html) from `@langchain/yandex/llms`
https://js.langchain.com/v0.1/docs/integrations/document_loaders/file_loaders/
File Loaders
============
Compatibility
Only available on Node.js.
These loaders are used to load files given a filesystem path or a Blob object.
[
📄️ Folders with multiple files
-------------------------------
This example goes over how to load data from folders with multiple files. The second argument is a map of file extensions to loader factories. Each file will be passed to the matching loader, and the resulting documents will be concatenated together.
](/v0.1/docs/integrations/document_loaders/file_loaders/directory/)
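The extension-to-loader dispatch described above can be sketched in a few lines. The loader factories here are hypothetical stand-ins, not the real `DirectoryLoader` API:

```javascript
// Hypothetical loader factories keyed by file extension; each returns
// an array of "documents" for the matching file.
const loaders = {
  ".txt": (path) => [`text doc from ${path}`],
  ".csv": (path) => [`csv doc from ${path}`],
};

// Dispatch a file path to the loader registered for its extension;
// files with no matching loader yield no documents.
function loadFile(path) {
  const ext = path.slice(path.lastIndexOf("."));
  const factory = loaders[ext];
  return factory ? factory(path) : [];
}

console.log(loadFile("notes.txt")); // ["text doc from notes.txt"]
console.log(loadFile("image.png")); // []
```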
[
📄️ ChatGPT files
-----------------
This example goes over how to load conversations.json from your ChatGPT data export folder. You can get your data export by email by going to: ChatGPT -> (Profile) -> Settings -> Export data -> Confirm export -> Check email.
](/v0.1/docs/integrations/document_loaders/file_loaders/chatgpt/)
[
📄️ CSV files
-------------
This example goes over how to load data from CSV files. The second argument is the column name to extract from the CSV file. One document will be created for each row in the CSV file. When no column is specified, each row is converted into key/value pairs, with one pair per line of the document's pageContent. When a column is specified, one document is created for each row, and the value of the specified column is used as the document's pageContent.
](/v0.1/docs/integrations/document_loaders/file_loaders/csv/)
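The column behavior described above can be illustrated with a small standalone sketch (a hypothetical helper, not the actual CSVLoader code):

```javascript
// Map parsed CSV rows to document-like objects, mirroring the described
// behavior: with a column, that column's value is the pageContent;
// without one, each key/value pair goes on its own line.
function rowsToDocuments(rows, column) {
  return rows.map((row) => ({
    pageContent: column
      ? String(row[column])
      : Object.entries(row)
          .map(([k, v]) => `${k}: ${v}`)
          .join("\n"),
  }));
}

const rows = [
  { id: "1", text: "hello" },
  { id: "2", text: "world" },
];
console.log(rowsToDocuments(rows, "text")[0].pageContent); // "hello"
console.log(rowsToDocuments(rows)[0].pageContent); // "id: 1\ntext: hello"
```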
[
📄️ Docx files
--------------
This example goes over how to load data from docx files.
](/v0.1/docs/integrations/document_loaders/file_loaders/docx/)
[
📄️ EPUB files
--------------
This example goes over how to load data from EPUB files. By default, one document will be created for each chapter in the EPUB file; you can change this behavior by setting the splitChapters option to false.
](/v0.1/docs/integrations/document_loaders/file_loaders/epub/)
[
📄️ JSON files
--------------
The JSON loader uses a JSON pointer to target the keys in your JSON files that you want to extract.
](/v0.1/docs/integrations/document_loaders/file_loaders/json/)
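As a sketch of how a JSON pointer targets a key, here is a minimal RFC 6901-style lookup. This is illustrative only, not the JSONLoader implementation:

```javascript
// Resolve an RFC 6901 JSON Pointer like "/texts/1" against an object.
function resolvePointer(obj, pointer) {
  if (pointer === "") return obj;
  return pointer
    .split("/")
    .slice(1) // drop the leading empty segment before the first "/"
    .map((seg) => seg.replace(/~1/g, "/").replace(/~0/g, "~")) // unescape ~1, ~0
    .reduce((acc, seg) => (acc == null ? undefined : acc[seg]), obj);
}

const data = { texts: ["first", "second"], meta: { title: "notes" } };
console.log(resolvePointer(data, "/texts/1")); // "second"
console.log(resolvePointer(data, "/meta/title")); // "notes"
console.log(resolvePointer(data, "/missing/x")); // undefined
```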
[
📄️ JSONLines files
-------------------
This example goes over how to load data from JSONLines or JSONL files. The second argument is a JSON Pointer to the property to extract from each JSON object in the file. One document will be created for each JSON object in the file.
](/v0.1/docs/integrations/document_loaders/file_loaders/jsonlines/)
[
📄️ Notion markdown export
--------------------------
This example goes over how to load data from your Notion pages exported from the Notion dashboard.
](/v0.1/docs/integrations/document_loaders/file_loaders/notion_markdown/)
[
📄️ Open AI Whisper Audio
-------------------------
Only available on Node.js.
](/v0.1/docs/integrations/document_loaders/file_loaders/openai_whisper_audio/)
[
📄️ PDF files
-------------
This example goes over how to load data from PDF files. By default, one document will be created for each page in the PDF file. You can change this behavior by setting the `splitPages` option to `false`.
](/v0.1/docs/integrations/document_loaders/file_loaders/pdf/)
[
📄️ PPTX files
--------------
This example goes over how to load data from PPTX files. By default, one document will be created for all pages in the PPTX file.
](/v0.1/docs/integrations/document_loaders/file_loaders/pptx/)
[
📄️ Subtitles
-------------
This example goes over how to load data from subtitle files. One document will be created for each subtitle file.
](/v0.1/docs/integrations/document_loaders/file_loaders/subtitles/)
[
📄️ Text files
--------------
This example goes over how to load data from text files.
](/v0.1/docs/integrations/document_loaders/file_loaders/text/)
[
📄️ Unstructured
----------------
This example covers how to use Unstructured to load files of many types. Unstructured currently supports loading text files, PowerPoint presentations, HTML, PDFs, images, and more.
](/v0.1/docs/integrations/document_loaders/file_loaders/unstructured/)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
Writer
======
LangChain.js supports calling [Writer](https://writer.com/) LLMs.
Setup
-----
First, you'll need to sign up for an account at [https://writer.com/](https://writer.com/). Create a service account and note your API key.
Next, you'll need to install the official package as a peer dependency:
* npm
* Yarn
* pnpm
npm install @writerai/writer-sdk
yarn add @writerai/writer-sdk
pnpm add @writerai/writer-sdk
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Usage
-----
```typescript
import { Writer } from "@langchain/community/llms/writer";

const model = new Writer({
  maxTokens: 20,
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.WRITER_API_KEY
  orgId: "YOUR-ORGANIZATION-ID", // In Node.js defaults to process.env.WRITER_ORG_ID
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```
#### API Reference:
* [Writer](https://api.js.langchain.com/classes/langchain_community_llms_writer.Writer.html) from `@langchain/community/llms/writer`
ChatZhipuAI
===========
LangChain.js supports the Zhipu AI family of models.
[https://open.bigmodel.cn/dev/howuse/model](https://open.bigmodel.cn/dev/howuse/model)
Setup
-----
You'll need to sign up for a Zhipu AI API key at [https://open.bigmodel.cn](https://open.bigmodel.cn) and set it as an environment variable named `ZHIPUAI_API_KEY`.
You'll also need to install the following dependencies:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/community jsonwebtoken
yarn add @langchain/community jsonwebtoken
pnpm add @langchain/community jsonwebtoken
Usage
-----
Here's an example:
```typescript
import { ChatZhipuAI } from "@langchain/community/chat_models/zhipuai";
import { HumanMessage } from "@langchain/core/messages";

// Default model is glm-3-turbo
const glm3turbo = new ChatZhipuAI({
  zhipuAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.ZHIPUAI_API_KEY
});

// Use glm-4
const glm4 = new ChatZhipuAI({
  model: "glm-4", // Available models:
  temperature: 1,
  zhipuAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.ZHIPUAI_API_KEY
});

const messages = [new HumanMessage("Hello")];

const res = await glm3turbo.invoke(messages);
/*
AIMessage {
  content: "Hello! How can I help you today? Is there something you would like to talk about or ask about? I'm here to assist you with any questions you may have.",
}
*/

const res2 = await glm4.invoke(messages);
/*
AIMessage {
  text: "Hello! How can I help you today? Is there something you would like to talk about or ask about? I'm here to assist you with any questions you may have.",
}
*/
```
#### API Reference:
* [ChatZhipuAI](https://api.js.langchain.com/classes/langchain_community_chat_models_zhipuai.ChatZhipuAI.html) from `@langchain/community/chat_models/zhipuai`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
Web Loaders
===========
These loaders are used to load web resources.
[
📄️ Cheerio
-----------
This example goes over how to load data from webpages using Cheerio. One document will be created for each webpage.
](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/)
[
📄️ Puppeteer
-------------
Only available on Node.js.
](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/)
[
📄️ Playwright
--------------
Only available on Node.js.
](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/)
[
📄️ Apify Dataset
-----------------
This guide shows how to use Apify with LangChain to load documents from an Apify Dataset.
](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/)
[
📄️ AssemblyAI Audio Transcript
-------------------------------
This covers how to load audio (and video) transcripts as document objects from a file using the AssemblyAI API.
](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/)
[
📄️ Azure Blob Storage Container
--------------------------------
Only available on Node.js.
](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/)
[
📄️ Azure Blob Storage File
---------------------------
Only available on Node.js.
](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/)
[
📄️ Browserbase Loader
----------------------
Description
](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/)
[
📄️ College Confidential
------------------------
This example goes over how to load data from the College Confidential website using Cheerio. One document will be created for each page.
](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/)
[
📄️ Confluence
--------------
Only available on Node.js.
](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/)
[
📄️ Couchbase
-------------
Couchbase is an award-winning distributed NoSQL cloud database that delivers unmatched versatility, performance, scalability, and financial value for all of your cloud, mobile, AI, and edge computing applications.
](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/)
[
📄️ Figma
---------
This example goes over how to load data from a Figma file.
](/v0.1/docs/integrations/document_loaders/web_loaders/figma/)
[
📄️ Firecrawl
-------------
This guide shows how to use Firecrawl with LangChain to load web data into an LLM-ready format.
](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/)
[
📄️ GitBook
-----------
This example goes over how to load data from any GitBook, using Cheerio. One document will be created for each page.
](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/)
[
📄️ GitHub
----------
This example goes over how to load data from a GitHub repository.
](/v0.1/docs/integrations/document_loaders/web_loaders/github/)
[
📄️ Hacker News
---------------
This example goes over how to load data from the Hacker News website using Cheerio. One document will be created for each page.
](/v0.1/docs/integrations/document_loaders/web_loaders/hn/)
[
📄️ IMSDB
---------
This example goes over how to load data from the Internet Movie Script Database (IMSDb) website using Cheerio. One document will be created for each page.
](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/)
[
📄️ Notion API
--------------
This guide will take you through the steps required to load documents from Notion pages and databases using the Notion API.
](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/)
[
📄️ PDF files
-------------
You can use this version of the popular PDFLoader in web environments.
](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/)
[
📄️ Recursive URL Loader
------------------------
When loading content from a website, we may want to load all URLs on a page.
](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/)
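The core idea can be sketched as extracting same-origin links from a page's HTML so they can be queued for loading (a regex-based simplification; the actual loader is more robust and uses proper HTML parsing):

```typescript
// Collect absolute same-origin links found in a page's HTML.
function extractSameOriginLinks(html: string, baseUrl: string): string[] {
  const origin = new URL(baseUrl).origin;
  const links: string[] = [];
  const re = /href="([^"]+)"/g;
  let match: RegExpExecArray | null;
  while ((match = re.exec(html)) !== null) {
    try {
      const url = new URL(match[1], baseUrl); // resolves relative links
      if (url.origin === origin) links.push(url.href);
    } catch {
      // ignore malformed hrefs
    }
  }
  return links;
}
```

A recursive loader would fetch each returned URL, extract its links the same way, and repeat up to a depth limit while tracking visited URLs.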
[
📄️ S3 File
-----------
Only available on Node.js.
](/v0.1/docs/integrations/document_loaders/web_loaders/s3/)
[
📄️ SearchApi Loader
--------------------
This guide shows how to use SearchApi with LangChain to load web search results.
](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/)
[
📄️ SerpAPI Loader
------------------
This guide shows how to use SerpAPI with LangChain to load web search results.
](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/)
[
📄️ Sitemap Loader
------------------
This guide goes over how to use the SitemapLoader class to load sitemaps into Documents.
](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/)
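At its core, loading a sitemap means pulling page URLs out of its `<loc>` entries. A minimal, library-free sketch (the real SitemapLoader also handles nested sitemap indexes and fetches each page):

```typescript
// Extract page URLs from a sitemap XML string via its <loc> entries.
function extractSitemapUrls(xml: string): string[] {
  const urls: string[] = [];
  const re = /<loc>([^<]+)<\/loc>/g;
  let match: RegExpExecArray | null;
  while ((match = re.exec(xml)) !== null) {
    urls.push(match[1].trim());
  }
  return urls;
}
```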
[
📄️ Sonix Audio
---------------
Only available on Node.js.
](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/)
[
📄️ Blockchain Data
-------------------
This example shows how to load blockchain data, including NFT metadata and transactions for a contract address, via the sort.xyz SQL API.
](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/)
[
📄️ YouTube transcripts
-----------------------
This covers how to load youtube transcript into LangChain documents.
](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/)
ChatAlibabaTongyi
=================
LangChain.js supports the Alibaba Qwen family of models.
Setup
-----
You'll need to sign up for an Alibaba API key and set it as an environment variable named `ALIBABA_API_KEY`.
Then, you'll need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
Usage
-----
Here's an example:
```typescript
import { ChatAlibabaTongyi } from "@langchain/community/chat_models/alibaba_tongyi";
import { HumanMessage } from "@langchain/core/messages";

// Default model is qwen-turbo
const qwenTurbo = new ChatAlibabaTongyi({
  alibabaApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.ALIBABA_API_KEY
});

// Use qwen-plus
const qwenPlus = new ChatAlibabaTongyi({
  model: "qwen-plus", // Available models: qwen-turbo, qwen-plus, qwen-max
  temperature: 1,
  alibabaApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.ALIBABA_API_KEY
});

const messages = [new HumanMessage("Hello")];

const res = await qwenTurbo.invoke(messages);
/*
AIMessage {
  content: "Hello! How can I help you today? Is there something you would like to talk about or ask about? I'm here to assist you with any questions you may have.",
}
*/

const res2 = await qwenPlus.invoke(messages);
/*
AIMessage {
  content: "Hello! How can I help you today? Is there something you would like to talk about or ask about? I'm here to assist you with any questions you may have.",
}
*/
```
#### API Reference:
* [ChatAlibabaTongyi](https://api.js.langchain.com/classes/langchain_community_chat_models_alibaba_tongyi.ChatAlibabaTongyi.html) from `@langchain/community/chat_models/alibaba_tongyi`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc.

https://js.langchain.com/v0.1/docs/integrations/chat/baidu_wenxin/
ChatBaiduWenxin
===============
LangChain.js supports Baidu's ERNIE-Bot family of models. Here's an example:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
Available models: `ERNIE-Bot`, `ERNIE-Bot-turbo`, `ERNIE-Bot-4`, `ERNIE-Speed-8K`, `ERNIE-Speed-128K`, `ERNIE-4.0-8K`, `ERNIE-4.0-8K-Preview`, `ERNIE-3.5-8K`, `ERNIE-3.5-8K-Preview`, `ERNIE-Lite-8K`, `ERNIE-Tiny-8K`, `ERNIE-Character-8K`, `ERNIE Speed-AppBuilder`
```typescript
import { ChatBaiduWenxin } from "@langchain/community/chat_models/baiduwenxin";
import { HumanMessage } from "@langchain/core/messages";

// Default model is ERNIE-Bot-turbo
const ernieTurbo = new ChatBaiduWenxin({
  baiduApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.BAIDU_API_KEY
  baiduSecretKey: "YOUR-SECRET-KEY", // In Node.js defaults to process.env.BAIDU_SECRET_KEY
});

// Use ERNIE-Bot
const ernie = new ChatBaiduWenxin({
  model: "ERNIE-Bot", // Available models are shown above
  temperature: 1,
  baiduApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.BAIDU_API_KEY
  baiduSecretKey: "YOUR-SECRET-KEY", // In Node.js defaults to process.env.BAIDU_SECRET_KEY
});

const messages = [new HumanMessage("Hello")];

let res = await ernieTurbo.invoke(messages);
/*
AIChatMessage {
  text: 'Hello! How may I assist you today?',
  name: undefined,
  additional_kwargs: {}
}
*/

res = await ernie.invoke(messages);
/*
AIChatMessage {
  text: 'Hello! How may I assist you today?',
  name: undefined,
  additional_kwargs: {}
}
*/
```
#### API Reference:
* [ChatBaiduWenxin](https://api.js.langchain.com/classes/langchain_community_chat_models_baiduwenxin.ChatBaiduWenxin.html) from `@langchain/community/chat_models/baiduwenxin`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
https://js.langchain.com/v0.1/docs/integrations/chat/bedrock/
BedrockChat
===========
> [Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that makes Foundation Models (FMs) from leading AI startups and Amazon available via an API. You can choose from a wide range of FMs to find the model that is best suited for your use case.
Setup
-----
You'll need to install the `@langchain/community` package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
Then, you'll need to install a few official AWS packages as peer dependencies:
```bash
npm install @aws-crypto/sha256-js @aws-sdk/credential-provider-node @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types
# or
yarn add @aws-crypto/sha256-js @aws-sdk/credential-provider-node @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types
# or
pnpm add @aws-crypto/sha256-js @aws-sdk/credential-provider-node @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types
```
You can also use BedrockChat in web environments such as Edge functions or Cloudflare Workers by omitting the `@aws-sdk/credential-provider-node` dependency and using the `web` entrypoint:
```bash
npm install @aws-crypto/sha256-js @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types
# or
yarn add @aws-crypto/sha256-js @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types
# or
pnpm add @aws-crypto/sha256-js @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types
```
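In web environments the default Node.js credential provider chain is unavailable, so you'll generally pass credentials explicitly. A minimal sketch, assuming credentials are supplied via the environment variable names used in the comments of the example further below:

```typescript
// Web entrypoint: no dependency on @aws-sdk/credential-provider-node,
// so credentials must be supplied explicitly.
import { BedrockChat } from "@langchain/community/chat_models/bedrock/web";

const model = new BedrockChat({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
  credentials: {
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  },
});
```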
Usage
-----
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
Currently, only Anthropic, Cohere, and Mistral models are supported with the chat model integration. For foundation models from AI21 or Amazon, see [the text generation Bedrock variant](/v0.1/docs/integrations/llms/bedrock/).
```typescript
import { BedrockChat } from "@langchain/community/chat_models/bedrock";
// Or, from web environments:
// import { BedrockChat } from "@langchain/community/chat_models/bedrock/web";
import { HumanMessage } from "@langchain/core/messages";

// If no credentials are provided, the default credentials from
// @aws-sdk/credential-provider-node will be used.
// modelKwargs are additional parameters passed to the model when it
// is invoked.
const model = new BedrockChat({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
  // endpointUrl: "custom.amazonaws.com",
  // credentials: {
  //   accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
  //   secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  // },
  // modelKwargs: {
  //   anthropic_version: "bedrock-2023-05-31",
  // },
});

// Other model names include:
// "mistral.mistral-7b-instruct-v0:2"
// "mistral.mixtral-8x7b-instruct-v0:1"
//
// For a full list, see the Bedrock page in AWS.

const res = await model.invoke([
  new HumanMessage({ content: "Tell me a joke" }),
]);
console.log(res);
/*
  AIMessage {
    content: "Here's a silly joke for you:\n" +
      '\n' +
      "Why can't a bicycle stand up by itself?\n" +
      "Because it's two-tired!",
    name: undefined,
    additional_kwargs: { id: 'msg_01NYN7Rf39k4cgurqpZWYyDh' }
  }
*/

const stream = await model.stream([
  new HumanMessage({ content: "Tell me a joke" }),
]);

for await (const chunk of stream) {
  console.log(chunk.content);
}
/*
  Here
  's
   a
   silly
   joke
   for
   you
  :
  Why
   can
  't
   a
   bicycle
   stand
   up
   by
   itself
  ?
  Because
   it
  's
   two
  -
  tired
  !
*/
```
#### API Reference:
* [BedrockChat](https://api.js.langchain.com/classes/langchain_community_chat_models_bedrock.BedrockChat.html) from `@langchain/community/chat_models/bedrock`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
Multimodal inputs
-----------------
tip
Multimodal inputs are currently only supported by Anthropic Claude-3 models.
Anthropic Claude-3 models hosted on Bedrock have multimodal capabilities and can reason about images. Here's an example:
```typescript
import * as fs from "node:fs/promises";

import { BedrockChat } from "@langchain/community/chat_models/bedrock";
// Or, from web environments:
// import { BedrockChat } from "@langchain/community/chat_models/bedrock/web";
import { HumanMessage } from "@langchain/core/messages";

// If no credentials are provided, the default credentials from
// @aws-sdk/credential-provider-node will be used.
// modelKwargs are additional parameters passed to the model when it
// is invoked.
const model = new BedrockChat({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
  // endpointUrl: "custom.amazonaws.com",
  // credentials: {
  //   accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
  //   secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  // },
  // modelKwargs: {
  //   anthropic_version: "bedrock-2023-05-31",
  // },
});

const imageData = await fs.readFile("./hotdog.jpg");

const res = await model.invoke([
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "What's in this image?",
      },
      {
        type: "image_url",
        image_url: {
          url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
        },
      },
    ],
  }),
]);
console.log(res);
/*
  AIMessage {
    content: 'The image shows a hot dog or frankfurter. It has a reddish-pink sausage filling encased in a light brown bread-like bun. The hot dog bun is split open, revealing the sausage inside. This classic fast food item is a popular snack or meal, often served at events like baseball games or cookouts. The hot dog appears to be against a plain white background, allowing the details and textures of the food item to be clearly visible.',
    name: undefined,
    additional_kwargs: { id: 'msg_01XrLPL9vCb82U3Wrrpza18p' }
  }
*/
```
#### API Reference:
* [BedrockChat](https://api.js.langchain.com/classes/langchain_community_chat_models_bedrock.BedrockChat.html) from `@langchain/community/chat_models/bedrock`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
https://js.langchain.com/v0.1/docs/integrations/chat/cohere/
ChatCohere
==========
info
The Cohere Chat API is still in beta. This means Cohere may make breaking changes at any time.
Setup
-----
In order to use the LangChain.js Cohere integration you'll need an API key. You can sign up for a Cohere account and create an API key [here](https://dashboard.cohere.com/welcome/register).
You'll first need to install the [`@langchain/cohere`](https://www.npmjs.com/package/@langchain/cohere) package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/cohere
# or
yarn add @langchain/cohere
# or
pnpm add @langchain/cohere
```
Usage
-----
```typescript
import { ChatCohere } from "@langchain/cohere";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatCohere({
  apiKey: process.env.COHERE_API_KEY, // Default
  model: "command", // Default
});
const prompt = ChatPromptTemplate.fromMessages([
  ["ai", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const chain = prompt.pipe(model);
const response = await chain.invoke({
  input: "Hello there friend!",
});
console.log("response", response);
/**
response AIMessage {
  lc_serializable: true,
  lc_namespace: [ 'langchain_core', 'messages' ],
  content: "Hi there! I'm not your friend, but I'm happy to help you in whatever way I can today. How are you doing? Is there anything I can assist you with? I am an AI chatbot capable of generating thorough responses, and I'm designed to have helpful, inclusive conversations with users. \n" +
    '\n' +
    "If you have any questions, feel free to ask away, and I'll do my best to provide you with helpful responses. \n" +
    '\n' +
    'Would you like me to help you with anything in particular right now?',
  additional_kwargs: {
    response_id: 'c6baa057-ef94-4bb0-9c25-3a424963a074',
    generationId: 'd824fcdc-b922-4ae6-8d45-7b65a21cdd6a',
    token_count: {
      prompt_tokens: 66,
      response_tokens: 104,
      total_tokens: 170,
      billed_tokens: 159
    },
    meta: { api_version: [Object], billed_units: [Object] },
    tool_inputs: null
  }
}
 */
```
#### API Reference:
* [ChatCohere](https://api.js.langchain.com/classes/langchain_cohere.ChatCohere.html) from `@langchain/cohere`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/69ccd2aa-b651-4f07-9223-ecc0b77e645e/r)
### Streaming
Cohere's API also supports streaming token responses. The example below demonstrates how to use this feature.
```typescript
import { ChatCohere } from "@langchain/cohere";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatCohere({
  apiKey: process.env.COHERE_API_KEY, // Default
  model: "command", // Default
});
const prompt = ChatPromptTemplate.fromMessages([
  ["ai", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const outputParser = new StringOutputParser();
const chain = prompt.pipe(model).pipe(outputParser);
const response = await chain.stream({
  input: "Why is the sky blue? Be concise with your answer.",
});
let streamTokens = "";
let streamIters = 0;
for await (const item of response) {
  streamTokens += item;
  streamIters += 1;
}
console.log("stream tokens:", streamTokens);
console.log("stream iters:", streamIters);
/**
stream item:
stream item:  Hello! I'm here to help answer any questions you
stream item:  might have or assist you with any task you'd like to
stream item:  accomplish. I can provide information
stream item:  on a wide range of topics
stream item: , from math and science to history and literature. I can
stream item:  also help you manage your schedule, set reminders, and
stream item:  much more. Is there something specific you need help with? Let
stream item:  me know!
stream item:
 */
```
#### API Reference:
* [ChatCohere](https://api.js.langchain.com/classes/langchain_cohere.ChatCohere.html) from `@langchain/cohere`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/36ae0564-b096-4ec1-9318-1f82fe705fe8/r)
### Stateful conversation API
Cohere's chat API supports stateful conversations. This means the API stores previous chat messages which can be accessed by passing in a `conversation_id` field. The example below demonstrates how to use this feature.
```typescript
import { ChatCohere } from "@langchain/cohere";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatCohere({
  apiKey: process.env.COHERE_API_KEY, // Default
  model: "command", // Default
});
const conversationId = `demo_test_id-${Math.random()}`;

const response = await model.invoke(
  [new HumanMessage("Tell me a joke about bears.")],
  {
    conversationId,
  }
);
console.log("response: ", response.content);
/**
response:  Why did the bear go to the dentist?

Because she had bear teeth!

Hope you found that joke about bears to be a little bit tooth-arious!

Would you like me to tell you another one? I could also provide you with a list of jokes about bears if you prefer.

Just let me know if you have any other jokes or topics you'd like to hear about!
 */

const response2 = await model.invoke(
  [new HumanMessage("What was the subject of my last question?")],
  {
    conversationId,
  }
);
console.log("response2: ", response2.content);
/**
response2:  Your last question was about bears. You asked me to tell you a joke about bears, which I am programmed to assist with.

Would you like me to assist you with anything else bear-related? I can provide you with facts about bears, stories about bears, or even list other topics that might be of interest to you.

Please let me know if you have any other questions and I will do my best to provide you with a response.
 */
```
#### API Reference:
* [ChatCohere](https://api.js.langchain.com/classes/langchain_cohere.ChatCohere.html) from `@langchain/cohere`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
info
You can see the LangSmith traces from this example [here](https://smith.langchain.com/public/8e67b05a-4e63-414e-ac91-a91acf21b262/r) and [here](https://smith.langchain.com/public/50fabc25-46fe-4727-a59c-7e4eb0de8e70/r)
### RAG[](#rag "Direct link to RAG")
Cohere also comes out of the box with RAG support. You can pass in documents as context to the API request and Cohere's models will use them when generating responses.
```typescript
import { ChatCohere } from "@langchain/cohere";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatCohere({
  apiKey: process.env.COHERE_API_KEY, // Default
  model: "command", // Default
});

const documents = [
  {
    title: "Harrison's work",
    snippet: "Harrison worked at Kensho as an engineer.",
  },
  {
    title: "Harrison's work duration",
    snippet: "Harrison worked at Kensho for 3 years.",
  },
  {
    title: "Polar bears in the Appalachian Mountains",
    snippet:
      "Polar bears have surprisingly adapted to the Appalachian Mountains, thriving in the diverse, forested terrain despite their traditional arctic habitat. This unique situation has sparked significant interest and study in climate adaptability and wildlife behavior.",
  },
];

const response = await model.invoke(
  [new HumanMessage("Where did Harrison work and for how long?")],
  {
    documents,
  }
);
console.log("response: ", response.content);
/**
response: Harrison worked as an engineer at Kensho for about 3 years.
 */
```
#### API Reference:
* [ChatCohere](https://api.js.langchain.com/classes/langchain_cohere.ChatCohere.html) from `@langchain/cohere`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/de71fffe-6f01-4c36-9b49-40d1bc87dea3/r)
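If your source text lives in LangChain-style documents (`pageContent` plus `metadata`), it needs to be adapted into the `{ title, snippet }` shape shown above before being passed to the API. A minimal sketch, where `toCohereDocuments` is a hypothetical helper (not part of the library) and we assume `metadata` may or may not carry a `title`:

```typescript
// Hypothetical helper: adapt LangChain-style documents ({ pageContent, metadata })
// into the { title, snippet } shape the Cohere chat API expects.
interface LangChainStyleDoc {
  pageContent: string;
  metadata: Record<string, string | undefined>;
}

interface CohereDoc {
  title: string;
  snippet: string;
}

function toCohereDocuments(docs: LangChainStyleDoc[]): CohereDoc[] {
  return docs.map((doc, i) => ({
    // Fall back to a generated title when the metadata doesn't carry one.
    title: doc.metadata.title ?? `Document ${i + 1}`,
    snippet: doc.pageContent,
  }));
}

const documents = toCohereDocuments([
  {
    pageContent: "Harrison worked at Kensho as an engineer.",
    metadata: { title: "Harrison's work" },
  },
  {
    pageContent: "Harrison worked at Kensho for 3 years.",
    metadata: {},
  },
]);
console.log(documents);
```

The resulting array can then be passed as the `documents` option of an `invoke` call, as in the example above.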
### Connectors[](#connectors "Direct link to Connectors")
The API also supports connectors, which supply context from sources other than static documents. An example is Cohere's `web-search` connector, which lets you pass in a query; the API then searches the web for relevant documents. The example below demonstrates how to use this feature.
```typescript
import { ChatCohere } from "@langchain/cohere";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatCohere({
  apiKey: process.env.COHERE_API_KEY, // Default
  model: "command", // Default
});

const response = await model.invoke(
  [new HumanMessage("How tall are the largest penguins?")],
  {
    connectors: [{ id: "web-search" }],
  }
);
console.log("response: ", JSON.stringify(response, null, 2));
/**
response: {
  "lc": 1,
  "type": "constructor",
  "id": ["langchain_core", "messages", "AIMessage"],
  "kwargs": {
    "content": "The tallest penguin species currently in existence is the Emperor Penguin, with a height of 110cm to the top of their head or 115cm to the tip of their beak. This is equivalent to being approximately 3 feet and 7 inches tall.\n\nA fossil of an Anthropornis penguin was found in New Zealand and is suspected to have been even taller at 1.7 metres, though this is uncertain as the fossil is only known from preserved arm and leg bones. The height of a closely related species, Kumimanu biceae, has been estimated at 1.77 metres.\n\nDid you know that because larger-bodied penguins can hold their breath for longer, the colossus penguin could have stayed underwater for 40 minutes or more?",
    "additional_kwargs": {
      "response_id": "a3567a59-2377-439d-894f-0309f7fea1de",
      "generationId": "65dc5b1b-6099-44c4-8338-50eed0d427c5",
      "token_count": {
        "prompt_tokens": 1394,
        "response_tokens": 149,
        "total_tokens": 1543,
        "billed_tokens": 159
      },
      "meta": {
        "api_version": { "version": "1" },
        "billed_units": { "input_tokens": 10, "output_tokens": 149 }
      },
      "citations": [
        { "start": 58, "end": 73, "text": "Emperor Penguin", "documentIds": ["web-search_3:2", "web-search_4:10"] },
        { "start": 92, "end": 157, "text": "110cm to the top of their head or 115cm to the tip of their beak.", "documentIds": ["web-search_4:10"] },
        { "start": 200, "end": 225, "text": "3 feet and 7 inches tall.", "documentIds": ["web-search_3:2", "web-search_4:10"] },
        { "start": 242, "end": 262, "text": "Anthropornis penguin", "documentIds": ["web-search_9:4"] },
        { "start": 276, "end": 287, "text": "New Zealand", "documentIds": ["web-search_9:4"] },
        { "start": 333, "end": 343, "text": "1.7 metres", "documentIds": ["web-search_9:4"] },
        { "start": 403, "end": 431, "text": "preserved arm and leg bones.", "documentIds": ["web-search_9:4"] },
        { "start": 473, "end": 488, "text": "Kumimanu biceae", "documentIds": ["web-search_9:4"] },
        { "start": 512, "end": 524, "text": "1.77 metres.", "documentIds": ["web-search_9:4"] },
        { "start": 613, "end": 629, "text": "colossus penguin", "documentIds": ["web-search_3:2"] },
        { "start": 663, "end": 681, "text": "40 minutes or more", "documentIds": ["web-search_3:2"] }
      ],
      "documents": [
        {
          "id": "web-search_3:2",
          "snippet": " By comparison, the largest species of penguin alive today, the emperor penguin, is \"only\" about 4 feet tall and can weigh as much as 100 pounds.\n\nInterestingly, because larger bodied penguins can hold their breath for longer, the colossus penguin probably could have stayed underwater for 40 minutes or more. It boggles the mind to imagine the kinds of huge, deep sea fish this mammoth bird might have been capable of hunting.\n\nThe fossil was found at the La Meseta formation on Seymour Island, an island in a chain of 16 major islands around the tip of the Graham Land on the Antarctic Peninsula.",
          "title": "Giant 6-Foot-8 Penguin Discovered in Antarctica",
          "url": "https://www.treehugger.com/giant-foot-penguin-discovered-in-antarctica-4864169"
        },
        {
          "id": "web-search_4:10",
          "snippet": "\n\nWhat is the Tallest Penguin?\n\nThe tallest penguin is the Emperor Penguin which is 110cm to the top of their head or 115cm to the tip of their beak.\n\nHow Tall Are Emperor Penguins in Feet?\n\nAn Emperor Penguin is about 3 feet and 7 inches to the top of its head. They are the largest penguin species currently in existence.\n\nHow Much Do Penguins Weigh in Pounds?\n\nPenguins weigh between 2.5lbs for the smallest species, the Little Penguin, up to 82lbs for the largest species, the Emperor Penguin.\n\nDr. Jackie Symmons is a professional ecologist with a Ph.D. in Ecology and Wildlife Management from Bangor University and over 25 years of experience delivering conservation projects.",
          "title": "How Big Are Penguins? [Height & Weight of Every Species] - Polar Guidebook",
          "url": "https://polarguidebook.com/how-big-are-penguins/"
        },
        {
          "id": "web-search_9:4",
          "snippet": "\n\nA fossil of an Anthropornis penguin found on the island may have been even taller, but this is likely to be an exception. The majority of these penguins were only 1.7 metres tall and weighed around 80 kilogrammes.\n\nWhile Palaeeudyptes klekowskii remains the tallest ever penguin, it is no longer the heaviest. At an estimated 150 kilogrammes, Kumimanu fordycei would have been around three times heavier than any living penguin.\n\nWhile it's uncertain how tall the species was, the height of a closely related species, Kumimanu biceae, has been estimated at 1.77 metres.\n\nThese measurements, however, are all open for debate. Many fossil penguins are only known from preserved arm and leg bones, rather than complete skeletons.",
          "title": "The largest ever penguin species has been discovered in New Zealand | Natural History Museum",
          "url": "https://www.nhm.ac.uk/discover/news/2023/february/largest-ever-penguin-species-discovered-new-zealand.html"
        }
      ],
      "searchResults": [
        {
          "searchQuery": {
            "text": "largest penguin species height",
            "generationId": "908fe321-5d27-48c4-bdb6-493be5687344"
          },
          "documentIds": ["web-search_3:2", "web-search_4:10", "web-search_9:4"],
          "connector": { "id": "web-search" }
        }
      ],
      "tool_inputs": null,
      "searchQueries": [
        {
          "text": "largest penguin species height",
          "generationId": "908fe321-5d27-48c4-bdb6-493be5687344"
        }
      ]
    }
  }
}
 */
```
#### API Reference:
* [ChatCohere](https://api.js.langchain.com/classes/langchain_cohere.ChatCohere.html) from `@langchain/cohere`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/9a6f996b-cff2-4f3f-916a-640469a5a963/r)
We can see in the `kwargs` object that the API request did a few things:
* Performed a search query, storing the result data in the `searchQueries` and `searchResults` fields. In the `searchQueries` field we see they rephrased our query to `largest penguin species height` for better results.
* Generated three documents from the search query.
* Generated a list of citations.
* Generated a final response based on the above actions & content.
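Because the citation offsets and `documentIds` come back as plain data, you can post-process them yourself. The sketch below resolves each citation to the URLs of the documents it cites; the field names mirror the response payload above, but `citationUrls` is a hypothetical helper, not part of the library:

```typescript
// Resolve each citation's documentIds to the URLs of the cited documents.
// Field names (start, end, text, documentIds, id, title, url) mirror the
// response payload shown above.
interface Citation {
  start: number;
  end: number;
  text: string;
  documentIds: string[];
}

interface SourceDocument {
  id: string;
  title: string;
  url: string;
}

function citationUrls(
  citations: Citation[],
  documents: SourceDocument[]
): { text: string; urls: string[] }[] {
  const byId = new Map(documents.map((d) => [d.id, d]));
  return citations.map((c) => ({
    text: c.text,
    // Drop ids that don't resolve to a returned document.
    urls: c.documentIds
      .map((id) => byId.get(id)?.url)
      .filter((u): u is string => u !== undefined),
  }));
}

const cited = citationUrls(
  [
    {
      start: 58,
      end: 73,
      text: "Emperor Penguin",
      documentIds: ["web-search_4:10"],
    },
  ],
  [
    {
      id: "web-search_4:10",
      title: "How Big Are Penguins?",
      url: "https://polarguidebook.com/how-big-are-penguins/",
    },
  ]
);
console.log(cited);
```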
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc.
ChatCloudflareWorkersAI
=======================
Workers AI allows you to run machine learning models on the Cloudflare network from your own code.
Usage[](#usage "Direct link to Usage")
---------------------------------------
You'll first need to install the LangChain Cloudflare integration package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/cloudflare
# or
yarn add @langchain/cloudflare
# or
pnpm add @langchain/cloudflare
```
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
```typescript
import { ChatCloudflareWorkersAI } from "@langchain/cloudflare";

const model = new ChatCloudflareWorkersAI({
  model: "@cf/meta/llama-2-7b-chat-int8", // Default value
  cloudflareAccountId: process.env.CLOUDFLARE_ACCOUNT_ID,
  cloudflareApiToken: process.env.CLOUDFLARE_API_TOKEN,
  // Pass a custom base URL to use Cloudflare AI Gateway
  // baseUrl: `https://gateway.ai.cloudflare.com/v1/{YOUR_ACCOUNT_ID}/{GATEWAY_NAME}/workers-ai/`,
});

const response = await model.invoke([
  ["system", "You are a helpful assistant that translates English to German."],
  ["human", `Translate "I love programming".`],
]);
console.log(response);
/*
AIMessage {
  content: `Sure! Here's the translation of "I love programming" into German:\n` +
    '\n' +
    '"Ich liebe Programmieren."\n' +
    '\n' +
    'In this sentence, "Ich" means "I," "liebe" means "love," and "Programmieren" means "programming."',
  additional_kwargs: {}
}
*/

const stream = await model.stream([
  ["system", "You are a helpful assistant that translates English to German."],
  ["human", `Translate "I love programming".`],
]);
for await (const chunk of stream) {
  console.log(chunk);
}
/*
  AIMessageChunk { content: 'S', additional_kwargs: {} }
  AIMessageChunk { content: 'ure', additional_kwargs: {} }
  AIMessageChunk { content: '!', additional_kwargs: {} }
  AIMessageChunk { content: ' Here', additional_kwargs: {} }
  ...
*/
```
#### API Reference:
* [ChatCloudflareWorkersAI](https://api.js.langchain.com/classes/langchain_cloudflare.ChatCloudflareWorkersAI.html) from `@langchain/cloudflare`
Fake LLM
========
LangChain provides a fake LLM chat model for testing purposes. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.
Usage[](#usage "Direct link to Usage")
---------------------------------------
```typescript
import { FakeListChatModel } from "@langchain/core/utils/testing";
import { HumanMessage } from "@langchain/core/messages";
import { StringOutputParser } from "@langchain/core/output_parsers";

/**
 * The FakeListChatModel can be used to simulate ordered predefined responses.
 */
const chat = new FakeListChatModel({
  responses: ["I'll callback later.", "You 'console' them!"],
});
const firstMessage = new HumanMessage("You want to hear a JavaScript joke?");
const secondMessage = new HumanMessage(
  "How do you cheer up a JavaScript developer?"
);
const firstResponse = await chat.invoke([firstMessage]);
const secondResponse = await chat.invoke([secondMessage]);
console.log({ firstResponse });
console.log({ secondResponse });

/**
 * The FakeListChatModel can also be used to simulate streamed responses.
 */
const stream = await chat
  .pipe(new StringOutputParser())
  .stream(`You want to hear a JavaScript joke?`);
const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
}
console.log(chunks.join(""));

/**
 * The FakeListChatModel can also be used to simulate delays in either
 * synchronous or streamed responses.
 */
const slowChat = new FakeListChatModel({
  responses: ["Because Oct 31 equals Dec 25", "You 'console' them!"],
  sleep: 1000,
});
const thirdMessage = new HumanMessage(
  "Why do programmers always mix up Halloween and Christmas?"
);
const slowResponse = await slowChat.invoke([thirdMessage]);
console.log({ slowResponse });

const slowStream = await slowChat
  .pipe(new StringOutputParser())
  .stream("How do you cheer up a JavaScript developer?");
const slowChunks = [];
for await (const chunk of slowStream) {
  slowChunks.push(chunk);
}
console.log(slowChunks.join(""));
```
#### API Reference:
* [FakeListChatModel](https://api.js.langchain.com/classes/langchain_core_utils_testing.FakeListChatModel.html) from `@langchain/core/utils/testing`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
ChatFireworks
=============
You can use models provided by Fireworks AI as follows:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  temperature: 0.9,
  // In Node.js defaults to process.env.FIREWORKS_API_KEY
  apiKey: "YOUR-API-KEY",
});
```
#### API Reference:
* [ChatFireworks](https://api.js.langchain.com/classes/langchain_community_chat_models_fireworks.ChatFireworks.html) from `@langchain/community/chat_models/fireworks`
Behind the scenes, Fireworks AI uses the OpenAI SDK and OpenAI compatible API, with some caveats:
* Certain properties are not supported by the Fireworks API, see [here](https://readme.fireworks.ai/docs/openai-compatibility#api-compatibility).
* Generation using multiple prompts is not supported.
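Since generation with multiple prompts isn't supported in one call, batch work can be emulated by issuing one request per prompt. A minimal sketch; `SimpleChatModel` and `invokeSequentially` are stand-ins for illustration (the stub below plays the role of a `ChatFireworks` instance), not real LangChain types:

```typescript
// A stand-in interface: anything with an async invoke(prompt) method.
interface SimpleChatModel {
  invoke(prompt: string): Promise<string>;
}

async function invokeSequentially(
  model: SimpleChatModel,
  prompts: string[]
): Promise<string[]> {
  const results: string[] = [];
  for (const prompt of prompts) {
    // One request at a time keeps each call within the single-prompt constraint.
    results.push(await model.invoke(prompt));
  }
  return results;
}

// A stub standing in for a real model:
const stub: SimpleChatModel = {
  invoke: async (prompt) => `echo: ${prompt}`,
};
const results = await invokeSequentially(stub, ["first", "second"]);
console.log(results); // top-level await requires an ES module context
```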
* * *
#### Help us out by providing feedback on this documentation page:
[
Previous
Fake LLM
](/v0.1/docs/integrations/chat/fake/)[
Next
Friendli
](/v0.1/docs/integrations/chat/friendli/)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/integrations/chat/friendli/ | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
Friendli
========
> [Friendli](https://friendli.ai/) enhances AI application performance and optimizes cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads.
This tutorial guides you through integrating `ChatFriendli` for chat applications using LangChain. `ChatFriendli` offers a flexible approach to generating conversational AI responses, supporting both synchronous and asynchronous calls.
Setup[](#setup "Direct link to Setup")
---------------------------------------
Ensure the `@langchain/community` package is installed.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
Sign in to [Friendli Suite](https://suite.friendli.ai/) to create a Personal Access Token, and set it as the `FRIENDLI_TOKEN` environment variable. You can optionally set your team ID as the `FRIENDLI_TEAM` environment variable.
You can initialize a Friendli chat model by selecting the model you want to use. The default model is `llama-2-13b-chat`. You can check the available models at [docs.friendli.ai](https://docs.friendli.ai/guides/serverless_endpoints/pricing#text-generation-models).
Usage[](#usage "Direct link to Usage")
---------------------------------------
```typescript
import { ChatFriendli } from "@langchain/community/chat_models/friendli";

const model = new ChatFriendli({
  model: "llama-2-13b-chat", // Default value
  friendliToken: process.env.FRIENDLI_TOKEN,
  friendliTeam: process.env.FRIENDLI_TEAM,
  maxTokens: 800,
  temperature: 0.9,
  topP: 0.9,
  frequencyPenalty: 0,
  stop: [],
});

const response = await model.invoke(
  "Draft a cover letter for a role in software engineering."
);

console.log(response.content);
/*
Dear [Hiring Manager],

I am excited to apply for the role of Software Engineer at [Company Name]. With my passion for innovation, creativity, and problem-solving, I am confident that I would be a valuable asset to your team.

As a highly motivated and detail-oriented individual, ...
*/

const stream = await model.stream(
  "Draft a cover letter for a role in software engineering."
);

for await (const chunk of stream) {
  console.log(chunk.content);
}
/*
Dear
 [
Hiring
...
[Your Name]
*/
```
#### API Reference:
* [ChatFriendli](https://api.js.langchain.com/classes/langchain_community_chat_models_friendli.ChatFriendli.html) from `@langchain/community/chat_models/friendli`
https://js.langchain.com/v0.1/docs/integrations/chat/llama_cpp/
Llama CPP
=========
Compatibility
Only available on Node.js.
This module is based on the [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) Node.js bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp), allowing you to work with a locally running LLM. This lets you use a much smaller quantized model capable of running on a laptop, ideal for testing and sketching out ideas without running up a bill!
Setup[](#setup "Direct link to Setup")
---------------------------------------
You'll need to install the [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) module to communicate with your local model.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install -S node-llama-cpp @langchain/community
# or
yarn add node-llama-cpp @langchain/community
# or
pnpm add node-llama-cpp @langchain/community
```
You will also need a local Llama 2 model (or a model supported by [node-llama-cpp](https://github.com/withcatai/node-llama-cpp)). You will need to pass the path to this model to the LlamaCpp module as a part of the parameters (see example).
Out of the box, `node-llama-cpp` is tuned for running on macOS with support for the Metal GPU in Apple M-series processors. If you need to turn this off, or need support for the CUDA architecture, refer to the documentation at [node-llama-cpp](https://withcatai.github.io/node-llama-cpp/).
For advice on getting and preparing `llama2` see the documentation for the LLM version of this module.
A note to LangChain.js contributors: if you want to run the tests associated with this module you will need to put the path to your local model in the environment variable `LLAMA_PATH`.
Usage[](#usage "Direct link to Usage")
---------------------------------------
### Basic use[](#basic-use "Direct link to Basic use")
In this case we pass in a prompt wrapped as a message and expect a response.
```typescript
import { ChatLlamaCpp } from "@langchain/community/chat_models/llama_cpp";
import { HumanMessage } from "@langchain/core/messages";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const model = new ChatLlamaCpp({ modelPath: llamaPath });

const response = await model.invoke([
  new HumanMessage({ content: "My name is John." }),
]);
console.log({ response });

/*
  AIMessage {
    lc_serializable: true,
    lc_kwargs: { content: 'Hello John.', additional_kwargs: {} },
    lc_namespace: [ 'langchain', 'schema' ],
    content: 'Hello John.',
    name: undefined,
    additional_kwargs: {}
  }
*/
```
#### API Reference:
* [ChatLlamaCpp](https://api.js.langchain.com/classes/langchain_community_chat_models_llama_cpp.ChatLlamaCpp.html) from `@langchain/community/chat_models/llama_cpp`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
### System messages[](#system-messages "Direct link to System messages")
We can also provide a system message. Note that with the `llama_cpp` module, a system message will cause the creation of a new session.
```typescript
import { ChatLlamaCpp } from "@langchain/community/chat_models/llama_cpp";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const model = new ChatLlamaCpp({ modelPath: llamaPath });

const response = await model.invoke([
  new SystemMessage(
    "You are a pirate, responses must be very verbose and in pirate dialect, add 'Arr, m'hearty!' to each sentence."
  ),
  new HumanMessage("Tell me where Llamas come from?"),
]);
console.log({ response });

/*
  AIMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "Arr, m'hearty! Llamas come from the land of Peru.",
      additional_kwargs: {}
    },
    lc_namespace: [ 'langchain', 'schema' ],
    content: "Arr, m'hearty! Llamas come from the land of Peru.",
    name: undefined,
    additional_kwargs: {}
  }
*/
```
#### API Reference:
* [ChatLlamaCpp](https://api.js.langchain.com/classes/langchain_community_chat_models_llama_cpp.ChatLlamaCpp.html) from `@langchain/community/chat_models/llama_cpp`
* [SystemMessage](https://api.js.langchain.com/classes/langchain_core_messages.SystemMessage.html) from `@langchain/core/messages`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
### Chains[](#chains "Direct link to Chains")
This module can also be used with chains. Note that using more complex chains will require a suitably powerful version of `llama2`, such as the 70B version.
```typescript
import { ChatLlamaCpp } from "@langchain/community/chat_models/llama_cpp";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const model = new ChatLlamaCpp({ modelPath: llamaPath, temperature: 0.5 });

const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
const chain = new LLMChain({ llm: model, prompt });

const response = await chain.invoke({ product: "colorful socks" });
console.log({ response });

/*
  {
    text: `I'm not sure what you mean by "colorful socks" but here are some ideas:\n` +
      '\n' +
      '- Sock-it to me!\n' +
      '- Socks Away\n' +
      '- Fancy Footwear'
  }
*/
```
#### API Reference:
* [ChatLlamaCpp](https://api.js.langchain.com/classes/langchain_community_chat_models_llama_cpp.ChatLlamaCpp.html) from `@langchain/community/chat_models/llama_cpp`
* [LLMChain](https://api.js.langchain.com/classes/langchain_chains.LLMChain.html) from `langchain/chains`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
### Streaming[](#streaming "Direct link to Streaming")
We can also stream with Llama CPP. This can use a raw 'single prompt' string:
```typescript
import { ChatLlamaCpp } from "@langchain/community/chat_models/llama_cpp";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const model = new ChatLlamaCpp({ modelPath: llamaPath, temperature: 0.7 });

const stream = await model.stream("Tell me a short story about a happy Llama.");

for await (const chunk of stream) {
  console.log(chunk.content);
}

/*
  Once
  upon
  a
  time
  ,
  in
  a
  green
  and
  sunny
  field
  ...
*/
```
#### API Reference:
* [ChatLlamaCpp](https://api.js.langchain.com/classes/langchain_community_chat_models_llama_cpp.ChatLlamaCpp.html) from `@langchain/community/chat_models/llama_cpp`
Or you can provide multiple messages. Note that this takes the input and then submits a Llama 2-formatted prompt to the model.
```typescript
import { ChatLlamaCpp } from "@langchain/community/chat_models/llama_cpp";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const llamaCpp = new ChatLlamaCpp({ modelPath: llamaPath, temperature: 0.7 });

const stream = await llamaCpp.stream([
  new SystemMessage(
    "You are a pirate, responses must be very verbose and in pirate dialect."
  ),
  new HumanMessage("Tell me about Llamas?"),
]);

for await (const chunk of stream) {
  console.log(chunk.content);
}

/*
  Ar
  rr
  r
  ,
  me
  heart
  y
  !
  Ye
  be
  ask
  in
  '
  about
  llam
  as
  ,
  e
  h
  ?
  ...
*/
```
#### API Reference:
* [ChatLlamaCpp](https://api.js.langchain.com/classes/langchain_community_chat_models_llama_cpp.ChatLlamaCpp.html) from `@langchain/community/chat_models/llama_cpp`
* [SystemMessage](https://api.js.langchain.com/classes/langchain_core_messages.SystemMessage.html) from `@langchain/core/messages`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
Using the `invoke` method, we can also stream the generated tokens via a callback, and use a `signal` to abort the generation.
```typescript
import { ChatLlamaCpp } from "@langchain/community/chat_models/llama_cpp";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const model = new ChatLlamaCpp({ modelPath: llamaPath, temperature: 0.7 });

const controller = new AbortController();

setTimeout(() => {
  controller.abort();
  console.log("Aborted");
}, 5000);

await model.invoke(
  [
    new SystemMessage(
      "You are a pirate, responses must be very verbose and in pirate dialect."
    ),
    new HumanMessage("Tell me about Llamas?"),
  ],
  {
    signal: controller.signal,
    callbacks: [
      {
        handleLLMNewToken(token) {
          console.log(token);
        },
      },
    ],
  }
);

/*
  Once
  upon
  a
  time
  ,
  in
  a
  green
  and
  sunny
  field
  ...
  Aborted

  AbortError
*/
```
#### API Reference:
* [ChatLlamaCpp](https://api.js.langchain.com/classes/langchain_community_chat_models_llama_cpp.ChatLlamaCpp.html) from `@langchain/community/chat_models/llama_cpp`
* [SystemMessage](https://api.js.langchain.com/classes/langchain_core_messages.SystemMessage.html) from `@langchain/core/messages`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
https://js.langchain.com/v0.1/docs/integrations/chat/groq/
ChatGroq
========
Setup[](#setup "Direct link to Setup")
---------------------------------------
In order to use the Groq API you'll need an API key. You can sign up for a Groq account and create an API key [here](https://wow.groq.com/).
You'll first need to install the [`@langchain/groq`](https://www.npmjs.com/package/@langchain/groq) package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/groq
# or
yarn add @langchain/groq
# or
pnpm add @langchain/groq
```
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
Usage[](#usage "Direct link to Usage")
---------------------------------------
```typescript
import { ChatGroq } from "@langchain/groq";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatGroq({
  apiKey: process.env.GROQ_API_KEY,
});
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const chain = prompt.pipe(model);
const response = await chain.invoke({
  input: "Hello",
});
console.log("response", response);
/**
response AIMessage {
  content: "Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or a question you have?",
}
 */
```
#### API Reference:
* [ChatGroq](https://api.js.langchain.com/classes/langchain_groq.ChatGroq.html) from `@langchain/groq`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/2ba59207-1383-4e42-b6a6-c1ddcfcd5710/r)
Tool calling[](#tool-calling "Direct link to Tool calling")
------------------------------------------------------------
Groq chat models support calling multiple functions to get all required data to answer a question. Here's an example:
```typescript
import { ChatGroq } from "@langchain/groq";

// Mocked out function, could be a database/API call in production
function getCurrentWeather(location: string, _unit?: string) {
  if (location.toLowerCase().includes("tokyo")) {
    return JSON.stringify({ location, temperature: "10", unit: "celsius" });
  } else if (location.toLowerCase().includes("san francisco")) {
    return JSON.stringify({
      location,
      temperature: "72",
      unit: "fahrenheit",
    });
  } else {
    return JSON.stringify({ location, temperature: "22", unit: "celsius" });
  }
}

// Bind function to the model as a tool
const chat = new ChatGroq({
  model: "mixtral-8x7b-32768",
  maxTokens: 128,
}).bind({
  tools: [
    {
      type: "function",
      function: {
        name: "get_current_weather",
        description: "Get the current weather in a given location",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The city and state, e.g. San Francisco, CA",
            },
            unit: { type: "string", enum: ["celsius", "fahrenheit"] },
          },
          required: ["location"],
        },
      },
    },
  ],
  tool_choice: "auto",
});

const res = await chat.invoke([
  ["human", "What's the weather like in San Francisco?"],
]);

console.log(res.additional_kwargs.tool_calls);
/*
  [
    {
      id: 'call_01htk055jpftwbb9tvphyf9bnf',
      type: 'function',
      function: {
        name: 'get_current_weather',
        arguments: '{"location":"San Francisco, CA"}'
      }
    }
  ]
*/
```
#### API Reference:
* [ChatGroq](https://api.js.langchain.com/classes/langchain_groq.ChatGroq.html) from `@langchain/groq`
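When a tool call comes back, its `arguments` field is a JSON string that must be parsed before invoking the local function. Here is a minimal sketch of that dispatch step; the `toolCall` value is a hypothetical response shaped like the example output above, and `getCurrentWeather` is a simplified stub:

```typescript
// Simplified stub of the weather function (always returns San Francisco data).
function getCurrentWeather(location: string, _unit?: string) {
  return JSON.stringify({ location, temperature: "72", unit: "fahrenheit" });
}

// Hypothetical tool call, mirroring the shape of res.additional_kwargs.tool_calls[0].
const toolCall = {
  id: "call_01htk055jpftwbb9tvphyf9bnf",
  type: "function",
  function: {
    name: "get_current_weather",
    arguments: '{"location":"San Francisco, CA"}',
  },
};

// Arguments arrive as a JSON string, so parse them before calling the function.
const args = JSON.parse(toolCall.function.arguments);
const weather = getCurrentWeather(args.location, args.unit);
console.log(weather);
```

The result string can then be passed back to the model in a follow-up message so it can compose a final answer.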
### `.withStructuredOutput({ ... })`[](#withstructuredoutput-- "Direct link to withstructuredoutput--")
info
The `.withStructuredOutput` method is in beta. It is actively being worked on, so the API may change.
You can also use the `.withStructuredOutput({ ... })` method to coerce `ChatGroq` into returning a structured output.
The method allows for passing in either a Zod object, or a valid JSON schema (like what is returned from [`zodToJsonSchema`](https://www.npmjs.com/package/zod-to-json-schema)).
Using the method is simple. Just define your LLM and call `.withStructuredOutput({ ... })` on it, passing the desired schema.
Here is an example using a Zod schema and the `functionCalling` mode (default mode):
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatGroq } from "@langchain/groq";
import { z } from "zod";

const model = new ChatGroq({
  temperature: 0,
  model: "mixtral-8x7b-32768",
});

const calculatorSchema = z.object({
  operation: z.enum(["add", "subtract", "multiply", "divide"]),
  number1: z.number(),
  number2: z.number(),
});

const modelWithStructuredOutput = model.withStructuredOutput(calculatorSchema);

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are VERY bad at math and must always use a calculator."],
  ["human", "Please help me!! What is 2 + 2?"],
]);
const chain = prompt.pipe(modelWithStructuredOutput);
const result = await chain.invoke({});
console.log(result);
/*
  { operation: 'add', number1: 2, number2: 2 }
*/

/**
 * You can also specify 'includeRaw' to return the parsed
 * and raw output in the result.
 */
const includeRawModel = model.withStructuredOutput(calculatorSchema, {
  name: "calculator",
  includeRaw: true,
});
const includeRawChain = prompt.pipe(includeRawModel);
const includeRawResult = await includeRawChain.invoke({});
console.log(includeRawResult);
/*
  {
    raw: AIMessage {
      content: '',
      additional_kwargs: {
        tool_calls: [
          {
            "id": "call_01htk094ktfgxtkwj40n0ehg61",
            "type": "function",
            "function": {
              "name": "calculator",
              "arguments": "{\"operation\": \"add\", \"number1\": 2, \"number2\": 2}"
            }
          }
        ]
      },
      response_metadata: {
        "tokenUsage": {
          "completionTokens": 197,
          "promptTokens": 1214,
          "totalTokens": 1411
        },
        "finish_reason": "tool_calls"
      }
    },
    parsed: { operation: 'add', number1: 2, number2: 2 }
  }
*/
```
#### API Reference:
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [ChatGroq](https://api.js.langchain.com/classes/langchain_groq.ChatGroq.html) from `@langchain/groq`
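Since the method also accepts a plain JSON schema, the Zod calculator schema above could equivalently be written as an object literal. This is a sketch; the field names simply mirror the Zod example:

```typescript
// JSON schema equivalent of the Zod calculator schema above.
const calculatorJsonSchema = {
  type: "object",
  properties: {
    operation: {
      type: "string",
      enum: ["add", "subtract", "multiply", "divide"],
    },
    number1: { type: "number" },
    number2: { type: "number" },
  },
  required: ["operation", "number1", "number2"],
};

// Passed the same way as a Zod object (not run here):
// const modelWithStructuredOutput =
//   model.withStructuredOutput(calculatorJsonSchema, { name: "calculator" });
console.log(calculatorJsonSchema.properties.operation.enum);
```

Note that with a raw JSON schema you lose the compile-time type inference that Zod provides for the parsed result.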
Streaming[](#streaming "Direct link to Streaming")
---------------------------------------------------
Groq's API also supports streaming token responses. The example below demonstrates how to use this feature.
```typescript
import { ChatGroq } from "@langchain/groq";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatGroq({
  apiKey: process.env.GROQ_API_KEY,
});
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const outputParser = new StringOutputParser();
const chain = prompt.pipe(model).pipe(outputParser);
const response = await chain.stream({
  input: "Hello",
});
let res = "";
for await (const item of response) {
  res += item;
  console.log("stream:", res);
}
/**
stream: Hello
stream: Hello!
stream: Hello! I
stream: Hello! I'
stream: Hello! I'm
stream: Hello! I'm happy
stream: Hello! I'm happy to
stream: Hello! I'm happy to assist
...
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or a question you have?
 */
```
#### API Reference:
* [ChatGroq](https://api.js.langchain.com/classes/langchain_groq.ChatGroq.html) from `@langchain/groq`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/72832eb5-b9ae-4ce0-baa2-c2e95eca61a7/r)
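The accumulation pattern in the loop above (`res += item`) works with any async iterable of string chunks. Here is a dependency-free sketch using a mock token stream; the tokens and the `mockStream`/`collect` names are hypothetical stand-ins for what `chain.stream` yields, not part of the library:

```javascript
// Mock token stream standing in for chain.stream() (tokens are hypothetical).
async function* mockStream() {
  for (const token of ["Hello", "!", " How", " can", " I", " help", "?"]) {
    yield token;
  }
}

// Accumulate partial output exactly as in the streaming example above.
async function collect(stream) {
  let res = "";
  for await (const item of stream) {
    res += item;
    console.log("stream:", res);
  }
  return res;
}

collect(mockStream()).then((res) => console.log("final:", res));
```

Each iteration of `for await...of` awaits the next chunk, so partial results can be shown to the user before the full response arrives.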
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat/ni_bittensor/
NIBittensorChatModel
====================
LangChain.js offers experimental support for Neural Internet's Bittensor chat models.
Here's an example:
```typescript
import { NIBittensorChatModel } from "langchain/experimental/chat_models/bittensor";
import { HumanMessage } from "@langchain/core/messages";

const chat = new NIBittensorChatModel();
const message = new HumanMessage("What is bittensor?");
const res = await chat.invoke([message]);
console.log({ res });
/* { res: "\nBittensor is opensource protocol..." } */
```
#### API Reference:
* [NIBittensorChatModel](https://api.js.langchain.com/classes/langchain_experimental_chat_models_bittensor.NIBittensorChatModel.html) from `langchain/experimental/chat_models/bittensor`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
https://js.langchain.com/v0.1/docs/integrations/chat/minimax/
Minimax
=======
[Minimax](https://api.minimax.chat) is a Chinese startup that provides natural language processing models for companies and individuals.
This example demonstrates using LangChain.js to interact with Minimax.
Setup[](#setup "Direct link to Setup")
---------------------------------------
To use Minimax models, you'll need a [Minimax account](https://api.minimax.chat), an [API key](https://api.minimax.chat/user-center/basic-information/interface-key), and a [Group ID](https://api.minimax.chat/user-center/basic-information).
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```shell
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
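In Node.js, the constructor reads credentials from the environment by default (see the `MINIMAX_API_KEY` and `MINIMAX_GROUP_ID` variables used in the examples below). A minimal shell setup might look like this; the values are placeholders, not real credentials:

```shell
# Placeholder values -- substitute your own credentials from the Minimax console.
export MINIMAX_API_KEY="your-api-key"
export MINIMAX_GROUP_ID="your-group-id"
```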
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
Basic usage[](#basic-usage "Direct link to Basic usage")
---------------------------------------------------------
```typescript
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import { HumanMessage } from "@langchain/core/messages";

// Use abab5.5
const abab5_5 = new ChatMinimax({
  model: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
});

const messages = [
  new HumanMessage({
    content: "Hello",
  }),
];

const res = await abab5_5.invoke(messages);
console.log(res);
/*
AIChatMessage {
  text: 'Hello! How may I assist you today?',
  name: undefined,
  additional_kwargs: {}
}
*/

// Use abab5
const abab5 = new ChatMinimax({
  proVersion: false,
  model: "abab5-chat",
  minimaxGroupId: process.env.MINIMAX_GROUP_ID, // In Node.js defaults to process.env.MINIMAX_GROUP_ID
  minimaxApiKey: process.env.MINIMAX_API_KEY, // In Node.js defaults to process.env.MINIMAX_API_KEY
});

const result = await abab5.invoke([
  new HumanMessage({
    content: "Hello",
    name: "XiaoMing",
  }),
]);
console.log(result);
/*
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: 'Hello! Can I help you with anything?',
    additional_kwargs: { function_call: undefined }
  },
  lc_namespace: [ 'langchain', 'schema' ],
  content: 'Hello! Can I help you with anything?',
  name: undefined,
  additional_kwargs: { function_call: undefined }
}
*/
```
#### API Reference:
* [ChatMinimax](https://api.js.langchain.com/classes/langchain_community_chat_models_minimax.ChatMinimax.html) from `@langchain/community/chat_models/minimax`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
Chain model calls[](#chain-model-calls "Direct link to Chain model calls")
---------------------------------------------------------------------------
```typescript
import { LLMChain } from "langchain/chains";
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
} from "@langchain/core/prompts";

// We can also construct an LLMChain from a ChatPromptTemplate and a chat model.
const chat = new ChatMinimax({ temperature: 0.01 });

const chatPrompt = ChatPromptTemplate.fromMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "You are a helpful assistant that translates {input_language} to {output_language}."
  ),
  HumanMessagePromptTemplate.fromTemplate("{text}"),
]);

const chainB = new LLMChain({
  prompt: chatPrompt,
  llm: chat,
});

const resB = await chainB.invoke({
  input_language: "English",
  output_language: "Chinese",
  text: "I love programming.",
});
console.log({ resB });
```
#### API Reference:
* [LLMChain](https://api.js.langchain.com/classes/langchain_chains.LLMChain.html) from `langchain/chains`
* [ChatMinimax](https://api.js.langchain.com/classes/langchain_community_chat_models_minimax.ChatMinimax.html) from `@langchain/community/chat_models/minimax`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [HumanMessagePromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.HumanMessagePromptTemplate.html) from `@langchain/core/prompts`
* [SystemMessagePromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.SystemMessagePromptTemplate.html) from `@langchain/core/prompts`
With function calls[](#with-function-calls "Direct link to With function calls")
---------------------------------------------------------------------------------
```typescript
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import { HumanMessage } from "@langchain/core/messages";

const functionSchema = {
  name: "get_weather",
  description: "Get weather information.",
  parameters: {
    type: "object",
    properties: {
      location: {
        type: "string",
        description: "The location to get the weather",
      },
    },
    required: ["location"],
  },
};

// Bind function arguments to the model.
// All subsequent invoke calls will use the bound parameters.
// "functions.parameters" must be formatted as JSON Schema.
const model = new ChatMinimax({
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
}).bind({
  functions: [functionSchema],
});

const result = await model.invoke([
  new HumanMessage({
    content: "What is the weather like in NewYork tomorrow?",
    name: "I",
  }),
]);
console.log(result);
/*
AIMessage {
  content: '',
  additional_kwargs: {
    function_call: { name: 'get_weather', arguments: '{"location": "NewYork"}' }
  }
}
*/

// Alternatively, you can pass function call arguments as an additional argument as a one-off:
const minimax = new ChatMinimax({
  model: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
});

const result2 = await minimax.invoke(
  [new HumanMessage("What is the weather like in NewYork tomorrow?")],
  {
    functions: [functionSchema],
  }
);
console.log(result2);
/*
AIMessage {
  content: '',
  additional_kwargs: {
    function_call: { name: 'get_weather', arguments: '{"location": "NewYork"}' }
  }
}
*/
```
#### API Reference:
* [ChatMinimax](https://api.js.langchain.com/classes/langchain_community_chat_models_minimax.ChatMinimax.html) from `@langchain/community/chat_models/minimax`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
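The `function_call` payload shown above arrives with `arguments` as a JSON string, not an object. Here is a small dependency-free sketch of unpacking and sanity-checking it; `parseFunctionCall` is our own hypothetical helper, not part of `@langchain/community`:

```javascript
// Parse a function_call payload like the ones shown above and verify that
// the required parameters are present (minimal sketch, no validation library).
function parseFunctionCall(functionCall, requiredParams) {
  const args = JSON.parse(functionCall.arguments);
  for (const param of requiredParams) {
    if (!(param in args)) {
      throw new Error(`Missing required parameter: ${param}`);
    }
  }
  return { name: functionCall.name, args };
}

const call = parseFunctionCall(
  { name: "get_weather", arguments: '{"location": "NewYork"}' },
  ["location"]
);
console.log(call.name, call.args.location); // get_weather NewYork
```

Checking the parsed arguments against the schema's `required` list before dispatching to your own tool code guards against malformed model output.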
Functions with Zod[](#functions-with-zod "Direct link to Functions with Zod")
------------------------------------------------------------------------------
```typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import { HumanMessage } from "@langchain/core/messages";

const extractionFunctionZodSchema = z.object({
  location: z.string().describe("The location to get the weather"),
});

// Bind function arguments to the model.
// "functions.parameters" must be formatted as JSON Schema.
// We translate the above Zod schema into JSON schema using the "zodToJsonSchema" package.
const model = new ChatMinimax({
  model: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
}).bind({
  functions: [
    {
      name: "get_weather",
      description: "Get weather information.",
      parameters: zodToJsonSchema(extractionFunctionZodSchema),
    },
  ],
});

const result = await model.invoke([
  new HumanMessage({
    content: "What is the weather like in Shanghai tomorrow?",
    name: "XiaoMing",
  }),
]);
console.log(result);
/*
AIMessage {
  content: '',
  additional_kwargs: {
    function_call: { name: 'get_weather', arguments: '{"location": "Shanghai"}' }
  }
}
*/
```
#### API Reference:
* [ChatMinimax](https://api.js.langchain.com/classes/langchain_community_chat_models_minimax.ChatMinimax.html) from `@langchain/community/chat_models/minimax`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
With glyph[](#with-glyph "Direct link to With glyph")
------------------------------------------------------
This feature can help users force the model to return content in the requested format.
```typescript
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
} from "@langchain/core/prompts";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatMinimax({
  model: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
}).bind({
  replyConstraints: {
    sender_type: "BOT",
    sender_name: "MM Assistant",
    glyph: {
      type: "raw",
      raw_glyph: "The translated text:{{gen 'content'}}",
    },
  },
});

const messagesTemplate = ChatPromptTemplate.fromMessages([
  HumanMessagePromptTemplate.fromTemplate(
    "Please help me translate the following sentence in English: {text}"
  ),
]);
const messages = await messagesTemplate.formatMessages({ text: "我是谁" });
const result = await model.invoke(messages);
console.log(result);
/*
AIMessage {
  content: 'The translated text: Who am I\x02',
  name: undefined,
  additional_kwargs: { function_call: undefined }
}
*/

// use json_value
const modelMinimax = new ChatMinimax({
  model: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
}).bind({
  replyConstraints: {
    sender_type: "BOT",
    sender_name: "MM Assistant",
    glyph: {
      type: "json_value",
      json_properties: {
        name: { type: "string" },
        age: { type: "number" },
        is_student: { type: "boolean" },
        is_boy: { type: "boolean" },
        courses: {
          type: "object",
          properties: {
            name: { type: "string" },
            score: { type: "number" },
          },
        },
      },
    },
  },
});

const result2 = await modelMinimax.invoke([
  new HumanMessage({
    content:
      "My name is Yue Wushuang, 18 years old this year, just finished the test with 99.99 points.",
    name: "XiaoMing",
  }),
]);
console.log(result2);
/*
AIMessage {
  content: '{\n' +
    '  "name": "Yue Wushuang",\n' +
    '  "is_student": true,\n' +
    '  "is_boy": false,\n' +
    '  "courses": {\n' +
    '    "name": "Mathematics",\n' +
    '    "score": 99.99\n' +
    '  },\n' +
    '  "age": 18\n' +
    '}',
  additional_kwargs: { function_call: undefined }
}
*/
```
#### API Reference:
* [ChatMinimax](https://api.js.langchain.com/classes/langchain_community_chat_models_minimax.ChatMinimax.html) from `@langchain/community/chat_models/minimax`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [HumanMessagePromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.HumanMessagePromptTemplate.html) from `@langchain/core/prompts`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
With sample messages[](#with-sample-messages "Direct link to With sample messages")
------------------------------------------------------------------------------------
This feature can help the model better understand the kind of response the user wants, including but not limited to its content, format, and response style.
```typescript
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import { AIMessage, HumanMessage } from "@langchain/core/messages";

const model = new ChatMinimax({
  model: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
}).bind({
  sampleMessages: [
    new HumanMessage({
      content: "Turn A5 into red and modify the content to minimax.",
    }),
    new AIMessage({
      content: "select A5 color red change minimax",
    }),
  ],
});

const result = await model.invoke([
  new HumanMessage({
    content:
      'Please reply to my content according to the following requirements: According to the following interface list, give the order and parameters of calling the interface for the content I gave. You just need to give the order and parameters of calling the interface, and do not give any other output. The following is the available interface list: select: select specific table position, input parameter use letters and numbers to determine, for example "B13"; color: dye the selected table position, input parameters use the English name of the color, for example "red"; change: modify the selected table position, input parameters use strings.',
  }),
  new HumanMessage({
    content: "Process B6 to gray and modify the content to question.",
  }),
]);
console.log(result);
```
#### API Reference:
* [ChatMinimax](https://api.js.langchain.com/classes/langchain_community_chat_models_minimax.ChatMinimax.html) from `@langchain/community/chat_models/minimax`
* [AIMessage](https://api.js.langchain.com/classes/langchain_core_messages.AIMessage.html) from `@langchain/core/messages`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
With plugins[](#with-plugins "Direct link to With plugins")
------------------------------------------------------------
This feature supports calling tools like a search engine to get additional data that can assist the model.
```typescript
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatMinimax({
  model: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
}).bind({
  plugins: ["plugin_web_search"],
});

const result = await model.invoke([
  new HumanMessage({
    content: "What is the weather like in NewYork tomorrow?",
  }),
]);
console.log(result);
/*
AIMessage {
  content: 'The weather in Shanghai tomorrow is expected to be hot. Please note that this is just a forecast and the actual weather conditions may vary.',
  name: undefined,
  additional_kwargs: { function_call: undefined }
}
*/
```
#### API Reference:
* [ChatMinimax](https://api.js.langchain.com/classes/langchain_community_chat_models_minimax.ChatMinimax.html) from `@langchain/community/chat_models/minimax`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
https://js.langchain.com/v0.1/docs/integrations/chat/ollama_functions/
Ollama Functions
================
LangChain offers an experimental wrapper around open source models run locally via [Ollama](https://github.com/jmorganca/ollama) that gives them the same API as OpenAI Functions.
Note that more powerful and capable models will perform better with complex schemas and/or multiple functions. The examples below use [Mistral](https://ollama.ai/library/mistral).
Setup
-----
Follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance.
Initialize model
----------------
You can initialize this wrapper the same way you'd initialize a standard `ChatOllama` instance:
```typescript
import { OllamaFunctions } from "langchain/experimental/chat_models/ollama_functions";

const model = new OllamaFunctions({
  temperature: 0.1,
  model: "mistral",
});
```
Passing in functions
--------------------
You can now pass in functions the same way as OpenAI:
```typescript
import { OllamaFunctions } from "langchain/experimental/chat_models/ollama_functions";
import { HumanMessage } from "@langchain/core/messages";

const model = new OllamaFunctions({
  temperature: 0.1,
  model: "mistral",
}).bind({
  functions: [
    {
      name: "get_current_weather",
      description: "Get the current weather in a given location",
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "The city and state, e.g. San Francisco, CA",
          },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
    },
  ],
  // You can set the `function_call` arg to force the model to use a function
  function_call: {
    name: "get_current_weather",
  },
});

const response = await model.invoke([
  new HumanMessage({
    content: "What's the weather in Boston?",
  }),
]);

console.log(response);

/*
  AIMessage {
    content: '',
    additional_kwargs: {
      function_call: {
        name: 'get_current_weather',
        arguments: '{"location":"Boston, MA","unit":"fahrenheit"}'
      }
    }
  }
*/
```
#### API Reference:
* [OllamaFunctions](https://api.js.langchain.com/classes/langchain_experimental_chat_models_ollama_functions.OllamaFunctions.html) from `langchain/experimental/chat_models/ollama_functions`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
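Note that `arguments` in the returned `function_call` comes back as a JSON string rather than an object, so it needs to be parsed before use. A minimal sketch, using a hypothetical stand-in object shaped like the response printed above:

```typescript
// Hypothetical stand-in for the AIMessage above — only the fields read
// below are included.
const response = {
  additional_kwargs: {
    function_call: {
      name: "get_current_weather",
      arguments: '{"location":"Boston, MA","unit":"fahrenheit"}',
    },
  },
};

const call = response.additional_kwargs.function_call;
// The arguments are a JSON string, so parse them before use.
const args = JSON.parse(call.arguments) as { location: string; unit: string };
console.log(call.name, args.location, args.unit);
```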
Using for extraction
--------------------
```typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { OllamaFunctions } from "langchain/experimental/chat_models/ollama_functions";
import { JsonOutputFunctionsParser } from "langchain/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";

const EXTRACTION_TEMPLATE = `Extract and save the relevant entities mentioned in the following passage together with their properties.

Passage:
{input}`;

const prompt = PromptTemplate.fromTemplate(EXTRACTION_TEMPLATE);

// Use Zod for easier schema declaration
const schema = z.object({
  people: z.array(
    z.object({
      name: z.string().describe("The name of a person"),
      height: z.number().describe("The person's height"),
      hairColor: z.optional(z.string()).describe("The person's hair color"),
    })
  ),
});

const model = new OllamaFunctions({
  temperature: 0.1,
  model: "mistral",
}).bind({
  functions: [
    {
      name: "information_extraction",
      description: "Extracts the relevant information from the passage.",
      parameters: {
        type: "object",
        properties: zodToJsonSchema(schema),
      },
    },
  ],
  function_call: {
    name: "information_extraction",
  },
});

// Use a JsonOutputFunctionsParser to get the parsed JSON response directly.
const chain = prompt.pipe(model).pipe(new JsonOutputFunctionsParser());

const response = await chain.invoke({
  input:
    "Alex is 5 feet tall. Claudia is 1 foot taller than Alex and jumps higher than him. Claudia has orange hair and Alex is blonde.",
});

console.log(response);

/*
  {
    people: [
      { name: 'Alex', height: 5, hairColor: 'blonde' },
      { name: 'Claudia', height: 6, hairColor: 'orange' }
    ]
  }
*/
```
#### API Reference:
* [OllamaFunctions](https://api.js.langchain.com/classes/langchain_experimental_chat_models_ollama_functions.OllamaFunctions.html) from `langchain/experimental/chat_models/ollama_functions`
* [JsonOutputFunctionsParser](https://api.js.langchain.com/classes/langchain_output_parsers.JsonOutputFunctionsParser.html) from `langchain/output_parsers`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
You can see a LangSmith trace of what this looks like here: [https://smith.langchain.com/public/31457ea4-71ca-4e29-a1e0-aa80e6828883/r](https://smith.langchain.com/public/31457ea4-71ca-4e29-a1e0-aa80e6828883/r)
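Because the schema is enforced only by the model, not by the runtime, it can be worth validating the parsed result before using it downstream. A dependency-free sketch — the `parsed` object here is a hypothetical stand-in matching the example output above:

```typescript
// The extraction chain returns plain JSON — the schema is only enforced by
// the model, so a runtime check before use is a reasonable precaution.
type Person = { name: string; height: number; hairColor?: string };

function isPersonList(value: unknown): value is { people: Person[] } {
  if (typeof value !== "object" || value === null) return false;
  const people = (value as { people?: unknown }).people;
  return (
    Array.isArray(people) &&
    people.every(
      (p) =>
        typeof p === "object" &&
        p !== null &&
        typeof p.name === "string" &&
        typeof p.height === "number"
    )
  );
}

// Hypothetical parsed output, matching the example response above.
const parsed: unknown = {
  people: [
    { name: "Alex", height: 5, hairColor: "blonde" },
    { name: "Claudia", height: 6, hairColor: "orange" },
  ],
};

console.log(isPersonList(parsed)); // true
```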
Customization
-------------
Behind the scenes, this uses Ollama's JSON mode to constrain output to JSON, then passes the tool schemas as JSON schema into the prompt.
Because different models have different strengths, it may be helpful to pass in your own system prompt. Here's an example:
```typescript
import { OllamaFunctions } from "langchain/experimental/chat_models/ollama_functions";
import { HumanMessage } from "@langchain/core/messages";

// Custom system prompt to format tools. You must encourage the model
// to wrap output in a JSON object with "tool" and "tool_input" properties.
const toolSystemPromptTemplate = `You have access to the following tools:

{tools}

To use a tool, respond with a JSON object with the following structure:
{{
  "tool": <name of the called tool>,
  "tool_input": <parameters for the tool matching the above JSON schema>
}}`;

const model = new OllamaFunctions({
  temperature: 0.1,
  model: "mistral",
  toolSystemPromptTemplate,
}).bind({
  functions: [
    {
      name: "get_current_weather",
      description: "Get the current weather in a given location",
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "The city and state, e.g. San Francisco, CA",
          },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
    },
  ],
  // You can set the `function_call` arg to force the model to use a function
  function_call: {
    name: "get_current_weather",
  },
});

const response = await model.invoke([
  new HumanMessage({
    content: "What's the weather in Boston?",
  }),
]);

console.log(response);

/*
  AIMessage {
    content: '',
    additional_kwargs: {
      function_call: {
        name: 'get_current_weather',
        arguments: '{"location":"Boston, MA","unit":"fahrenheit"}'
      }
    }
  }
*/
```
#### API Reference:
* [OllamaFunctions](https://api.js.langchain.com/classes/langchain_experimental_chat_models_ollama_functions.OllamaFunctions.html) from `langchain/experimental/chat_models/ollama_functions`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
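The `{tools}` slot in the template above is filled with the JSON schemas of the bound functions, and the escaped double braces collapse to single braces as in other LangChain prompt templates. As a rough illustration only — the library's exact rendering may differ — here is a hypothetical expansion:

```typescript
// Hypothetical rendering of the {tools} placeholder; this is NOT the
// library's actual implementation, just a sketch of the idea.
const toolSystemPromptTemplate = `You have access to the following tools:

{tools}

To use a tool, respond with a JSON object with the following structure:
{{
  "tool": <name of the called tool>,
  "tool_input": <parameters for the tool matching the above JSON schema>
}}`;

const functions = [
  {
    name: "get_current_weather",
    description: "Get the current weather in a given location",
    parameters: { type: "object", properties: {} },
  },
];

const rendered = toolSystemPromptTemplate
  .replace("{tools}", JSON.stringify(functions, null, 2))
  .replace("{{", "{")
  .replace("}}", "}");

console.log(rendered.includes("get_current_weather")); // true
```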
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc.
ChatPrem
========
Setup
-----
1. Create a Prem AI account and get your API key [here](https://app.premai.io/accounts/signup/).
2. Export or set your API key inline. The ChatPrem class defaults to `process.env.PREM_API_KEY`.
```bash
export PREM_API_KEY=your-api-key
```
You can use models provided by Prem AI as follows:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community
# or with Yarn:
yarn add @langchain/community
# or with pnpm:
pnpm add @langchain/community
```
```typescript
import { ChatPrem } from "@langchain/community/chat_models/premai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatPrem({
  // In Node.js defaults to process.env.PREM_API_KEY
  apiKey: "YOUR-API-KEY",
  // In Node.js defaults to process.env.PREM_PROJECT_ID
  project_id: "YOUR-PROJECT_ID",
});

console.log(await model.invoke([new HumanMessage("Hello there!")]));
```
#### API Reference:
* [ChatPrem](https://api.js.langchain.com/classes/langchain_community_chat_models_premai.ChatPrem.html) from `@langchain/community/chat_models/premai`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
PromptLayerChatOpenAI
=====================
You can pass in the optional `returnPromptLayerId` boolean to get a `promptLayerRequestId` back with each generation. Here is an example of retrieving the PromptLayer request ID:
```typescript
import { PromptLayerChatOpenAI } from "langchain/chat_models/openai";
import { SystemMessage } from "@langchain/core/messages";

const chat = new PromptLayerChatOpenAI({
  returnPromptLayerId: true,
});

const respA = await chat.generate([
  [
    new SystemMessage(
      "You are a helpful assistant that translates English to French."
    ),
  ],
]);

console.log(JSON.stringify(respA, null, 3));

/*
  {
    "generations": [
      [
        {
          "text": "Bonjour! Je suis un assistant utile qui peut vous aider à traduire de l'anglais vers le français. Que puis-je faire pour vous aujourd'hui?",
          "message": {
            "type": "ai",
            "data": {
              "content": "Bonjour! Je suis un assistant utile qui peut vous aider à traduire de l'anglais vers le français. Que puis-je faire pour vous aujourd'hui?"
            }
          },
          "generationInfo": {
            "promptLayerRequestId": 2300682
          }
        }
      ]
    ],
    "llmOutput": {
      "tokenUsage": {
        "completionTokens": 35,
        "promptTokens": 19,
        "totalTokens": 54
      }
    }
  }
*/
```
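Once you have the result, the request ID lives under `generationInfo`. A sketch of pulling it out, using a hypothetical stand-in object shaped like `respA` above:

```typescript
// Stand-in shaped like the `respA` result above; only the fields read
// below are included.
const respA = {
  generations: [
    [
      {
        text: "Bonjour!",
        generationInfo: { promptLayerRequestId: 2300682 },
      },
    ],
  ],
};

// One request ID per generation, under generationInfo.
const requestId = respA.generations[0][0].generationInfo.promptLayerRequestId;
console.log(requestId); // 2300682
```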
ChatTogetherAI
==============
Setup
-----
1. Create a TogetherAI account and get your API key [here](https://api.together.xyz/).
2. Export or set your API key inline. The ChatTogetherAI class defaults to `process.env.TOGETHER_AI_API_KEY`.
```bash
export TOGETHER_AI_API_KEY=your-api-key
```
You can use models provided by TogetherAI as follows:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community
# or with Yarn:
yarn add @langchain/community
# or with pnpm:
pnpm add @langchain/community
```
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
```typescript
import { ChatTogetherAI } from "@langchain/community/chat_models/togetherai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatTogetherAI({
  temperature: 0.9,
  // In Node.js defaults to process.env.TOGETHER_AI_API_KEY
  apiKey: "YOUR-API-KEY",
});

console.log(await model.invoke([new HumanMessage("Hello there!")]));
```
#### API Reference:
* [ChatTogetherAI](https://api.js.langchain.com/classes/langchain_community_chat_models_togetherai.ChatTogetherAI.html) from `@langchain/community/chat_models/togetherai`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
Tool calling & JSON mode
------------------------
The TogetherAI chat model supports JSON mode and tool calling.
### Tool calling
```typescript
import { ChatTogetherAI } from "@langchain/community/chat_models/togetherai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { convertToOpenAITool } from "@langchain/core/utils/function_calling";
import { Calculator } from "@langchain/community/tools/calculator";

// Use a pre-built tool
const calculatorTool = convertToOpenAITool(new Calculator());

const modelWithCalculator = new ChatTogetherAI({
  temperature: 0,
  // This is the default env variable name it will look for if none is passed.
  apiKey: process.env.TOGETHER_AI_API_KEY,
  // Together JSON mode/tool calling only supports a select number of models
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1",
}).bind({
  // Bind the tool to the model.
  tools: [calculatorTool],
  tool_choice: calculatorTool, // Specify what tool the model should use
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a super not-so-smart mathematician."],
  ["human", "Help me out, how can I add {math}?"],
]);

// Use LCEL to chain the prompt to the model.
const response = await prompt.pipe(modelWithCalculator).invoke({
  math: "2 plus 3",
});

console.log(JSON.stringify(response.additional_kwargs.tool_calls));

/*
  [
    {
      "id": "call_f4lzeeuho939vs4dilwd7267",
      "type": "function",
      "function": {
        "name": "calculator",
        "arguments": "{\"input\":\"2 + 3\"}"
      }
    }
  ]
*/
```
#### API Reference:
* [ChatTogetherAI](https://api.js.langchain.com/classes/langchain_community_chat_models_togetherai.ChatTogetherAI.html) from `@langchain/community/chat_models/togetherai`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [convertToOpenAITool](https://api.js.langchain.com/functions/langchain_core_utils_function_calling.convertToOpenAITool.html) from `@langchain/core/utils/function_calling`
* [Calculator](https://api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator`
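As with OpenAI tool calls, each entry's `arguments` field is a JSON string. A sketch of unpacking it, using a hypothetical stand-in payload matching the output above:

```typescript
// Stand-in for `response.additional_kwargs.tool_calls` from the example above.
const toolCalls = [
  {
    id: "call_f4lzeeuho939vs4dilwd7267",
    type: "function",
    function: { name: "calculator", arguments: '{"input":"2 + 3"}' },
  },
];

// Parse each call's JSON-string arguments into a plain object.
const parsedCalls = toolCalls.map((call) => ({
  name: call.function.name,
  args: JSON.parse(call.function.arguments) as { input: string },
}));

console.log(parsedCalls[0].name, parsedCalls[0].args.input);
```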
tip
See a LangSmith trace of the above example [here](https://smith.langchain.com/public/5082ea20-c2de-410f-80e2-dbdfbf4d8adb/r).
### JSON mode
To use JSON mode you must include the string "JSON" inside the prompt. Typical conventions include telling the model to use JSON, e.g. `Respond to the user in JSON format`.
```typescript
import { ChatTogetherAI } from "@langchain/community/chat_models/togetherai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Define a JSON schema for the response
const responseSchema = {
  type: "object",
  properties: {
    orderedArray: {
      type: "array",
      items: {
        type: "number",
      },
    },
  },
  required: ["orderedArray"],
};

const modelWithJsonSchema = new ChatTogetherAI({
  temperature: 0,
  apiKey: process.env.TOGETHER_AI_API_KEY,
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1",
}).bind({
  response_format: {
    type: "json_object", // Define the response format as a JSON object
    schema: responseSchema, // Pass in the schema for the model's response
  },
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant who responds in JSON."],
  ["human", "Please list this output in order of DESC {unorderedList}."],
]);

// Use LCEL to chain the prompt to the model.
const response = await prompt.pipe(modelWithJsonSchema).invoke({
  unorderedList: "[1, 4, 2, 8]",
});

console.log(JSON.parse(response.content as string));

/*
  { orderedArray: [ 8, 4, 2, 1 ] }
*/
```
#### API Reference:
* [ChatTogetherAI](https://api.js.langchain.com/classes/langchain_community_chat_models_togetherai.ChatTogetherAI.html) from `@langchain/community/chat_models/togetherai`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
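Since forgetting the keyword is an easy mistake, a small pre-flight check can help. A sketch, assuming a simple case-sensitive substring test is sufficient (the docs only say the string "JSON" must appear in the prompt):

```typescript
// Throw early if the system prompt won't satisfy JSON mode's requirement
// that the string "JSON" appear somewhere in the prompt.
function assertJsonKeyword(systemPrompt: string): void {
  if (!systemPrompt.includes("JSON")) {
    throw new Error('JSON mode prompts must contain the string "JSON".');
  }
}

const systemPrompt = "You are a helpful assistant who responds in JSON.";
assertJsonKeyword(systemPrompt); // passes
console.log("ok");
```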
tip
See a LangSmith trace of the above example [here](https://smith.langchain.com/public/3864aebb-5096-4b5f-b096-e54ddd1ec3d2/r).
Behind the scenes, TogetherAI uses the OpenAI SDK and an OpenAI-compatible API, with some caveats:
* Certain properties are not supported by the TogetherAI API, see [here](https://docs.together.ai/reference/chat-completions).
* * *
#### Help us out by providing feedback on this documentation page:
[
Previous
PromptLayer OpenAI
](/v0.1/docs/integrations/chat/prompt_layer_openai/)[
Next
WebLLM
](/v0.1/docs/integrations/chat/web_llm/)
* [Setup](#setup)
* [Tool calling & JSON mode](#tool-calling--json-mode)
* [Tool calling](#tool-calling)
* [JSON mode](#json-mode)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
WebLLM
======
Compatibility: Only available in web environments.
You can run LLMs directly in your web browser using LangChain's [WebLLM](https://webllm.mlc.ai) integration.
Setup
-----
You'll need to install the [WebLLM SDK](https://www.npmjs.com/package/@mlc-ai/web-llm) module to communicate with your local model.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install -S @mlc-ai/web-llm @langchain/community
yarn add @mlc-ai/web-llm @langchain/community
pnpm add @mlc-ai/web-llm @langchain/community
Usage
-----
Note that the first time a model is called, WebLLM will download the full weights for that model. This can be multiple gigabytes, and may not be possible for all end-users of your application depending on their internet connection and computer specs. While the browser will cache future invocations of that model, we recommend using the smallest possible model you can.
We also recommend using a [separate web worker](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers) when loading and invoking your models, so that the main thread is not blocked.
// Must be run in a web environment, e.g. a web worker
import { ChatWebLLM } from "@langchain/community/chat_models/webllm";
import { HumanMessage } from "@langchain/core/messages";

// Initialize the ChatWebLLM model with the model record and chat options.
// Note that if the appConfig field is set, the list of model records
// must include the selected model record for the engine.
//
// You can import a list of models available by default here:
// https://github.com/mlc-ai/web-llm/blob/main/src/config.ts
//
// Or by importing it via:
// import { prebuiltAppConfig } from "@mlc-ai/web-llm";
const model = new ChatWebLLM({
  model: "Phi2-q4f32_1",
  chatOptions: {
    temperature: 0.5,
  },
});

// Call the model with a message and await the response.
const response = await model.invoke([
  new HumanMessage({ content: "What is 1 + 1?" }),
]);

console.log(response);

/*
AIMessage {
  content: ' 2\n',
}
 */
#### API Reference:
* [ChatWebLLM](https://api.js.langchain.com/classes/langchain_community_chat_models_webllm.ChatWebLLM.html) from `@langchain/community/chat_models/webllm`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
Streaming is also supported.
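When streaming, each yielded chunk carries a piece of the reply in its `content` field, and joining those pieces reconstructs the full message. A minimal sketch of that accumulation pattern — the plain array below stands in for what `model.stream()` would yield, since actually streaming requires loading model weights in a browser:

```typescript
// Each streamed chunk carries part of the reply in `content`;
// concatenating the pieces reconstructs the full message text.
function concatChunkContent(chunks: { content: string }[]): string {
  return chunks.map((chunk) => chunk.content).join("");
}

// Stand-in for chunks yielded by `for await (const chunk of model.stream(...))`.
const fakeChunks = [{ content: "1 + 1" }, { content: " is" }, { content: " 2" }];
console.log(concatChunkContent(fakeChunks)); // "1 + 1 is 2"
```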
Example
-------
For a full end-to-end example, check out [this project](https://github.com/jacoblee93/fully-local-pdf-chatbot).
ChatYandexGPT
=============
LangChain.js supports calling [YandexGPT](https://cloud.yandex.com/en/services/yandexgpt) chat models.
Setup
-----
First, you should [create a service account](https://cloud.yandex.com/en/docs/iam/operations/sa/create) with the `ai.languageModels.user` role.
Next, you have two authentication options:
* [IAM token](https://cloud.yandex.com/en/docs/iam/operations/iam-token/create-for-sa). You can specify the token in a constructor parameter as `iam_token` or in an environment variable `YC_IAM_TOKEN`.
* [API key](https://cloud.yandex.com/en/docs/iam/operations/api-key/create). You can specify the key in a constructor parameter as `api_key` or in an environment variable `YC_API_KEY`.
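The choice between the two credentials can be sketched as a small precedence check. Note this helper is purely illustrative — the real `ChatYandexGPT` constructor reads these environment variables itself, and the function name and the IAM-token-first ordering are assumptions, not documented library behavior:

```typescript
// Illustrative only: pick whichever Yandex Cloud credential is available.
// The actual ChatYandexGPT class handles this internally.
type YandexAuth = { iamToken: string } | { apiKey: string };

function resolveYandexAuth(env: Record<string, string | undefined>): YandexAuth {
  if (env.YC_IAM_TOKEN) return { iamToken: env.YC_IAM_TOKEN };
  if (env.YC_API_KEY) return { apiKey: env.YC_API_KEY };
  throw new Error("Set YC_IAM_TOKEN or YC_API_KEY");
}

console.log(resolveYandexAuth({ YC_API_KEY: "secret" }));
```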
Usage
-----
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/yandex
yarn add @langchain/yandex
pnpm add @langchain/yandex
import { ChatYandexGPT } from "@langchain/yandex/chat_models";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const chat = new ChatYandexGPT();

const res = await chat.invoke([
  new SystemMessage(
    "You are a helpful assistant that translates English to French."
  ),
  new HumanMessage("I love programming."),
]);

console.log(res);

/*
AIMessage {
  lc_serializable: true,
  lc_kwargs: { content: "Je t'aime programmer.", additional_kwargs: {} },
  lc_namespace: [ 'langchain', 'schema' ],
  content: "Je t'aime programmer.",
  name: undefined,
  additional_kwargs: {}
}
 */
#### API Reference:
* [ChatYandexGPT](https://api.js.langchain.com/classes/langchain_yandex_chat_models.ChatYandexGPT.html) from `@langchain/yandex/chat_models`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
* [SystemMessage](https://api.js.langchain.com/classes/langchain_core_messages.SystemMessage.html) from `@langchain/core/messages`
html-to-text
============
When ingesting HTML documents for later retrieval, we are often interested only in the actual content of the webpage rather than its markup. Stripping HTML tags from documents with the HtmlToTextTransformer can result in more content-rich chunks, making retrieval more effective.
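To see why stripping helps, consider a toy version of the transformation. The function below is a deliberately naive regex-based sketch for illustration only — the actual transformer delegates to the html-to-text package, which handles nesting, scripts, links, and whitespace far more robustly:

```typescript
// Toy illustration of tag stripping -- NOT how HtmlToTextTransformer works
// internally (it delegates to the html-to-text package).
function naiveStripTags(html: string): string {
  return html
    .replace(/<[^>]*>/g, "") // drop tags, keeping their text content
    .replace(/\s+/g, " ")    // collapse leftover whitespace runs
    .trim();
}

console.log(naiveStripTags("<p>Hello <b>world</b>!</p>"));
// "Hello world!"
```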
Setup
-----
You'll need to install the [`html-to-text`](https://www.npmjs.com/package/html-to-text) npm package:
* npm
* Yarn
* pnpm
npm install html-to-text
yarn add html-to-text
pnpm add html-to-text
Though not required for the transformer by itself, the below usage examples require [`cheerio`](https://www.npmjs.com/package/cheerio) for scraping:
* npm
* Yarn
* pnpm
npm install cheerio
yarn add cheerio
pnpm add cheerio
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Usage
-----
The below example scrapes a Hacker News thread, splits it based on HTML tags to group chunks based on the semantic information from the tags, then extracts content from the individual chunks:
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { HtmlToTextTransformer } from "@langchain/community/document_transformers/html_to_text";

const loader = new CheerioWebBaseLoader(
  "https://news.ycombinator.com/item?id=34817881"
);
const docs = await loader.load();

const splitter = RecursiveCharacterTextSplitter.fromLanguage("html");
const transformer = new HtmlToTextTransformer();

const sequence = splitter.pipe(transformer);

const newDocuments = await sequence.invoke(docs);

console.log(newDocuments);

/*
  [
    Document {
      pageContent: 'Hacker News new | past | comments | ask | show | jobs | submit login What Lights\n' +
        'the Universe’s Standard Candles? (quantamagazine.org) 75 points by Amorymeltzer\n' +
        '5 months ago | hide | past | favorite | 6 comments delta_p_delta_x 5 months ago\n' +
        '| next [–] Astrophysical and cosmological simulations are often insightful.\n' +
        "They're also very cross-disciplinary; besides the obvious astrophysics, there's\n" +
        'networking and sysadmin, parallel computing and algorithm theory (so that the\n' +
        'simulation programs are actually fast but still accurate), systems design, and\n' +
        'even a bit of graphic design for the visualisations.Some of my favourite\n' +
        'simulation projects:- IllustrisTNG:',
      metadata: { source: 'https://news.ycombinator.com/item?id=34817881', loc: [Object] }
    },
    Document {
      pageContent: 'that the simulation programs are actually fast but still accurate), systems\n' +
        'design, and even a bit of graphic design for the visualisations.Some of my\n' +
        'favourite simulation projects:- IllustrisTNG: https://www.tng-project.org/-\n' +
        'SWIFT: https://swift.dur.ac.uk/- CO5BOLD:\n' +
        'https://www.astro.uu.se/~bf/co5bold_main.html (which produced these animations\n' +
        'of a red-giant star: https://www.astro.uu.se/~bf/movie/AGBmovie.html)-\n' +
        'AbacusSummit: https://abacussummit.readthedocs.io/en/latest/And I can add the\n' +
        'simulations in the article, too. froeb 5 months ago | parent | next [–]\n' +
        'Supernova simulations are especially interesting too. I have heard them\n' +
        'described as the only time in physics when all 4 of the fundamental forces are\n' +
        'important. The explosion can be quite finicky too. If I remember right, you\n' +
        "can't get supernova to explode",
      metadata: { source: 'https://news.ycombinator.com/item?id=34817881', loc: [Object] }
    },
    Document {
      pageContent: 'heard them described as the only time in physics when all 4 of the fundamental\n' +
        'forces are important. The explosion can be quite finicky too. If I remember\n' +
        "right, you can't get supernova to explode properly in 1D simulations, only in\n" +
        'higher dimensions. This was a mystery until the realization that turbulence is\n' +
        'necessary for supernova to trigger--there is no turbulent flow in 1D. andrewflnr\n' +
        "5 months ago | prev | next [–] Whoa. I didn't know the accretion theory of Ia\n" +
        'supernovae was dead, much less that it had been since 2011. andreareina 5 months\n' +
        'ago | prev | next [–] This seems to be the paper',
      metadata: { source: 'https://news.ycombinator.com/item?id=34817881', loc: [Object] }
    },
    Document {
      pageContent: 'andreareina 5 months ago | prev | next [–] This seems to be the paper\n' +
        'https://academic.oup.com/mnras/article/517/4/5260/6779709 andreareina 5 months\n' +
        "ago | prev [–] Wouldn't double detonation show up as variance in the brightness?\n" +
        'yencabulator 5 months ago | parent [–] Or widening of the peak. If one type Ia\n' +
        'supernova goes 1,2,3,2,1, the sum of two could go 1+0=1 2+1=3 3+2=5 2+3=5 1+2=3\n' +
        '0+1=1 Guidelines | FAQ | Lists |',
      metadata: { source: 'https://news.ycombinator.com/item?id=34817881', loc: [Object] }
    },
    Document {
      pageContent: 'the sum of two could go 1+0=1 2+1=3 3+2=5 2+3=5 1+2=3 0+1=1 Guidelines | FAQ |\n' +
        'Lists | API | Security | Legal | Apply to YC | Contact Search:',
      metadata: { source: 'https://news.ycombinator.com/item?id=34817881', loc: [Object] }
    }
  ]
*/
#### API Reference:
* [CheerioWebBaseLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_cheerio.CheerioWebBaseLoader.html) from `langchain/document_loaders/web/cheerio`
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
* [HtmlToTextTransformer](https://api.js.langchain.com/classes/langchain_community_document_transformers_html_to_text.HtmlToTextTransformer.html) from `@langchain/community/document_transformers/html_to_text`
Customization
-------------
You can pass the transformer any [arguments accepted by the `html-to-text` package](https://www.npmjs.com/package/html-to-text) to customize how it works.
@mozilla/readability
====================
When ingesting HTML documents for later retrieval, we are often interested only in the actual content of the webpage rather than its markup. Stripping HTML tags from documents with the MozillaReadabilityTransformer can result in more content-rich chunks, making retrieval more effective.
Setup
-----
You'll need to install the [`@mozilla/readability`](https://www.npmjs.com/package/@mozilla/readability) and [`jsdom`](https://www.npmjs.com/package/jsdom) npm packages:
* npm
* Yarn
* pnpm
npm install @mozilla/readability jsdom
yarn add @mozilla/readability jsdom
pnpm add @mozilla/readability jsdom
Though not required for the transformer by itself, the below usage examples require [`cheerio`](https://www.npmjs.com/package/cheerio) for scraping:
* npm
* Yarn
* pnpm
npm install cheerio
yarn add cheerio
pnpm add cheerio
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Usage
-----
The below example scrapes a Hacker News thread, splits it based on HTML tags to group chunks based on the semantic information from the tags, then extracts content from the individual chunks:
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";
import { MozillaReadabilityTransformer } from "@langchain/community/document_transformers/mozilla_readability";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const loader = new CheerioWebBaseLoader(
  "https://news.ycombinator.com/item?id=34817881"
);
const docs = await loader.load();

const splitter = RecursiveCharacterTextSplitter.fromLanguage("html");
const transformer = new MozillaReadabilityTransformer();

const sequence = splitter.pipe(transformer);

const newDocuments = await sequence.invoke(docs);

console.log(newDocuments);

/*
  [
    Document {
      pageContent: 'Hacker News new | past | comments | ask | show | jobs | submit login What Lights\n' +
        'the Universe’s Standard Candles? (quantamagazine.org) 75 points by Amorymeltzer\n' +
        '5 months ago | hide | past | favorite | 6 comments delta_p_delta_x 5 months ago\n' +
        '| next [–] Astrophysical and cosmological simulations are often insightful.\n' +
        "They're also very cross-disciplinary; besides the obvious astrophysics, there's\n" +
        'networking and sysadmin, parallel computing and algorithm theory (so that the\n' +
        'simulation programs are actually fast but still accurate), systems design, and\n' +
        'even a bit of graphic design for the visualisations.Some of my favourite\n' +
        'simulation projects:- IllustrisTNG:',
      metadata: { source: 'https://news.ycombinator.com/item?id=34817881', loc: [Object] }
    },
    Document {
      pageContent: 'that the simulation programs are actually fast but still accurate), systems\n' +
        'design, and even a bit of graphic design for the visualisations.Some of my\n' +
        'favourite simulation projects:- IllustrisTNG: https://www.tng-project.org/-\n' +
        'SWIFT: https://swift.dur.ac.uk/- CO5BOLD:\n' +
        'https://www.astro.uu.se/~bf/co5bold_main.html (which produced these animations\n' +
        'of a red-giant star: https://www.astro.uu.se/~bf/movie/AGBmovie.html)-\n' +
        'AbacusSummit: https://abacussummit.readthedocs.io/en/latest/And I can add the\n' +
        'simulations in the article, too. froeb 5 months ago | parent | next [–]\n' +
        'Supernova simulations are especially interesting too. I have heard them\n' +
        'described as the only time in physics when all 4 of the fundamental forces are\n' +
        'important. The explosion can be quite finicky too. If I remember right, you\n' +
        "can't get supernova to explode",
      metadata: { source: 'https://news.ycombinator.com/item?id=34817881', loc: [Object] }
    },
    Document {
      pageContent: 'heard them described as the only time in physics when all 4 of the fundamental\n' +
        'forces are important. The explosion can be quite finicky too. If I remember\n' +
        "right, you can't get supernova to explode properly in 1D simulations, only in\n" +
        'higher dimensions. This was a mystery until the realization that turbulence is\n' +
        'necessary for supernova to trigger--there is no turbulent flow in 1D. andrewflnr\n' +
        "5 months ago | prev | next [–] Whoa. I didn't know the accretion theory of Ia\n" +
        'supernovae was dead, much less that it had been since 2011. andreareina 5 months\n' +
        'ago | prev | next [–] This seems to be the paper',
      metadata: { source: 'https://news.ycombinator.com/item?id=34817881', loc: [Object] }
    },
    Document {
      pageContent: 'andreareina 5 months ago | prev | next [–] This seems to be the paper\n' +
        'https://academic.oup.com/mnras/article/517/4/5260/6779709 andreareina 5 months\n' +
        "ago | prev [–] Wouldn't double detonation show up as variance in the brightness?\n" +
        'yencabulator 5 months ago | parent [–] Or widening of the peak. If one type Ia\n' +
        'supernova goes 1,2,3,2,1, the sum of two could go 1+0=1 2+1=3 3+2=5 2+3=5 1+2=3\n' +
        '0+1=1 Guidelines | FAQ | Lists |',
      metadata: { source: 'https://news.ycombinator.com/item?id=34817881', loc: [Object] }
    },
    Document {
      pageContent: 'the sum of two could go 1+0=1 2+1=3 3+2=5 2+3=5 1+2=3 0+1=1 Guidelines | FAQ |\n' +
        'Lists | API | Security | Legal | Apply to YC | Contact Search:',
      metadata: { source: 'https://news.ycombinator.com/item?id=34817881', loc: [Object] }
    }
  ]
*/
#### API Reference:
* [CheerioWebBaseLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_cheerio.CheerioWebBaseLoader.html) from `langchain/document_loaders/web/cheerio`
* [MozillaReadabilityTransformer](https://api.js.langchain.com/classes/langchain_community_document_transformers_mozilla_readability.MozillaReadabilityTransformer.html) from `@langchain/community/document_transformers/mozilla_readability`
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
Customization
-------------
You can pass the transformer any [arguments accepted by the `@mozilla/readability` package](https://www.npmjs.com/package/@mozilla/readability) to customize how it works.
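For instance, you could raise Readability's minimum-content threshold. The option names below come from the `@mozilla/readability` package's documentation, and the values are illustrative — treat this as a sketch:

```typescript
// Option names are from the @mozilla/readability README; values here are
// illustrative. They are forwarded verbatim to the Readability constructor.
const readabilityOptions = {
  charThreshold: 500, // minimum number of characters an article must have
  keepClasses: false, // whether to preserve class attributes on output nodes
};

// const transformer = new MozillaReadabilityTransformer(readabilityOptions);
console.log(readabilityOptions);
```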
Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_transformers/openai_metadata_tagger/
OpenAI functions metadata tagger
================================
It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious.
The `MetadataTagger` document transformer automates this process by extracting metadata from each provided document according to a provided schema. It uses a configurable OpenAI Functions-powered chain under the hood, so if you pass a custom LLM instance, it must be an OpenAI model with functions support.
**Note:** This document transformer works best with complete documents, so it's best to run it first with whole documents before doing any other splitting or processing!
### Usage
For example, let's say you wanted to index a set of movie reviews. You could initialize the document transformer as follows:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { z } from "zod";
import { createMetadataTaggerFromZod } from "langchain/document_transformers/openai_functions";
import { ChatOpenAI } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

const zodSchema = z.object({
  movie_title: z.string(),
  critic: z.string(),
  tone: z.enum(["positive", "negative"]),
  rating: z
    .optional(z.number())
    .describe("The number of stars the critic rated the movie"),
});

const metadataTagger = createMetadataTaggerFromZod(zodSchema, {
  llm: new ChatOpenAI({ model: "gpt-3.5-turbo" }),
});

const documents = [
  new Document({
    pageContent:
      "Review of The Bee Movie\nBy Roger Ebert\nThis is the greatest movie ever made. 4 out of 5 stars.",
  }),
  new Document({
    pageContent:
      "Review of The Godfather\nBy Anonymous\n\nThis movie was super boring. 1 out of 5 stars.",
    metadata: { reliable: false },
  }),
];

const taggedDocuments = await metadataTagger.transformDocuments(documents);

console.log(taggedDocuments);
/*
  [
    Document {
      pageContent: 'Review of The Bee Movie\n' +
        'By Roger Ebert\n' +
        'This is the greatest movie ever made. 4 out of 5 stars.',
      metadata: {
        movie_title: 'The Bee Movie',
        critic: 'Roger Ebert',
        tone: 'positive',
        rating: 4
      }
    },
    Document {
      pageContent: 'Review of The Godfather\n' +
        'By Anonymous\n' +
        '\n' +
        'This movie was super boring. 1 out of 5 stars.',
      metadata: {
        movie_title: 'The Godfather',
        critic: 'Anonymous',
        tone: 'negative',
        rating: 1,
        reliable: false
      }
    }
  ]
*/
```
#### API Reference:
* [createMetadataTaggerFromZod](https://api.js.langchain.com/functions/langchain_document_transformers_openai_functions.createMetadataTaggerFromZod.html) from `langchain/document_transformers/openai_functions`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
There is an additional `createMetadataTagger` method that accepts a valid JSON Schema object as well.
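As a hedged sketch, the JSON Schema counterpart of the Zod schema above might look like the following — the field names mirror the example, but the exact shape `createMetadataTagger` expects should be checked against the API reference:

```typescript
// Rough JSON Schema counterpart of the Zod schema above, for use with
// createMetadataTagger. Illustrative only, not verified against the API.
const jsonSchema = {
  type: "object",
  properties: {
    movie_title: { type: "string" },
    critic: { type: "string" },
    tone: { type: "string", enum: ["positive", "negative"] },
    rating: {
      type: "number",
      description: "The number of stars the critic rated the movie",
    },
  },
  // `rating` stays optional, matching z.optional() in the Zod version.
  required: ["movie_title", "critic", "tone"],
};

console.log(Object.keys(jsonSchema.properties));
```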
### Customization
You can pass the underlying tagging chain the standard LLMChain arguments in the second options parameter. For example, if you wanted to ask the LLM to focus on specific details in the input documents, or to extract metadata in a certain style, you could pass in a custom prompt:
```typescript
import { z } from "zod";
import { createMetadataTaggerFromZod } from "langchain/document_transformers/openai_functions";
import { ChatOpenAI } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { PromptTemplate } from "@langchain/core/prompts";

const taggingChainTemplate = `Extract the desired information from the following passage.

Anonymous critics are actually Roger Ebert.

Passage:
{input}`;

const zodSchema = z.object({
  movie_title: z.string(),
  critic: z.string(),
  tone: z.enum(["positive", "negative"]),
  rating: z
    .optional(z.number())
    .describe("The number of stars the critic rated the movie"),
});

const metadataTagger = createMetadataTaggerFromZod(zodSchema, {
  llm: new ChatOpenAI({ model: "gpt-3.5-turbo" }),
  prompt: PromptTemplate.fromTemplate(taggingChainTemplate),
});

const documents = [
  new Document({
    pageContent:
      "Review of The Bee Movie\nBy Roger Ebert\nThis is the greatest movie ever made. 4 out of 5 stars.",
  }),
  new Document({
    pageContent:
      "Review of The Godfather\nBy Anonymous\n\nThis movie was super boring. 1 out of 5 stars.",
    metadata: { reliable: false },
  }),
];

const taggedDocuments = await metadataTagger.transformDocuments(documents);

console.log(taggedDocuments);
/*
  [
    Document {
      pageContent: 'Review of The Bee Movie\n...',
      metadata: { movie_title: 'The Bee Movie', critic: 'Roger Ebert', tone: 'positive', rating: 4 }
    },
    Document {
      pageContent: 'Review of The Godfather\n...',
      // Note: the custom prompt causes the anonymous critic to be tagged as Roger Ebert.
      metadata: { movie_title: 'The Godfather', critic: 'Roger Ebert', tone: 'negative', rating: 1, reliable: false }
    }
  ]
*/
```
#### API Reference:
* [createMetadataTaggerFromZod](https://api.js.langchain.com/functions/langchain_document_transformers_openai_functions.createMetadataTaggerFromZod.html) from `langchain/document_transformers/openai_functions`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/youtube/
YouTube transcripts
===================
This covers how to load YouTube transcripts into LangChain documents.
Setup
-----
You'll need to install the [youtube-transcript](https://www.npmjs.com/package/youtube-transcript) package and [youtubei.js](https://www.npmjs.com/package/youtubei.js) to extract metadata:
```bash
npm install youtube-transcript youtubei.js
# or
yarn add youtube-transcript youtubei.js
# or
pnpm add youtube-transcript youtubei.js
```
Usage
-----
You need to specify a link to the video as the `url`. You can also specify the transcript `language` as an [ISO 639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) code, and set the `addVideoInfo` flag to fetch video metadata.
```typescript
import { YoutubeLoader } from "langchain/document_loaders/web/youtube";

const loader = YoutubeLoader.createFromUrl("https://youtu.be/bZQun8Y4L2A", {
  language: "en",
  addVideoInfo: true,
});

const docs = await loader.load();

console.log(docs);
```
#### API Reference:
* [YoutubeLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_youtube.YoutubeLoader.html) from `langchain/document_loaders/web/youtube`
https://js.langchain.com/v0.1/docs/integrations/document_compressors/cohere_rerank/
Cohere Rerank
=============
Reranking documents can greatly improve any RAG application and document retrieval system.
At a high level, a rerank API is a language model which analyzes documents and reorders them based on their relevance to a given query.
Cohere offers an API for reranking documents. In this example we'll show you how to use it.
Setup
-----
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/cohere
# or
yarn add @langchain/cohere
# or
pnpm add @langchain/cohere
```
```typescript
import { CohereRerank } from "@langchain/cohere";
import { Document } from "@langchain/core/documents";

const query = "What is the capital of the United States?";

const docs = [
  new Document({
    pageContent:
      "Carson City is the capital city of the American state of Nevada. At the 2010 United States Census, Carson City had a population of 55,274.",
  }),
  new Document({
    pageContent:
      "The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean that are a political division controlled by the United States. Its capital is Saipan.",
  }),
  new Document({
    pageContent:
      "Charlotte Amalie is the capital and largest city of the United States Virgin Islands. It has about 20,000 people. The city is on the island of Saint Thomas.",
  }),
  new Document({
    pageContent:
      "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. The President of the USA and many major national government offices are in the territory. This makes it the political center of the United States of America.",
  }),
  new Document({
    pageContent:
      "Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states. The federal government (including the United States military) also uses capital punishment.",
  }),
];

const cohereRerank = new CohereRerank({
  apiKey: process.env.COHERE_API_KEY, // Default
  model: "rerank-english-v2.0", // Default
});

const rerankedDocuments = await cohereRerank.rerank(docs, query, {
  topN: 5,
});

console.log(rerankedDocuments);
/*
  [
    { index: 3, relevanceScore: 0.9871293 },
    { index: 1, relevanceScore: 0.29961726 },
    { index: 4, relevanceScore: 0.27542195 },
    { index: 0, relevanceScore: 0.08977329 },
    { index: 2, relevanceScore: 0.041462272 }
  ]
*/
```
#### API Reference:
* [CohereRerank](https://api.js.langchain.com/classes/langchain_cohere.CohereRerank.html) from `@langchain/cohere`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
Here, we can see the `.rerank()` method returns just the index of the documents (matching the indexes of the input documents) and their relevancy scores.
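Since the results only carry indexes, joining the scores back onto the source documents is a small map over the result array. A self-contained sketch with made-up scores (the `RerankResult` type name is ours, not the library's):

```typescript
// Shape of each .rerank() result: an index into the input array plus a score.
type RerankResult = { index: number; relevanceScore: number };

const pageContents = [
  "Carson City is the capital city of the American state of Nevada.",
  "Washington, D.C. is the capital of the United States.",
];

// Hypothetical rerank output, highest relevance first.
const results: RerankResult[] = [
  { index: 1, relevanceScore: 0.98 },
  { index: 0, relevanceScore: 0.09 },
];

// Join each result back to its source text, keeping the score alongside.
const joined = results.map(({ index, relevanceScore }) => ({
  pageContent: pageContents[index],
  relevanceScore,
}));

console.log(joined[0].pageContent); // the Washington, D.C. passage
```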
If we'd like to have the documents returned from the method itself, we can use the `.compressDocuments()` method.
```typescript
import { CohereRerank } from "@langchain/cohere";
import { Document } from "@langchain/core/documents";

const query = "What is the capital of the United States?";

// The same five documents as in the previous example.
const docs = [
  new Document({
    pageContent:
      "Carson City is the capital city of the American state of Nevada. At the 2010 United States Census, Carson City had a population of 55,274.",
  }),
  new Document({
    pageContent:
      "The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean that are a political division controlled by the United States. Its capital is Saipan.",
  }),
  new Document({
    pageContent:
      "Charlotte Amalie is the capital and largest city of the United States Virgin Islands. It has about 20,000 people. The city is on the island of Saint Thomas.",
  }),
  new Document({
    pageContent:
      "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. The President of the USA and many major national government offices are in the territory. This makes it the political center of the United States of America.",
  }),
  new Document({
    pageContent:
      "Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states. The federal government (including the United States military) also uses capital punishment.",
  }),
];

const cohereRerank = new CohereRerank({
  apiKey: process.env.COHERE_API_KEY, // Default
  topN: 3, // Default
  model: "rerank-english-v2.0", // Default
});

const rerankedDocuments = await cohereRerank.compressDocuments(docs, query);

console.log(rerankedDocuments);
/*
  [
    Document {
      pageContent: 'Washington, D.C. (also known as simply Washington or D.C., ...',
      metadata: { relevanceScore: 0.9871293 }
    },
    Document {
      pageContent: 'The Commonwealth of the Northern Mariana Islands ...',
      metadata: { relevanceScore: 0.29961726 }
    },
    Document {
      pageContent: 'Capital punishment (the death penalty) ...',
      metadata: { relevanceScore: 0.27542195 }
    }
  ]
*/
```
#### API Reference:
* [CohereRerank](https://api.js.langchain.com/classes/langchain_cohere.CohereRerank.html) from `@langchain/cohere`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
From the results, we can see it returned the top 3 documents, and assigned a `relevanceScore` to each.
As expected, the document with the highest `relevanceScore` is the one that references Washington, D.C., with a score of `98.7%`!
https://js.langchain.com/v0.1/docs/integrations/text_embedding/alibaba_tongyi/
Alibaba Tongyi
==============
The `AlibabaTongyiEmbeddings` class uses the Alibaba Tongyi API to generate embeddings for a given text.
Setup
-----
You'll need to sign up for an Alibaba API key and set it as an environment variable named `ALIBABA_API_KEY`.
Then, you'll need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
Usage
-----
```typescript
import { AlibabaTongyiEmbeddings } from "@langchain/community/embeddings/alibaba_tongyi";

const model = new AlibabaTongyiEmbeddings({});
const res = await model.embedQuery(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```
#### API Reference:
* [AlibabaTongyiEmbeddings](https://api.js.langchain.com/classes/langchain_community_embeddings_alibaba_tongyi.AlibabaTongyiEmbeddings.html) from `@langchain/community/embeddings/alibaba_tongyi`
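Once `embedQuery` returns a vector, a common next step is comparing it against other embeddings. A minimal, provider-agnostic sketch of cosine similarity over plain number arrays (this helper is our own, not part of the integration):

```typescript
// Cosine similarity between two embedding vectors of equal length.
// Returns a value in [-1, 1]; closer to 1 means more similar.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error("Vectors must have the same length");
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

The same function works with the output of any embeddings class on this page's sibling integrations, since they all return `number[]` vectors.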
* * *
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc.
Baidu Qianfan
=============
The `BaiduQianfanEmbeddings` class uses the Baidu Qianfan API to generate embeddings for a given text.
Setup
-----
Official Website: [https://cloud.baidu.com/doc/WENXINWORKSHOP/s/alj562vvu](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/alj562vvu)
An API key is required to use this embedding model. You can get one by registering at [https://cloud.baidu.com/doc/WENXINWORKSHOP/s/alj562vvu](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/alj562vvu).
Please set the acquired API key as an environment variable named `BAIDU_API_KEY`, and set your secret key as an environment variable named `BAIDU_SECRET_KEY`.
Then, you'll need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
Usage
-----
```typescript
import { BaiduQianfanEmbeddings } from "@langchain/community/embeddings/baidu_qianfan";

const embeddings = new BaiduQianfanEmbeddings();
const res = await embeddings.embedQuery(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```
#### API Reference:
* [BaiduQianfanEmbeddings](https://api.js.langchain.com/classes/langchain_community_embeddings_baidu_qianfan.BaiduQianfanEmbeddings.html) from `@langchain/community/embeddings/baidu_qianfan`
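Because this integration reads `BAIDU_API_KEY` and `BAIDU_SECRET_KEY` from the environment, it can help to fail fast with a clear message before instantiating the class. A small sketch (the `requireEnv` helper is our own, not part of LangChain):

```typescript
// Read a required environment variable, throwing a descriptive error
// if it is unset or empty.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value.length === 0) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example: check both variables up front before constructing the model.
// requireEnv("BAIDU_API_KEY");
// requireEnv("BAIDU_SECRET_KEY");
```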
* * *
Cloudflare Workers AI
=====================
If you're deploying your project in a Cloudflare worker, you can use Cloudflare's [built-in Workers AI embeddings](https://developers.cloudflare.com/workers-ai/) with LangChain.js.
Setup
-----
First, [follow the official docs](https://developers.cloudflare.com/workers-ai/get-started/workers-wrangler/) to set up your worker.
You'll also need to install the LangChain Cloudflare integration package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/cloudflare
# or
yarn add @langchain/cloudflare
# or
pnpm add @langchain/cloudflare
```
Usage
-----
Below is an example worker that uses Workers AI embeddings with a [Cloudflare Vectorize](/v0.1/docs/integrations/vectorstores/cloudflare_vectorize/) vectorstore.
note
If running locally, be sure to run wrangler as `npx wrangler dev --remote`!
```toml
name = "langchain-test"
main = "worker.js"
compatibility_date = "2024-01-10"

[[vectorize]]
binding = "VECTORIZE_INDEX"
index_name = "langchain-test"

[ai]
binding = "AI"
```
```typescript
// @ts-nocheck
import type {
  VectorizeIndex,
  Fetcher,
  Request,
} from "@cloudflare/workers-types";
import {
  CloudflareVectorizeStore,
  CloudflareWorkersAIEmbeddings,
} from "@langchain/cloudflare";

export interface Env {
  VECTORIZE_INDEX: VectorizeIndex;
  AI: Fetcher;
}

export default {
  async fetch(request: Request, env: Env) {
    const { pathname } = new URL(request.url);
    const embeddings = new CloudflareWorkersAIEmbeddings({
      binding: env.AI,
      model: "@cf/baai/bge-small-en-v1.5",
    });
    const store = new CloudflareVectorizeStore(embeddings, {
      index: env.VECTORIZE_INDEX,
    });
    if (pathname === "/") {
      const results = await store.similaritySearch("hello", 5);
      return Response.json(results);
    } else if (pathname === "/load") {
      // Upsertion by id is supported
      await store.addDocuments(
        [
          { pageContent: "hello", metadata: {} },
          { pageContent: "world", metadata: {} },
          { pageContent: "hi", metadata: {} },
        ],
        { ids: ["id1", "id2", "id3"] }
      );
      return Response.json({ success: true });
    } else if (pathname === "/clear") {
      await store.delete({ ids: ["id1", "id2", "id3"] });
      return Response.json({ success: true });
    }
    return Response.json({ error: "Not Found" }, { status: 404 });
  },
};
```
#### API Reference:
* [CloudflareVectorizeStore](https://api.js.langchain.com/classes/langchain_cloudflare.CloudflareVectorizeStore.html) from `@langchain/cloudflare`
* [CloudflareWorkersAIEmbeddings](https://api.js.langchain.com/classes/langchain_cloudflare.CloudflareWorkersAIEmbeddings.html) from `@langchain/cloudflare`
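The `/load` route above upserts by id, so re-running it overwrites the same three records instead of duplicating them. If you want ids that stay stable across runs without hard-coding them, one option is deriving an id from the document text. A small sketch (the FNV-1a hash here is our own choice, not something the Cloudflare integration requires):

```typescript
// Derive a stable string id from document content using the 32-bit
// FNV-1a hash. The same text always maps to the same id, so a load
// route that passes these ids to addDocuments upserts rather than
// duplicating when it is re-run.
function contentId(text: string): string {
  let hash = 0x811c9dc5; // FNV offset basis (32-bit)
  for (let i = 0; i < text.length; i += 1) {
    hash ^= text.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193); // FNV prime
  }
  // Convert to unsigned and render as hex.
  return (hash >>> 0).toString(16);
}
```

For example, `{ ids: docs.map((d) => contentId(d.pageContent)) }` could replace the hard-coded `["id1", "id2", "id3"]` list.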
* * *
Cohere
======
The `CohereEmbeddings` class uses the Cohere API to generate embeddings for a given text.
Usage
-----
```bash
npm install cohere-ai @langchain/cohere
# or
yarn add cohere-ai @langchain/cohere
# or
pnpm add cohere-ai @langchain/cohere
```
```typescript
import { CohereEmbeddings } from "@langchain/cohere";

/* Embed queries */
const embeddings = new CohereEmbeddings({
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.COHERE_API_KEY
  batchSize: 48, // Default value if omitted is 48. Max value is 96
});
const res = await embeddings.embedQuery("Hello world");
console.log(res);

/* Embed documents */
const documentRes = await embeddings.embedDocuments(["Hello world", "Bye bye"]);
console.log({ documentRes });
```
#### API Reference:
* [CohereEmbeddings](https://api.js.langchain.com/classes/langchain_cohere.CohereEmbeddings.html) from `@langchain/cohere`
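The `batchSize` option above controls how many texts are sent per request (default 48, max 96). If you need to pre-split a large corpus yourself, for example to interleave rate limiting or progress reporting between calls, a chunking helper like the following works (this helper is our own sketch, not part of `@langchain/cohere`):

```typescript
// Split an array into consecutive chunks of at most `size` elements.
// The last chunk may be shorter than `size`.
function chunk<T>(items: T[], size: number): T[][] {
  if (size < 1) {
    throw new Error("Chunk size must be at least 1");
  }
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

Each chunk can then be passed to `embedDocuments` in its own call, e.g. `for (const batch of chunk(texts, 96)) { await embeddings.embedDocuments(batch); }`.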
* * *
Google PaLM
===========
note
This integration does not support `embeddings-*` models. See [Google AI](/v0.1/docs/integrations/text_embedding/google_generativeai/) embeddings instead.
The [Google PaLM API](https://developers.generativeai.google/products/palm) can be integrated by first installing the required packages:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install google-auth-library @google-ai/generativelanguage @langchain/community
# or
yarn add google-auth-library @google-ai/generativelanguage @langchain/community
# or
pnpm add google-auth-library @google-ai/generativelanguage @langchain/community
```
Create an **API key** from [Google MakerSuite](https://makersuite.google.com/app/apikey). You can then set the key as `GOOGLE_PALM_API_KEY` environment variable or pass it as `apiKey` parameter while instantiating the model.
```typescript
import { GooglePaLMEmbeddings } from "@langchain/community/embeddings/googlepalm";

const model = new GooglePaLMEmbeddings({
  apiKey: "<YOUR API KEY>", // or set it in environment variable as `GOOGLE_PALM_API_KEY`
  model: "models/embedding-gecko-001", // OPTIONAL
});

/* Embed queries */
const res = await model.embedQuery(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });

/* Embed documents */
const documentRes = await model.embedDocuments(["Hello world", "Bye bye"]);
console.log({ documentRes });
```
#### API Reference:
* [GooglePaLMEmbeddings](https://api.js.langchain.com/classes/langchain_community_embeddings_googlepalm.GooglePaLMEmbeddings.html) from `@langchain/community/embeddings/googlepalm`
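Pairing `embedDocuments` with `embedQuery` gives you everything needed for a toy nearest-neighbor search. A minimal, provider-agnostic sketch over plain number arrays (the short vectors in the usage note below stand in for real embeddings):

```typescript
// Rank candidate vectors by dot product against a query vector and
// return the indices of the top-k highest-scoring candidates.
function topK(query: number[], candidates: number[][], k: number): number[] {
  const scored = candidates.map((vec, index) => {
    let score = 0;
    for (let i = 0; i < query.length; i += 1) {
      score += query[i] * vec[i];
    }
    return { index, score };
  });
  scored.sort((a, b) => b.score - a.score);
  return scored.slice(0, k).map((s) => s.index);
}
```

In practice, `query` would be the output of `embedQuery` and `candidates` the output of `embedDocuments`; the returned indices point back into the original document array.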
* * *
#### Help us out by providing feedback on this documentation page:
[
Previous
Google AI
](/v0.1/docs/integrations/text_embedding/google_generativeai/)[
Next
Google Vertex AI
](/v0.1/docs/integrations/text_embedding/google_vertex_ai/)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/integrations/text_embedding/google_generativeai/ | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
Google Generative AI
====================
You can access Google's generative AI embeddings models through the `@langchain/google-genai` integration package.
Get an API key here: [https://ai.google.dev/tutorials/setup](https://ai.google.dev/tutorials/setup)
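The integration reads this key from the `GOOGLE_API_KEY` environment variable, so a typical shell setup looks like this (the key value below is a placeholder):

```shell
# Placeholder value — substitute the API key you generated above.
export GOOGLE_API_KEY="your-google-api-key"
```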
You'll need to install the `@langchain/google-genai` package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/google-genai
yarn add @langchain/google-genai
pnpm add @langchain/google-genai
Usage
-----
    import { GoogleGenerativeAIEmbeddings } from "@langchain/google-genai";
    import { TaskType } from "@google/generative-ai";

    /*
     * Before running this, you should make sure you have created a
     * Google Cloud Project that has `generativelanguage` API enabled.
     *
     * You will also need to generate an API key and set
     * an environment variable GOOGLE_API_KEY
     */
    const embeddings = new GoogleGenerativeAIEmbeddings({
      model: "embedding-001", // 768 dimensions
      taskType: TaskType.RETRIEVAL_DOCUMENT,
      title: "Document title",
    });

    const res = await embeddings.embedQuery("OK Google");
    console.log(res, res.length);
    /*
      [
        0.010467986, -0.052334797, -0.05164676, -0.0092885755, 0.037551474,
        0.007278041, -0.0014511136, -0.0002727135, -0.01205141, -0.028824795,
        0.022447161, 0.032513272, -0.0075029004, 0.013371749, 0.03725578,
        -0.0179886, -0.032127254, -0.019804858, -0.035530213, -0.057539217,
        0.030938378, 0.022367297, -0.024294581, 0.011045744, 0.0026335048,
        -0.018090524, 0.0066266404, -0.05072178, -0.025432976, 0.04673682,
        -0.044976745, 0.009511519, -0.030653704, 0.0066106077, -0.03870159,
        -0.04239313, 0.016969211, -0.015911, 0.020452755, 0.033449557,
        -0.002724189, -0.049285132, -0.016055783, -0.0016474632, 0.013622627,
        -0.012853559, -0.00383113, 0.0047683385, 0.029007262, -0.082496256,
        0.055966448, 0.011457588, 0.04426033, -0.043971397, 0.029413547,
        0.012740723, 0.03243298, -0.005483601, -0.01973574, -0.027495336,
        0.0031939305, 0.02392931, -0.011409592, 0.053490978, -0.03130516,
        -0.037364446, -0.028803863, 0.019082755, -0.00075289875, 0.015987953,
        0.005136402, -0.045040093, 0.051010687, -0.06252348, -0.09334517,
        -0.11461444, -0.007226655, 0.034570504, 0.017628446, 0.02613834,
        -0.0043784343, -0.022333296, -0.053109482, -0.018441308, -0.10350664,
        0.048912525, -0.042917475, -0.0014399975, 0.023028672, 0.00041137074,
        0.019345555, -0.023254089, 0.060004912, -0.07684076, -0.04034909,
        0.05221485, -0.015773885, -0.029030964, 0.02586164, -0.0401004,
        ... 668 more items
      ]
    */
#### API Reference:
* [GoogleGenerativeAIEmbeddings](https://api.js.langchain.com/classes/langchain_google_genai.GoogleGenerativeAIEmbeddings.html) from `@langchain/google-genai`
Source: https://js.langchain.com/v0.1/docs/integrations/text_embedding/google_vertex_ai/
Google Vertex AI
================
The `GoogleVertexAIEmbeddings` class uses Google's Vertex AI PaLM models to generate embeddings for a given text.
The Vertex AI implementation is meant to be used in Node.js and not directly in a browser, since it requires a service account to use.
Before running this code, you should make sure the Vertex AI API is enabled for the relevant project in your Google Cloud dashboard and that you've authenticated to Google Cloud using one of these methods:
* You are logged into an account (using `gcloud auth application-default login`) that has access to that project.
* You are running on a machine using a service account that has access to the project.
* You have downloaded the credentials for a service account that has access to the project and set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of this file.
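For the third option, the environment variable simply points at the downloaded key file; for example (the path below is a placeholder):

```shell
# Placeholder path — substitute the location of your downloaded service account key.
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
```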
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install google-auth-library @langchain/community
yarn add google-auth-library @langchain/community
pnpm add google-auth-library @langchain/community
    import { GoogleVertexAIEmbeddings } from "@langchain/community/embeddings/googlevertexai";

    export const run = async () => {
      const model = new GoogleVertexAIEmbeddings();
      const res = await model.embedQuery(
        "What would be a good company name for a company that makes colorful socks?"
      );
      console.log({ res });
    };
#### API Reference:
* [GoogleVertexAIEmbeddings](https://api.js.langchain.com/classes/langchain_community_embeddings_googlevertexai.GoogleVertexAIEmbeddings.html) from `@langchain/community/embeddings/googlevertexai`
**Note:** The default Google Vertex AI embeddings model, `textembedding-gecko`, has a different number of dimensions than OpenAI's `text-embedding-ada-002` model and may not be supported by all vector store providers.
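Because of such dimension mismatches, it can be worth validating vectors before writing them to a store. A minimal, store-agnostic sketch — the helper names here are hypothetical, not part of LangChain:

```typescript
// Hypothetical helpers — not part of LangChain. Validate embedding
// dimensionality before upserting, and compare two embeddings directly.
function assertDimensions(vectors: number[][], expected: number): void {
  for (const v of vectors) {
    if (v.length !== expected) {
      throw new Error(`Expected ${expected} dimensions, got ${v.length}`);
    }
  }
}

function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("Dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-dimensional vectors; real embeddings would be much longer.
assertDimensions(
  [
    [1, 0, 0],
    [0, 1, 0],
  ],
  3
);
console.log(cosineSimilarity([1, 0, 0], [1, 0, 0])); // 1
```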
Source: https://js.langchain.com/v0.1/docs/integrations/text_embedding/gradient_ai/
Gradient AI
===========
The `GradientEmbeddings` class uses the Gradient AI API to generate embeddings for a given text.
Setup
-----
You'll need to install the official Gradient Node SDK as a peer dependency:
* npm
* Yarn
* pnpm
npm i @gradientai/nodejs-sdk
yarn add @gradientai/nodejs-sdk
pnpm add @gradientai/nodejs-sdk
You will need to set the following environment variables to use the Gradient AI API.

    export GRADIENT_ACCESS_TOKEN=<YOUR_ACCESS_TOKEN>
    export GRADIENT_WORKSPACE_ID=<YOUR_WORKSPACE_ID>
Alternatively, these can be set during class instantiation as `gradientAccessKey` and `workspaceId` respectively. For example:
    const model = new GradientEmbeddings({
      gradientAccessKey: "My secret Access Token",
      workspaceId: "My secret workspace id",
    });
Usage
-----
    import { GradientEmbeddings } from "@langchain/community/embeddings/gradient_ai";

    const model = new GradientEmbeddings({});
    const res = await model.embedQuery(
      "What would be a good company name for a company that makes colorful socks?"
    );
    console.log({ res });
#### API Reference:
* [GradientEmbeddings](https://api.js.langchain.com/classes/langchain_community_embeddings_gradient_ai.GradientEmbeddings.html) from `@langchain/community/embeddings/gradient_ai`
Source: https://js.langchain.com/v0.1/docs/integrations/text_embedding/hugging_face_inference/
HuggingFace Inference
=====================
This Embeddings integration uses the HuggingFace Inference API to generate embeddings for a given text, using the `sentence-transformers/distilbert-base-nli-mean-tokens` model by default. You can pass a different model name to the constructor to use a different model.
Setup
-----
You'll first need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package and the required peer dependency:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/community @huggingface/inference@2
yarn add @langchain/community @huggingface/inference@2
pnpm add @langchain/community @huggingface/inference@2
Usage
-----
    import { HuggingFaceInferenceEmbeddings } from "@langchain/community/embeddings/hf";

    const embeddings = new HuggingFaceInferenceEmbeddings({
      apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.HUGGINGFACEHUB_API_KEY
    });
Source: https://js.langchain.com/v0.1/docs/integrations/text_embedding/llama_cpp/
Llama CPP
=========
Compatibility
Only available on Node.js.
This module is based on the [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) Node.js bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp), allowing you to work with a locally running LLM. This allows you to work with a much smaller quantized model capable of running on a laptop environment, ideal for testing and scratch padding ideas without running up a bill!
Setup[](#setup "Direct link to Setup")
---------------------------------------
You'll need to install the [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) module to communicate with your local model.
* npm
* Yarn
* pnpm
npm install -S node-llama-cpp
yarn add node-llama-cpp
pnpm add node-llama-cpp
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/community`
* Yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`
You will also need a local Llama 2 model (or a model supported by [node-llama-cpp](https://github.com/withcatai/node-llama-cpp)). You will need to pass the path to this model to the LlamaCpp module as a part of the parameters (see example).
Out-of-the-box `node-llama-cpp` is tuned for running on a MacOS platform with support for the Metal GPU of Apple M-series of processors. If you need to turn this off or need support for the CUDA architecture then refer to the documentation at [node-llama-cpp](https://withcatai.github.io/node-llama-cpp/).
For advice on getting and preparing `llama2` see the documentation for the LLM version of this module.
A note to LangChain.js contributors: if you want to run the tests associated with this module you will need to put the path to your local model in the environment variable `LLAMA_PATH`.
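If you want to follow the same convention as the test suite, you can resolve the model path from the `LLAMA_PATH` environment variable. A minimal sketch, assuming nothing beyond the environment variable itself — the helper name is illustrative and not part of the library:

```typescript
// Hedged sketch: resolve the model path from an environment map (pass
// process.env in Node.js), falling back to an explicitly supplied path.
function resolveModelPath(
  env: Record<string, string | undefined>,
  fallback?: string
): string {
  const modelPath = env.LLAMA_PATH ?? fallback;
  if (!modelPath) {
    throw new Error("Set LLAMA_PATH or pass an explicit model path");
  }
  return modelPath;
}

// With LLAMA_PATH set, the environment wins over the fallback.
console.log(resolveModelPath({ LLAMA_PATH: "/models/llama2.gguf" }, "/tmp/x"));
// → "/models/llama2.gguf"
```

In an application you would call `resolveModelPath(process.env)` and pass the result as `modelPath` when constructing the embeddings class.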
Usage[](#usage "Direct link to Usage")
---------------------------------------
### Basic use[](#basic-use "Direct link to Basic use")
We need to provide a path to our local Llama 2 model. Note that the `embeddings` property is always set to `true` by this module.
```typescript
import { LlamaCppEmbeddings } from "@langchain/community/embeddings/llama_cpp";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const embeddings = new LlamaCppEmbeddings({
  modelPath: llamaPath,
});

const res = await embeddings.embedQuery("Hello Llama!");
console.log(res);
/*
  [ 15043, 365, 29880, 3304, 29991 ]
*/
```
#### API Reference:
* [LlamaCppEmbeddings](https://api.js.langchain.com/classes/langchain_community_embeddings_llama_cpp.LlamaCppEmbeddings.html) from `@langchain/community/embeddings/llama_cpp`
### Document embedding[](#document-embedding "Direct link to Document embedding")
```typescript
import { LlamaCppEmbeddings } from "@langchain/community/embeddings/llama_cpp";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const documents = ["Hello World!", "Bye Bye!"];

const embeddings = new LlamaCppEmbeddings({
  modelPath: llamaPath,
});

const res = await embeddings.embedDocuments(documents);
console.log(res);
/*
  [ [ 15043, 2787, 29991 ], [ 2648, 29872, 2648, 29872, 29991 ] ]
*/
```
#### API Reference:
* [LlamaCppEmbeddings](https://api.js.langchain.com/classes/langchain_community_embeddings_llama_cpp.LlamaCppEmbeddings.html) from `@langchain/community/embeddings/llama_cpp`
* * *
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/integrations/text_embedding/mistralai/
Mistral AI
==========
The `MistralAIEmbeddings` class uses the Mistral AI API to generate embeddings for a given text.
Setup[](#setup "Direct link to Setup")
---------------------------------------
In order to use the Mistral API you'll need an API key. You can sign up for a Mistral account and create an API key [here](https://console.mistral.ai/).
You'll first need to install the [`@langchain/mistralai`](https://www.npmjs.com/package/@langchain/mistralai) package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/mistralai`
* Yarn: `yarn add @langchain/mistralai`
* pnpm: `pnpm add @langchain/mistralai`
Usage[](#usage "Direct link to Usage")
---------------------------------------
```typescript
import { MistralAIEmbeddings } from "@langchain/mistralai";

/* Embed queries */
const embeddings = new MistralAIEmbeddings({
  apiKey: process.env.MISTRAL_API_KEY,
});
const res = await embeddings.embedQuery("Hello world");
console.log(res);

/* Embed documents */
const documentRes = await embeddings.embedDocuments(["Hello world", "Bye bye"]);
console.log({ documentRes });
```
#### API Reference:
* [MistralAIEmbeddings](https://api.js.langchain.com/classes/langchain_mistralai.MistralAIEmbeddings.html) from `@langchain/mistralai`
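Both `embedQuery` and `embedDocuments` return plain arrays of numbers, so you can rank documents against a query with cosine similarity. A minimal sketch — the helper below is illustrative, not part of `@langchain/mistralai`:

```typescript
// Hedged sketch: cosine similarity between two embedding vectors of equal
// length, as returned by embedQuery / embedDocuments.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical direction → similarity 1; orthogonal vectors → 0.
console.log(cosineSimilarity([1, 0], [2, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 3])); // 0
```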
* * *
https://js.langchain.com/v0.1/docs/integrations/text_embedding/minimax/
Minimax
=======
The `MinimaxEmbeddings` class uses the Minimax API to generate embeddings for a given text.
Setup
=====
To use the Minimax models, you'll need a [Minimax account](https://api.minimax.chat), an [API key](https://api.minimax.chat/user-center/basic-information/interface-key), and a [Group ID](https://api.minimax.chat/user-center/basic-information).
Usage
=====
```typescript
import { MinimaxEmbeddings } from "langchain/embeddings/minimax";

export const run = async () => {
  /* Embed queries */
  const embeddings = new MinimaxEmbeddings();
  const res = await embeddings.embedQuery("Hello world");
  console.log(res);

  /* Embed documents */
  const documentRes = await embeddings.embedDocuments([
    "Hello world",
    "Bye bye",
  ]);
  console.log({ documentRes });
};
```
* * *
https://js.langchain.com/v0.1/docs/integrations/text_embedding/ollama/
Ollama
======
The `OllamaEmbeddings` class uses the `/api/embeddings` route of a locally hosted [Ollama](https://ollama.ai) server to generate embeddings for given texts.
Setup
=====
Follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/community`
* Yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`
Usage
=====
Basic usage:
```typescript
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";

const embeddings = new OllamaEmbeddings({
  model: "llama2", // default value
  baseUrl: "http://localhost:11434", // default value
});
```
Ollama [model parameters](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values) are also supported:
```typescript
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";

const embeddings = new OllamaEmbeddings({
  model: "llama2", // default value
  baseUrl: "http://localhost:11434", // default value
  requestOptions: {
    useMMap: true, // use_mmap 1
    numThread: 6, // num_thread 6
    numGpu: 1, // num_gpu 1
  },
});
```
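As the comments above suggest, each camelCase request option corresponds to a snake_case Ollama model parameter. A minimal sketch of that translation, assuming the three mappings shown here — the helper is illustrative, not part of the library:

```typescript
// Hedged sketch: map camelCase request options onto the snake_case
// parameter names Ollama expects (mapping assumed from the docs above).
const ollamaParamNames: Record<string, string> = {
  useMMap: "use_mmap",
  numThread: "num_thread",
  numGpu: "num_gpu",
};

function toOllamaParams(
  options: Record<string, number | boolean>
): Record<string, number | boolean> {
  const out: Record<string, number | boolean> = {};
  for (const [key, value] of Object.entries(options)) {
    // Fall back to the original key for options without a known mapping.
    out[ollamaParamNames[key] ?? key] = value;
  }
  return out;
}

console.log(toOllamaParams({ useMMap: true, numThread: 6, numGpu: 1 }));
// → { use_mmap: true, num_thread: 6, num_gpu: 1 }
```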
Example usage:
==============
```typescript
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";

const embeddings = new OllamaEmbeddings({
  model: "llama2", // default value
  baseUrl: "http://localhost:11434", // default value
  requestOptions: {
    useMMap: true,
    numThread: 6,
    numGpu: 1,
  },
});

const documents = ["Hello World!", "Bye Bye"];

const documentEmbeddings = await embeddings.embedDocuments(documents);
console.log(documentEmbeddings);
```
* * *
https://js.langchain.com/v0.1/docs/integrations/text_embedding/nomic/
Nomic
=====
The `NomicEmbeddings` class uses the Nomic AI API to generate embeddings for a given text.
Setup[](#setup "Direct link to Setup")
---------------------------------------
In order to use the Nomic API you'll need an API key. You can sign up for a Nomic account and create an API key [here](https://atlas.nomic.ai/).
You'll first need to install the [`@langchain/nomic`](https://www.npmjs.com/package/@langchain/nomic) package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/nomic`
* Yarn: `yarn add @langchain/nomic`
* pnpm: `pnpm add @langchain/nomic`
Usage[](#usage "Direct link to Usage")
---------------------------------------
```typescript
import { NomicEmbeddings } from "@langchain/nomic";

/* Embed queries */
const nomicEmbeddings = new NomicEmbeddings();
const res = await nomicEmbeddings.embedQuery("Hello world");
console.log(res);

/* Embed documents */
const documentRes = await nomicEmbeddings.embedDocuments([
  "Hello world",
  "Bye bye",
]);
console.log(documentRes);
```
#### API Reference:
* [NomicEmbeddings](https://api.js.langchain.com/classes/langchain_nomic.NomicEmbeddings.html) from `@langchain/nomic`
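When embedding a large corpus, it can help to split the input list into fixed-size batches before calling `embedDocuments`, since hosted embedding APIs commonly cap the number of texts per request. A minimal, generic sketch — the batch size and helper are illustrative, not taken from `@langchain/nomic`:

```typescript
// Hedged sketch: split items into consecutive batches of at most `size`.
function batch<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

console.log(batch(["a", "b", "c"], 2));
// → [ [ "a", "b" ], [ "c" ] ]
```

Each batch can then be passed to `embedDocuments` in turn and the resulting vectors concatenated.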
* * *
https://js.langchain.com/v0.1/docs/integrations/text_embedding/premai/ | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
[Skip to main content](#__docusaurus_skipToContent_fallback)
LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/).
[
![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png)
](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com)
[More](#)
* [People](/v0.1/docs/people/)
* [Community](/v0.1/docs/community/)
* [Tutorials](/v0.1/docs/additional_resources/tutorials/)
* [Contributing](/v0.1/docs/contributing/)
[v0.1](#)
* [v0.2](https://js.langchain.com/v0.2/docs/introduction)
* [v0.1](/v0.1/docs/get_started/introduction/)
[🦜🔗](#)
* [LangSmith](https://smith.langchain.com)
* [LangSmith Docs](https://docs.smith.langchain.com)
* [LangChain Hub](https://smith.langchain.com/hub)
* [LangServe](https://github.com/langchain-ai/langserve)
* [Python Docs](https://python.langchain.com/)
[Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs)
Search
* [Providers](/v0.1/docs/integrations/platforms/)
* [Providers](/v0.1/docs/integrations/platforms/)
* [Anthropic](/v0.1/docs/integrations/platforms/anthropic/)
* [AWS](/v0.1/docs/integrations/platforms/aws/)
* [Google](/v0.1/docs/integrations/platforms/google/)
* [Microsoft](/v0.1/docs/integrations/platforms/microsoft/)
* [OpenAI](/v0.1/docs/integrations/platforms/openai/)
* [Components](/v0.1/docs/integrations/components/)
* [LLMs](/v0.1/docs/integrations/llms/)
* [Chat models](/v0.1/docs/integrations/chat/)
* [Document loaders](/v0.1/docs/integrations/document_loaders/)
* [Document transformers](/v0.1/docs/integrations/document_transformers/)
* [Document compressors](/v0.1/docs/integrations/document_compressors/)
* [Text embedding models](/v0.1/docs/integrations/text_embedding/)
* [Alibaba Tongyi](/v0.1/docs/integrations/text_embedding/alibaba_tongyi/)
* [Azure OpenAI](/v0.1/docs/integrations/text_embedding/azure_openai/)
* [Baidu Qianfan](/v0.1/docs/integrations/text_embedding/baidu_qianfan/)
* [Bedrock](/v0.1/docs/integrations/text_embedding/bedrock/)
* [Cloudflare Workers AI](/v0.1/docs/integrations/text_embedding/cloudflare_ai/)
* [Cohere](/v0.1/docs/integrations/text_embedding/cohere/)
* [Fireworks](/v0.1/docs/integrations/text_embedding/fireworks/)
* [Google AI](/v0.1/docs/integrations/text_embedding/google_generativeai/)
* [Google PaLM](/v0.1/docs/integrations/text_embedding/google_palm/)
* [Google Vertex AI](/v0.1/docs/integrations/text_embedding/google_vertex_ai/)
* [Gradient AI](/v0.1/docs/integrations/text_embedding/gradient_ai/)
* [HuggingFace Inference](/v0.1/docs/integrations/text_embedding/hugging_face_inference/)
* [Llama CPP](/v0.1/docs/integrations/text_embedding/llama_cpp/)
* [Minimax](/v0.1/docs/integrations/text_embedding/minimax/)
* [Mistral AI](/v0.1/docs/integrations/text_embedding/mistralai/)
* [Nomic](/v0.1/docs/integrations/text_embedding/nomic/)
* [Ollama](/v0.1/docs/integrations/text_embedding/ollama/)
* [OpenAI](/v0.1/docs/integrations/text_embedding/openai/)
* [Prem AI](/v0.1/docs/integrations/text_embedding/premai/)
* [TensorFlow](/v0.1/docs/integrations/text_embedding/tensorflow/)
* [Together AI](/v0.1/docs/integrations/text_embedding/togetherai/)
* [HuggingFace Transformers](/v0.1/docs/integrations/text_embedding/transformers/)
* [Voyage AI](/v0.1/docs/integrations/text_embedding/voyageai/)
* [ZhipuAI](/v0.1/docs/integrations/text_embedding/zhipuai/)
* [Vector stores](/v0.1/docs/integrations/vectorstores/)
* [Retrievers](/v0.1/docs/integrations/retrievers/)
* [Tools](/v0.1/docs/integrations/tools/)
* [Agents and toolkits](/v0.1/docs/integrations/toolkits/)
* [Chat Memory](/v0.1/docs/integrations/chat_memory/)
* [Stores](/v0.1/docs/integrations/stores/)
Prem AI
=======
The `PremEmbeddings` class uses the Prem AI API to generate embeddings for a given text.
Setup
-----
In order to use the Prem API you'll need an API key. You can sign up for a Prem account and create an API key [here](https://app.premai.io/accounts/signup/).
You'll first need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:
**Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
Usage
-----
```typescript
import { PremEmbeddings } from "@langchain/community/embeddings/premai";

const embeddings = new PremEmbeddings({
  // In Node.js defaults to process.env.PREM_API_KEY
  apiKey: "YOUR-API-KEY",
  // In Node.js defaults to process.env.PREM_PROJECT_ID
  project_id: "YOUR-PROJECT_ID",
  // The model to generate the embeddings
  model: "@cf/baai/bge-small-en-v1.5",
});

const res = await embeddings.embedQuery(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```
#### API Reference:
* [PremEmbeddings](https://api.js.langchain.com/classes/langchain_community_embeddings_premai.PremEmbeddings.html) from `@langchain/community/embeddings/premai`
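The values returned by `embedQuery` and `embedDocuments` are plain `number[]` vectors, so you can compare them directly. Below is a minimal cosine-similarity helper — generic utility code, not part of the Prem integration:

```typescript
// Cosine similarity between two embedding vectors, e.g. the number arrays
// returned by embedQuery. Generic utility code, not a LangChain/Prem API.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("vector length mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [0.6, 0.8])); // ~0.6
```

A score near 1 means the two texts point in nearly the same direction in embedding space (semantically similar); a score near 0 means they are unrelated.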
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc.

https://js.langchain.com/v0.1/docs/integrations/text_embedding/tensorflow/
TensorFlow
==========
This embeddings integration runs entirely in your browser or Node.js environment, using [TensorFlow.js](https://www.tensorflow.org/js). This means that your data isn't sent to any third party, and you don't need to sign up for an API key. However, it does require more memory and processing power than the other integrations.
```bash
npm install @tensorflow/tfjs-core@3.6.0 @tensorflow/tfjs-converter@3.6.0 @tensorflow-models/universal-sentence-encoder@1.3.3 @tensorflow/tfjs-backend-cpu
# or
yarn add @tensorflow/tfjs-core@3.6.0 @tensorflow/tfjs-converter@3.6.0 @tensorflow-models/universal-sentence-encoder@1.3.3 @tensorflow/tfjs-backend-cpu
# or
pnpm add @tensorflow/tfjs-core@3.6.0 @tensorflow/tfjs-converter@3.6.0 @tensorflow-models/universal-sentence-encoder@1.3.3 @tensorflow/tfjs-backend-cpu
```
```typescript
import "@tensorflow/tfjs-backend-cpu";
import { TensorFlowEmbeddings } from "langchain/embeddings/tensorflow";

const embeddings = new TensorFlowEmbeddings();
```
This example uses the CPU backend, which works in any JS environment. However, you can use any of the backends supported by TensorFlow.js, including the GPU and WebAssembly backends, which are significantly faster. For Node.js you can use the `@tensorflow/tfjs-node` package, and for the browser you can use the `@tensorflow/tfjs-backend-webgl` package. See the [TensorFlow.js documentation](https://www.tensorflow.org/js/guide/platform_environment) for more information.
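As a sketch of the Node.js backend swap described above — assuming you've installed `@tensorflow/tfjs-node` alongside the packages listed earlier — the only code change is the side-effect import that registers the backend:

```typescript
// Node.js: register the native TensorFlow backend instead of the pure-JS CPU backend.
// Requires `npm install @tensorflow/tfjs-node` first.
import "@tensorflow/tfjs-node";
// Browser alternative: register the WebGL backend instead.
// import "@tensorflow/tfjs-backend-webgl";
import { TensorFlowEmbeddings } from "langchain/embeddings/tensorflow";

const embeddings = new TensorFlowEmbeddings();
```

This is a configuration sketch only; the rest of the embedding calls are unchanged.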
https://js.langchain.com/v0.1/docs/integrations/text_embedding/transformers/
HuggingFace Transformers
========================
The `TransformerEmbeddings` class uses the [Transformers.js](https://huggingface.co/docs/transformers.js/index) package to generate embeddings for a given text.
It runs locally and even works directly in the browser, allowing you to create web apps with built-in embeddings.
Setup
-----
You'll need to install the [@xenova/transformers](https://www.npmjs.com/package/@xenova/transformers) package as a peer dependency:
```bash
npm install @xenova/transformers
# or
yarn add @xenova/transformers
# or
pnpm add @xenova/transformers
```
**Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
Example
-------
Note that if you're running in a browser context, you'll likely want to put all inference-related code in a web worker to avoid blocking the main thread.
See [this guide](https://huggingface.co/docs/transformers.js/tutorials/next) and the other resources in the Transformers.js docs for an idea of how to set up your project.
```typescript
import { HuggingFaceTransformersEmbeddings } from "@langchain/community/embeddings/hf_transformers";

const model = new HuggingFaceTransformersEmbeddings({
  model: "Xenova/all-MiniLM-L6-v2",
});

/* Embed queries */
const res = await model.embedQuery(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });

/* Embed documents */
const documentRes = await model.embedDocuments(["Hello world", "Bye bye"]);
console.log({ documentRes });
```
#### API Reference:
* [HuggingFaceTransformersEmbeddings](https://api.js.langchain.com/classes/langchain_community_embeddings_hf_transformers.HuggingFaceTransformersEmbeddings.html) from `@langchain/community/embeddings/hf_transformers`
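In a browser app, the inference code above can live in a web worker so the UI thread stays responsive. A minimal sketch, assuming a bundler that supports module workers (the file name `embeddings.worker.ts` is hypothetical):

```typescript
// embeddings.worker.ts — runs embedding inference off the main thread (browser sketch).
import { HuggingFaceTransformersEmbeddings } from "@langchain/community/embeddings/hf_transformers";

const model = new HuggingFaceTransformersEmbeddings({
  model: "Xenova/all-MiniLM-L6-v2",
});

// Receive a text from the main thread, reply with its embedding vector.
self.onmessage = async (event: MessageEvent<string>) => {
  const embedding = await model.embedQuery(event.data);
  self.postMessage(embedding);
};
```

On the main thread you would create the worker with `new Worker(new URL("./embeddings.worker.ts", import.meta.url), { type: "module" })`, send texts with `postMessage`, and receive embeddings via `onmessage`. Exact worker setup varies by bundler; see the Transformers.js tutorial linked above.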
https://js.langchain.com/v0.1/docs/integrations/text_embedding/voyageai/
Voyage AI
=========
The `VoyageEmbeddings` class uses the Voyage AI REST API to generate embeddings for a given text.
```typescript
import { VoyageEmbeddings } from "langchain/embeddings/voyage";

const embeddings = new VoyageEmbeddings({
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.VOYAGEAI_API_KEY
});
```
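When embedding a large list of documents, you may want to split it into smaller groups yourself — for example, to report progress or throttle requests — before passing each group to `embedDocuments`. A generic chunking helper (a utility sketch, not part of the integration):

```typescript
// Split an array into consecutive chunks of at most `size` items —
// a generic utility for pre-batching texts, not a LangChain API.
function chunk<T>(items: T[], size: number): T[][] {
  if (size <= 0) throw new Error("chunk size must be positive");
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

console.log(chunk(["a", "b", "c", "d", "e"], 2)); // [["a","b"],["c","d"],["e"]]
```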
https://js.langchain.com/v0.1/docs/integrations/text_embedding/togetherai/
Together AI
===========
The `TogetherAIEmbeddings` class uses the Together AI API to generate embeddings for a given text.
Setup
-----
In order to use the Together API you'll need an API key. You can sign up for a Together account and create an API key [here](https://api.together.xyz/).
You'll first need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:
**Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
Usage
-----
```typescript
import { TogetherAIEmbeddings } from "@langchain/community/embeddings/togetherai";

const embeddings = new TogetherAIEmbeddings({
  apiKey: process.env.TOGETHER_AI_API_KEY, // Default value
  model: "togethercomputer/m2-bert-80M-8k-retrieval", // Default value
});

const res = await embeddings.embedQuery(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```
#### API Reference:
* [TogetherAIEmbeddings](https://api.js.langchain.com/classes/langchain_community_embeddings_togetherai.TogetherAIEmbeddings.html) from `@langchain/community/embeddings/togetherai`
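Once you have a query embedding (from `embedQuery`) and document embeddings (from `embedDocuments`), ranking documents against the query is a small pure function. A generic sketch — not a Together or LangChain API:

```typescript
// Rank document vectors by cosine similarity to a query vector and
// return the indices of the top-k matches. Generic utility code.
function topK(query: number[], docs: number[][], k: number): number[] {
  const dot = (a: number[], b: number[]) => a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (a: number[]) => Math.sqrt(dot(a, a));
  const qn = norm(query);
  return docs
    .map((d, i) => ({ i, score: dot(query, d) / (qn * norm(d)) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.i);
}

console.log(topK([1, 0], [[0, 1], [1, 0], [0.7, 0.7]], 2)); // [1, 2]
```

For larger collections you would normally hand this job to a vector store (see the Vector stores section) rather than ranking in memory.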
https://js.langchain.com/v0.1/docs/integrations/vectorstores/usearch/
* [Tools](/v0.1/docs/integrations/tools/)
* [Agents and toolkits](/v0.1/docs/integrations/toolkits/)
* [Chat Memory](/v0.1/docs/integrations/chat_memory/)
* [Stores](/v0.1/docs/integrations/stores/)
* [](/v0.1/)
* [Components](/v0.1/docs/integrations/components/)
* [Vector stores](/v0.1/docs/integrations/vectorstores/)
* USearch
On this page
USearch
=======
Compatibility
Only available on Node.js.
[USearch](https://github.com/unum-cloud/usearch) is a library for efficient similarity search and clustering of dense vectors.
Setup
-----
Install the [usearch](https://github.com/unum-cloud/usearch/tree/main/javascript) package, which is a Node.js binding for [USearch](https://github.com/unum-cloud/usearch).
```bash
npm install -S usearch
# or
yarn add usearch
# or
pnpm add usearch
```
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
Usage
-----
### Create a new index from texts
```typescript
import { USearch } from "@langchain/community/vectorstores/usearch";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await USearch.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
```
#### API Reference:
* [USearch](https://api.js.langchain.com/classes/langchain_community_vectorstores_usearch.USearch.html) from `@langchain/community/vectorstores/usearch`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Create a new index from a loader
```typescript
import { USearch } from "@langchain/community/vectorstores/usearch";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await USearch.fromDocuments(docs, new OpenAIEmbeddings());

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
```
#### API Reference:
* [USearch](https://api.js.langchain.com/classes/langchain_community_vectorstores_usearch.USearch.html) from `@langchain/community/vectorstores/usearch`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/integrations/vectorstores/vectara/
Vectara
=======
Vectara is a platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy.
You can use Vectara as a vector store with LangChain.js.
👉 Embeddings Included
----------------------
Vectara uses its own embeddings under the hood, so you don't have to provide any yourself or call another service to obtain embeddings.
This also means that if you provide your own embeddings, they'll be a no-op.
```typescript
const store = await VectaraStore.fromTexts(
  ["hello world", "hi there"],
  [{ foo: "bar" }, { foo: "baz" }],
  // This won't have an effect. Provide a FakeEmbeddings instance instead for clarity.
  new OpenAIEmbeddings(),
  args
);
```
Setup
-----
You'll need to:
* Create a [free Vectara account](https://vectara.com/integrations/langchain).
* Create a [corpus](https://docs.vectara.com/docs/console-ui/creating-a-corpus) to store your data.
* Create an [API key](https://docs.vectara.com/docs/common-use-cases/app-authn-authz/api-keys) with QueryService and IndexService access so you can access this corpus.
Configure your `.env` file or provide args to connect LangChain to your Vectara corpus:
```
VECTARA_CUSTOMER_ID=your_customer_id
VECTARA_CORPUS_ID=your_corpus_id
VECTARA_API_KEY=your-vectara-api-key
```
Note that you can provide multiple corpus IDs separated by commas for querying multiple corpora at once. For example: `VECTARA_CORPUS_ID=3,8,9,43`. For indexing multiple corpora, you'll need to create a separate VectaraStore instance for each corpus.
Usage
-----
```typescript
import { VectaraStore } from "@langchain/community/vectorstores/vectara";
import { VectaraSummaryRetriever } from "@langchain/community/retrievers/vectara_summary";
import { Document } from "@langchain/core/documents";

// Create the Vectara store.
const store = new VectaraStore({
  customerId: Number(process.env.VECTARA_CUSTOMER_ID),
  corpusId: Number(process.env.VECTARA_CORPUS_ID),
  apiKey: String(process.env.VECTARA_API_KEY),
  verbose: true,
});

// Add two documents with some metadata.
const doc_ids = await store.addDocuments([
  new Document({
    pageContent: "Do I dare to eat a peach?",
    metadata: {
      foo: "baz",
    },
  }),
  new Document({
    pageContent: "In the room the women come and go talking of Michelangelo",
    metadata: {
      foo: "bar",
    },
  }),
]);

// Perform a similarity search.
const resultsWithScore = await store.similaritySearchWithScore(
  "What were the women talking about?",
  1,
  {
    lambda: 0.025,
  }
);

// Print the results.
console.log(JSON.stringify(resultsWithScore, null, 2));
/*
[
  [
    {
      "pageContent": "In the room the women come and go talking of Michelangelo",
      "metadata": {
        "lang": "eng",
        "offset": "0",
        "len": "57",
        "foo": "bar"
      }
    },
    0.4678752
  ]
]
*/

const retriever = new VectaraSummaryRetriever({ vectara: store, topK: 3 });
const documents = await retriever.invoke("What were the women talking about?");
console.log(JSON.stringify(documents, null, 2));
/*
[
  {
    "pageContent": "<b>In the room the women come and go talking of Michelangelo</b>",
    "metadata": { "lang": "eng", "offset": "0", "len": "57", "foo": "bar" }
  },
  {
    "pageContent": "<b>In the room the women come and go talking of Michelangelo</b>",
    "metadata": { "lang": "eng", "offset": "0", "len": "57", "foo": "bar" }
  },
  {
    "pageContent": "<b>In the room the women come and go talking of Michelangelo</b>",
    "metadata": { "lang": "eng", "offset": "0", "len": "57", "foo": "bar" }
  }
]
*/

// Delete the documents.
await store.deleteDocuments(doc_ids);
```
#### API Reference:
* [VectaraStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_vectara.VectaraStore.html) from `@langchain/community/vectorstores/vectara`
* [VectaraSummaryRetriever](https://api.js.langchain.com/classes/langchain_community_retrievers_vectara_summary.VectaraSummaryRetriever.html) from `@langchain/community/retrievers/vectara_summary`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
Note that `lambda` is a parameter related to Vectara's hybrid search capability, providing a tradeoff between neural search and boolean/exact match as described [here](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching). We recommend 0.025 as a default value, while advanced users can customize it if needed.
APIs
----
Vectara's LangChain vector store consumes Vectara's core APIs:
* [Indexing API](https://docs.vectara.com/docs/indexing-apis/indexing) for storing documents in a Vectara corpus.
* [Search API](https://docs.vectara.com/docs/search-apis/search) for querying this data. This API supports hybrid search.
https://js.langchain.com/v0.1/docs/integrations/text_embedding/zhipuai/
ZhipuAI
=======
The `ZhipuAIEmbeddings` class uses the ZhipuAI API to generate embeddings for a given text.
Setup
-----
You'll need to sign up for a ZhipuAI API key at [https://open.bigmodel.cn](https://open.bigmodel.cn) and set it as an environment variable named `ZHIPUAI_API_KEY`.
Then, you'll need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community jsonwebtoken
# or
yarn add @langchain/community jsonwebtoken
# or
pnpm add @langchain/community jsonwebtoken
```
Usage
-----
```typescript
import { ZhipuAIEmbeddings } from "@langchain/community/embeddings/zhipuai";

const model = new ZhipuAIEmbeddings({});
const res = await model.embedQuery(
  "What would be a good company name a company that makes colorful socks?"
);
console.log({ res });
```
#### API Reference:
* [ZhipuAIEmbeddings](https://api.js.langchain.com/classes/langchain_community_embeddings_zhipuai.ZhipuAIEmbeddings.html) from `@langchain/community/embeddings/zhipuai`
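The vectors returned by `embedQuery` can be compared with cosine similarity to measure semantic closeness. The helper below is plain vector math, not part of the ZhipuAI integration:

```typescript
// Cosine similarity between two equal-length embedding vectors.
// Returns 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error("Vectors must have the same length");
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Ranking stored embeddings by this score against a query embedding is essentially what a vector store's similarity search does when configured with the cosine metric.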
* * *
#### Help us out by providing feedback on this documentation page:
[
Previous
Voyage AI
](/v0.1/docs/integrations/text_embedding/voyageai/)[
Next
Vector stores
](/v0.1/docs/integrations/vectorstores/)
* [Setup](#setup)
* [Usage](#usage)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/integrations/vectorstores/astradb/
Astra DB
========
Compatibility
Only available on Node.js.
DataStax [Astra DB](https://astra.datastax.com/register) is a serverless vector-capable database built on [Apache Cassandra](https://cassandra.apache.org/_/index.html) and made conveniently available through an easy-to-use JSON API.
Setup
-----
1. Create an [Astra DB account](https://astra.datastax.com/register).
2. Create a [vector enabled database](https://astra.datastax.com/createDatabase).
3. Grab your `API Endpoint` and `Token` from the Database Details.
4. Set up the following env vars:
```bash
export ASTRA_DB_APPLICATION_TOKEN=YOUR_ASTRA_DB_APPLICATION_TOKEN_HERE
export ASTRA_DB_ENDPOINT=YOUR_ASTRA_DB_ENDPOINT_HERE
export ASTRA_DB_COLLECTION=YOUR_ASTRA_DB_COLLECTION_HERE
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY_HERE
```
Here, `ASTRA_DB_COLLECTION` is the desired name of your collection.
5. Install the Astra TS client and the LangChain community package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai @datastax/astra-db-ts @langchain/community
# or
yarn add @langchain/openai @datastax/astra-db-ts @langchain/community
# or
pnpm add @langchain/openai @datastax/astra-db-ts @langchain/community
```
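Before constructing the store, it can help to fail fast when one of the environment variables above is missing rather than passing `undefined` into the config. A minimal sketch (the `requireEnv` helper is hypothetical, not part of LangChain):

```typescript
// Hypothetical helper: return a required variable's value, or throw a
// descriptive error if it is missing or empty.
function requireEnv(
  env: Record<string, string | undefined>,
  key: string
): string {
  const value = env[key];
  if (!value) {
    throw new Error(`Missing required environment variable: ${key}`);
  }
  return value;
}

// Usage sketch: const token = requireEnv(process.env, "ASTRA_DB_APPLICATION_TOKEN");
```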
Indexing docs
-------------
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import {
  AstraDBVectorStore,
  AstraLibArgs,
} from "@langchain/community/vectorstores/astradb";

const astraConfig: AstraLibArgs = {
  token: process.env.ASTRA_DB_APPLICATION_TOKEN as string,
  endpoint: process.env.ASTRA_DB_ENDPOINT as string,
  collection: process.env.ASTRA_DB_COLLECTION ?? "langchain_test",
  collectionOptions: {
    vector: {
      dimension: 1536,
      metric: "cosine",
    },
  },
};

const vectorStore = await AstraDBVectorStore.fromTexts(
  [
    "AstraDB is built on Apache Cassandra",
    "AstraDB is a NoSQL DB",
    "AstraDB supports vector search",
  ],
  [{ foo: "foo" }, { foo: "bar" }, { foo: "baz" }],
  new OpenAIEmbeddings(),
  astraConfig
);

// Querying docs:
const results = await vectorStore.similaritySearch("Cassandra", 1);

// or filtered query:
const filteredQueryResults = await vectorStore.similaritySearch("A", 1, {
  foo: "bar",
});
```
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [AstraDBVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_astradb.AstraDBVectorStore.html) from `@langchain/community/vectorstores/astradb`
* [AstraLibArgs](https://api.js.langchain.com/interfaces/langchain_community_vectorstores_astradb.AstraLibArgs.html) from `@langchain/community/vectorstores/astradb`
Vector Types[](#vector-types "Direct link to Vector Types")
------------------------------------------------------------
Astra DB supports `cosine` (the default), `dot_product`, and `euclidean` similarity search; this is defined when the vector store is first created as part of the `CreateCollectionOptions`:
vector: {
  dimension: number;
  metric?: "cosine" | "euclidean" | "dot_product";
};
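For intuition, the three metrics can be sketched with plain functions over number arrays. These are illustrative reimplementations only, not Astra DB's internals (the database computes similarity server-side):

```typescript
// Illustrative reimplementations of the three Astra DB similarity metrics.
// Astra DB computes these server-side; this is only to build intuition.
const dotProduct = (a: number[], b: number[]): number =>
  a.reduce((sum, ai, i) => sum + ai * b[i], 0);

const euclidean = (a: number[], b: number[]): number =>
  Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));

const cosine = (a: number[], b: number[]): number =>
  dotProduct(a, b) /
  (Math.sqrt(dotProduct(a, a)) * Math.sqrt(dotProduct(b, b)));

// cosine ignores vector magnitude while euclidean does not, which can
// matter when choosing a metric for your embeddings.
console.log(cosine([1, 0, 1], [1, 1, 0])); // 0.5
console.log(euclidean([1, 0, 1], [1, 1, 0]));
```

Note that `dot_product` gives the same ranking as `cosine` when the vectors are unit-normalized, which is why it is sometimes preferred for normalized embeddings.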
https://js.langchain.com/v0.1/docs/integrations/vectorstores/voy/
Voy
===
[Voy](https://github.com/tantaraio/voy) is a WASM vector similarity search engine written in Rust. It's supported in non-Node environments like browsers. You can use Voy as a vector store with LangChain.js.
### Install Voy[](#install-voy "Direct link to Install Voy")
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/openai voy-search @langchain/community
yarn add @langchain/openai voy-search @langchain/community
pnpm add @langchain/openai voy-search @langchain/community
Usage[](#usage "Direct link to Usage")
---------------------------------------
import { VoyVectorStore } from "@langchain/community/vectorstores/voy";
import { Voy as VoyClient } from "voy-search";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

// Create Voy client using the library.
const voyClient = new VoyClient();
// Create embeddings
const embeddings = new OpenAIEmbeddings();
// Create the Voy store.
const store = new VoyVectorStore(voyClient, embeddings);

// Add two documents with some metadata.
await store.addDocuments([
  new Document({
    pageContent: "How has life been treating you?",
    metadata: {
      foo: "Mike",
    },
  }),
  new Document({
    pageContent: "And I took it personally...",
    metadata: {
      foo: "Testing",
    },
  }),
]);

// Embed the query with the same embeddings instance used by the store.
const query = await embeddings.embedQuery("And I took it personally");

// Perform a similarity search.
const resultsWithScore = await store.similaritySearchVectorWithScore(query, 1);

// Print the results.
console.log(JSON.stringify(resultsWithScore, null, 2));
/*
  [
    [
      {
        "pageContent": "And I took it personally...",
        "metadata": {
          "foo": "Testing"
        }
      },
      0
    ]
  ]
*/
#### API Reference:
* [VoyVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_voy.VoyVectorStore.html) from `@langchain/community/vectorstores/voy`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
https://js.langchain.com/v0.1/docs/integrations/vectorstores/vercel_postgres/
Vercel Postgres
===============
LangChain.js supports using the [`@vercel/postgres`](https://www.npmjs.com/package/@vercel/postgres) package to use generic Postgres databases as vector stores, provided they support the [`pgvector`](https://github.com/pgvector/pgvector) Postgres extension.
This integration is particularly useful in web environments such as Vercel Edge Functions.
Setup[](#setup "Direct link to Setup")
---------------------------------------
To work with Vercel Postgres, you need to install the `@vercel/postgres` package:
npm install @vercel/postgres
yarn add @vercel/postgres
pnpm add @vercel/postgres
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
This integration automatically connects using the connection string set under `process.env.POSTGRES_URL`. You can also pass a connection string manually like this:
const vectorstore = await VercelPostgres.initialize(new OpenAIEmbeddings(), {
  postgresConnectionOptions: {
    connectionString:
      "postgres://<username>:<password>@<hostname>:<port>/<dbname>",
  },
});
### Connecting to Vercel Postgres[](#connecting-to-vercel-postgres "Direct link to Connecting to Vercel Postgres")
A simple way to get started is to create a serverless [Vercel Postgres instance](https://vercel.com/docs/storage/vercel-postgres/quickstart). If you're deploying to a Vercel project with an associated Vercel Postgres instance, the required `POSTGRES_URL` environment variable will already be populated in hosted environments.
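The resolution order described above (an explicitly passed connection string first, then the `POSTGRES_URL` environment variable) can be sketched as a hypothetical helper. This is not part of the library, only an illustration of the behavior:

```typescript
// Hypothetical helper illustrating the resolution order described above:
// prefer an explicitly passed connection string, fall back to POSTGRES_URL.
function resolveConnectionString(explicit?: string): string {
  const connectionString = explicit ?? process.env.POSTGRES_URL;
  if (!connectionString) {
    throw new Error(
      "No connection string: set POSTGRES_URL or pass one explicitly."
    );
  }
  return connectionString;
}
```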
### Connecting to other databases[](#connecting-to-other-databases "Direct link to Connecting to other databases")
If you prefer to host your own Postgres instance, you can use a similar flow to LangChain's [PGVector](/v0.1/docs/integrations/vectorstores/pgvector/) vectorstore integration and set the connection string either as an environment variable or as shown above.
Usage[](#usage "Direct link to Usage")
---------------------------------------
import { CohereEmbeddings } from "@langchain/cohere";
import { VercelPostgres } from "@langchain/community/vectorstores/vercel_postgres";

// Config is only required if you want to override default values.
const config = {
  // tableName: "testvercelvectorstorelangchain",
  // postgresConnectionOptions: {
  //   connectionString: "postgres://<username>:<password>@<hostname>:<port>/<dbname>",
  // },
  // columns: {
  //   idColumnName: "id",
  //   vectorColumnName: "vector",
  //   contentColumnName: "content",
  //   metadataColumnName: "metadata",
  // },
};

const vercelPostgresStore = await VercelPostgres.initialize(
  new CohereEmbeddings(),
  config
);

const docHello = {
  pageContent: "hello",
  metadata: { topic: "nonsense" },
};
const docHi = { pageContent: "hi", metadata: { topic: "nonsense" } };
const docMitochondria = {
  pageContent: "Mitochondria is the powerhouse of the cell",
  metadata: { topic: "science" },
};

const ids = await vercelPostgresStore.addDocuments([
  docHello,
  docHi,
  docMitochondria,
]);

const results = await vercelPostgresStore.similaritySearch("hello", 2);
console.log(results);
/*
  [
    Document { pageContent: 'hello', metadata: { topic: 'nonsense' } },
    Document { pageContent: 'hi', metadata: { topic: 'nonsense' } }
  ]
*/

// Metadata filtering
const results2 = await vercelPostgresStore.similaritySearch(
  "Irrelevant query, metadata filtering",
  2,
  {
    topic: "science",
  }
);
console.log(results2);
/*
  [
    Document {
      pageContent: 'Mitochondria is the powerhouse of the cell',
      metadata: { topic: 'science' }
    }
  ]
*/

// Metadata filtering with IN-filters works as well
const results3 = await vercelPostgresStore.similaritySearch(
  "Irrelevant query, metadata filtering",
  3,
  {
    topic: { in: ["science", "nonsense"] },
  }
);
console.log(results3);
/*
  [
    Document { pageContent: 'hello', metadata: { topic: 'nonsense' } },
    Document { pageContent: 'hi', metadata: { topic: 'nonsense' } },
    Document {
      pageContent: 'Mitochondria is the powerhouse of the cell',
      metadata: { topic: 'science' }
    }
  ]
*/

// Upserting is supported as well
await vercelPostgresStore.addDocuments(
  [
    {
      pageContent: "ATP is the powerhouse of the cell",
      metadata: { topic: "science" },
    },
  ],
  { ids: [ids[2]] }
);

const results4 = await vercelPostgresStore.similaritySearch(
  "What is the powerhouse of the cell?",
  1
);
console.log(results4);
/*
  [
    Document {
      pageContent: 'ATP is the powerhouse of the cell',
      metadata: { topic: 'science' }
    }
  ]
*/

await vercelPostgresStore.delete({ ids: [ids[2]] });

const results5 = await vercelPostgresStore.similaritySearch(
  "No more metadata",
  2,
  {
    topic: "science",
  }
);
console.log(results5);
/*
  []
*/

// Remember to call .end() to close the connection!
await vercelPostgresStore.end();
#### API Reference:
* [CohereEmbeddings](https://api.js.langchain.com/classes/langchain_cohere.CohereEmbeddings.html) from `@langchain/cohere`
* [VercelPostgres](https://api.js.langchain.com/classes/langchain_community_vectorstores_vercel_postgres.VercelPostgres.html) from `@langchain/community/vectorstores/vercel_postgres`
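The filter shapes used in the usage example (plain equality and `{ in: [...] }`) can be summarized with a small sketch of the matching semantics. This is an illustration of the observable behavior only, not the library's implementation, which applies the filter in SQL inside the database:

```typescript
type FilterValue =
  | string
  | number
  | boolean
  | { in: Array<string | number | boolean> };

// Sketch of the filter semantics: every key must match, either by strict
// equality or, for `{ in: [...] }`, by membership in the listed values.
function matchesFilter(
  metadata: Record<string, unknown>,
  filter: Record<string, FilterValue>
): boolean {
  return Object.entries(filter).every(([key, condition]) => {
    if (typeof condition === "object") {
      return condition.in.includes(metadata[key] as string | number | boolean);
    }
    return metadata[key] === condition;
  });
}

console.log(matchesFilter({ topic: "science" }, { topic: "science" })); // true
console.log(
  matchesFilter(
    { topic: "nonsense" },
    { topic: { in: ["science", "nonsense"] } }
  )
); // true
```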
https://js.langchain.com/v0.1/docs/integrations/vectorstores/xata/
Xata
====
[Xata](https://xata.io) is a serverless data platform, based on PostgreSQL. It provides a type-safe TypeScript/JavaScript SDK for interacting with your database, and a UI for managing your data.
Xata has a native vector type, which can be added to any table and supports similarity search. LangChain inserts vectors directly into Xata and queries it for the nearest neighbors of a given vector, so you can use all of the LangChain Embeddings integrations with Xata.
Setup[](#setup "Direct link to Setup")
---------------------------------------
### Install the Xata CLI[](#install-the-xata-cli "Direct link to Install the Xata CLI")
npm install @xata.io/cli -g
### Create a database to be used as a vector store[](#create-a-database-to-be-used-as-a-vector-store "Direct link to Create a database to be used as a vector store")
In the [Xata UI](https://app.xata.io), create a new database. You can name it whatever you want, but for this example we'll use `langchain`. Create a table; again, you can name it anything, but we will use `vectors`. Add the following columns via the UI:
* `content` of type "Text". This is used to store the `Document.pageContent` values.
* `embedding` of type "Vector". Use the dimension used by the model you plan to use (1536 for OpenAI).
* any other columns you want to use as metadata. They are populated from the `Document.metadata` object. For example, if in the `Document.metadata` object you have a `title` property, you can create a `title` column in the table and it will be populated.
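To make the mapping concrete, here is a sketch of how one document could be flattened into a row with those columns. This is an illustration of the mapping described above, not `XataVectorSearch` internals (the store handles this for you):

```typescript
// Illustration only: how a LangChain Document plausibly maps onto the
// columns described above. XataVectorSearch handles this internally.
type Row = { content: string; embedding: number[]; [column: string]: unknown };

function documentToRow(
  doc: { pageContent: string; metadata: Record<string, unknown> },
  embedding: number[]
): Row {
  // `content` holds pageContent, `embedding` holds the vector, and each
  // metadata property lands in the column of the same name.
  return { ...doc.metadata, content: doc.pageContent, embedding };
}

const row = documentToRow(
  {
    pageContent: "Xata includes similarity search",
    metadata: { title: "Xata" },
  },
  [0.1, 0.2, 0.3]
);
console.log(row.title); // "Xata"
```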
### Initialize the project[](#initialize-the-project "Direct link to Initialize the project")
In your project, run:
xata init
and then choose the database you created above. This will also generate a `xata.ts` or `xata.js` file that defines the client you can use to interact with the database. See the [Xata getting started docs](https://xata.io/docs/getting-started/installation) for more details on using the Xata JavaScript/TypeScript SDK.
Usage[](#usage "Direct link to Usage")
---------------------------------------
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
### Example: Q&A chatbot using OpenAI and Xata as vector store[](#example-qa-chatbot-using-openai-and-xata-as-vector-store "Direct link to Example: Q&A chatbot using OpenAI and Xata as vector store")
This example uses the `VectorDBQAChain` to search the documents stored in Xata and then pass them as context to the OpenAI model, in order to answer the question asked by the user.
import { XataVectorSearch } from "@langchain/community/vectorstores/xata";
import { OpenAIEmbeddings, OpenAI } from "@langchain/openai";
import { BaseClient } from "@xata.io/client";
import { VectorDBQAChain } from "langchain/chains";
import { Document } from "@langchain/core/documents";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/data_connection/vectorstores/integrations/xata

// if you use the generated client, you don't need this function.
// Just import getXataClient from the generated xata.ts instead.
const getXataClient = () => {
  if (!process.env.XATA_API_KEY) {
    throw new Error("XATA_API_KEY not set");
  }
  if (!process.env.XATA_DB_URL) {
    throw new Error("XATA_DB_URL not set");
  }
  const xata = new BaseClient({
    databaseURL: process.env.XATA_DB_URL,
    apiKey: process.env.XATA_API_KEY,
    branch: process.env.XATA_BRANCH || "main",
  });
  return xata;
};

export async function run() {
  const client = getXataClient();
  const table = "vectors";
  const embeddings = new OpenAIEmbeddings();
  const store = new XataVectorSearch(embeddings, { client, table });

  // Add documents
  const docs = [
    new Document({
      pageContent: "Xata is a Serverless Data platform based on PostgreSQL",
    }),
    new Document({
      pageContent:
        "Xata offers a built-in vector type that can be used to store and query vectors",
    }),
    new Document({
      pageContent: "Xata includes similarity search",
    }),
  ];
  const ids = await store.addDocuments(docs);

  // eslint-disable-next-line no-promise-executor-return
  await new Promise((r) => setTimeout(r, 2000));

  const model = new OpenAI();
  const chain = VectorDBQAChain.fromLLM(model, store, {
    k: 1,
    returnSourceDocuments: true,
  });
  const response = await chain.invoke({ query: "What is Xata?" });
  console.log(JSON.stringify(response, null, 2));

  await store.delete({ ids });
}
#### API Reference:
* [XataVectorSearch](https://api.js.langchain.com/classes/langchain_community_vectorstores_xata.XataVectorSearch.html) from `@langchain/community/vectorstores/xata`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [VectorDBQAChain](https://api.js.langchain.com/classes/langchain_chains.VectorDBQAChain.html) from `langchain/chains`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
### Example: Similarity search with a metadata filter[](#example-similarity-search-with-a-metadata-filter "Direct link to Example: Similarity search with a metadata filter")
This example shows how to implement semantic search using LangChain.js and Xata. Before running it, make sure to add an `author` column of type String to the `vectors` table in Xata.
import { XataVectorSearch } from "@langchain/community/vectorstores/xata";
import { OpenAIEmbeddings } from "@langchain/openai";
import { BaseClient } from "@xata.io/client";
import { Document } from "@langchain/core/documents";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/data_connection/vectorstores/integrations/xata
// Also, add a column named "author" to the "vectors" table.

// If you use the generated client, you don't need this function.
// Just import getXataClient from the generated xata.ts instead.
const getXataClient = () => {
  if (!process.env.XATA_API_KEY) {
    throw new Error("XATA_API_KEY not set");
  }
  if (!process.env.XATA_DB_URL) {
    throw new Error("XATA_DB_URL not set");
  }
  const xata = new BaseClient({
    databaseURL: process.env.XATA_DB_URL,
    apiKey: process.env.XATA_API_KEY,
    branch: process.env.XATA_BRANCH || "main",
  });
  return xata;
};

export async function run() {
  const client = getXataClient();
  const table = "vectors";
  const embeddings = new OpenAIEmbeddings();
  const store = new XataVectorSearch(embeddings, { client, table });

  // Add documents
  const docs = [
    new Document({
      pageContent: "Xata works great with Langchain.js",
      metadata: { author: "Xata" },
    }),
    new Document({
      pageContent: "Xata works great with Langchain",
      metadata: { author: "Langchain" },
    }),
    new Document({
      pageContent: "Xata includes similarity search",
      metadata: { author: "Xata" },
    }),
  ];
  const ids = await store.addDocuments(docs);

  // eslint-disable-next-line no-promise-executor-return
  await new Promise((r) => setTimeout(r, 2000));

  // author is applied as a pre-filter to the similarity search
  const results = await store.similaritySearchWithScore("xata works great", 6, {
    author: "Langchain",
  });
  console.log(JSON.stringify(results, null, 2));

  await store.delete({ ids });
}
#### API Reference:
* [XataVectorSearch](https://api.js.langchain.com/classes/langchain_community_vectorstores_xata.XataVectorSearch.html) from `@langchain/community/vectorstores/xata`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc.

https://js.langchain.com/v0.1/docs/integrations/vectorstores/azure_cosmosdb/
Azure Cosmos DB
===============
> [Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/) makes it easy to create a database with full native MongoDB support. You can apply your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB vCore account’s connection string. Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications with your data that’s stored in Azure Cosmos DB.
Azure Cosmos DB for MongoDB vCore provides developers with a fully managed MongoDB-compatible database service for building modern applications with a familiar architecture.
Learn how to leverage the vector search capabilities of Azure Cosmos DB for MongoDB vCore from [this page](https://learn.microsoft.com/azure/cosmos-db/mongodb/vcore/vector-search). If you don't have an Azure account, you can [create a free account](https://azure.microsoft.com/free/) to get started.
Setup[](#setup "Direct link to Setup")
---------------------------------------
You'll first need to install the `mongodb` SDK and the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai mongodb @langchain/community
yarn add @langchain/openai mongodb @langchain/community
pnpm add @langchain/openai mongodb @langchain/community
You'll also need to have an Azure Cosmos DB for MongoDB vCore instance running. You can deploy a free tier instance in the Azure Portal at no cost by following [this guide](https://learn.microsoft.com/azure/cosmos-db/mongodb/vcore/quickstart-portal).
Once you have your instance running, make sure you have the connection string and the admin key. You can find them in the Azure Portal, under the "Connection strings" section of your instance. Then you need to set the following environment variables:
# Azure Cosmos DB for MongoDB vCore connection string
AZURE_COSMOSDB_CONNECTION_STRING=

# If you're using the Azure OpenAI API, you'll need to set these variables
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_API_INSTANCE_NAME=
AZURE_OPENAI_API_DEPLOYMENT_NAME=
AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=
AZURE_OPENAI_API_VERSION=

# Or you can use the OpenAI API directly
OPENAI_API_KEY=
Example[](#example "Direct link to Example")
---------------------------------------------
Below is an example that indexes documents from a file in Azure Cosmos DB for MongoDB vCore, runs a vector search query, and finally uses a chain to answer a question in natural language based on the retrieved documents.
import {
  AzureCosmosDBVectorStore,
  AzureCosmosDBSimilarityType,
} from "@langchain/community/vectorstores/azure_cosmosdb";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Load documents from file
const loader = new TextLoader("./state_of_the_union.txt");
const rawDocuments = await loader.load();
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 0,
});
const documents = await splitter.splitDocuments(rawDocuments);

// Create Azure Cosmos DB vector store
const store = await AzureCosmosDBVectorStore.fromDocuments(
  documents,
  new OpenAIEmbeddings(),
  {
    databaseName: "langchain",
    collectionName: "documents",
    indexOptions: {
      numLists: 100,
      dimensions: 1536,
      similarity: AzureCosmosDBSimilarityType.COS,
    },
  }
);

// Performs a similarity search
const resultDocuments = await store.similaritySearch(
  "What did the president say about Ketanji Brown Jackson?"
);
console.log("Similarity search results:");
console.log(resultDocuments[0].pageContent);
/*
  Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the
  John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act
  so Americans can know who is funding our elections.

  Tonight, I’d like to honor someone who has dedicated his life to serve this
  country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and
  retiring Justice of the United States Supreme Court. Justice Breyer, thank
  you for your service.

  One of the most serious constitutional responsibilities a President has is
  nominating someone to serve on the United States Supreme Court.

  And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge
  Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue
  Justice Breyer’s legacy of excellence.
*/

// Use the store as part of a chain
const model = new ChatOpenAI({ model: "gpt-3.5-turbo-1106" });
const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  ["human", "{input}"],
]);
const combineDocsChain = await createStuffDocumentsChain({
  llm: model,
  prompt: questionAnsweringPrompt,
});
const chain = await createRetrievalChain({
  retriever: store.asRetriever(),
  combineDocsChain,
});
const res = await chain.invoke({
  input: "What is the president's top priority regarding prices?",
});
console.log("Chain response:");
console.log(res.answer);
/*
  The president's top priority is getting prices under control.
*/

// Clean up
await store.delete();
await store.close();
#### API Reference:
* [AzureCosmosDBVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_azure_cosmosdb.AzureCosmosDBVectorStore.html) from `@langchain/community/vectorstores/azure_cosmosdb`
* [AzureCosmosDBSimilarityType](https://api.js.langchain.com/types/langchain_community_vectorstores_azure_cosmosdb.AzureCosmosDBSimilarityType.html) from `@langchain/community/vectorstores/azure_cosmosdb`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [createStuffDocumentsChain](https://api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents`
* [createRetrievalChain](https://api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
https://js.langchain.com/v0.1/docs/integrations/vectorstores/cassandra/
Cassandra
=========
Compatibility
Only available on Node.js.
[Apache Cassandra®](https://cassandra.apache.org/_/index.html) is a NoSQL, row-oriented, highly scalable and highly available database.
The [latest version](https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-30%3A+Approximate+Nearest+Neighbor(ANN)+Vector+Search+via+Storage-Attached+Indexes) of Apache Cassandra natively supports Vector Similarity Search.
Setup[](#setup "Direct link to Setup")
---------------------------------------
First, install the Cassandra Node.js driver:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install cassandra-driver @langchain/community @langchain/openai
yarn add cassandra-driver @langchain/community @langchain/openai
pnpm add cassandra-driver @langchain/community @langchain/openai
Depending on your database provider, the specifics of how to connect to the database will vary. We will create a document `configConnection` which will be used as part of the vector store configuration.
### Apache Cassandra®[](#apache-cassandra "Direct link to Apache Cassandra®")
Vector search is supported in [Apache Cassandra® 5.0](https://cassandra.apache.org/_/Apache-Cassandra-5.0-Moving-Toward-an-AI-Driven-Future.html) and above. You can use a standard connection document, for example:
const configConnection = {
  contactPoints: ["h1", "h2"],
  localDataCenter: "datacenter1",
  credentials: {
    username: <...> as string,
    password: <...> as string,
  },
};
### Astra DB[](#astra-db "Direct link to Astra DB")
Astra DB is a cloud-native Cassandra-as-a-Service platform.
1. Create an [Astra DB account](https://astra.datastax.com/register).
2. Create a [vector enabled database](https://astra.datastax.com/createDatabase).
3. Create a [token](https://docs.datastax.com/en/astra/docs/manage-application-tokens.html) for your database.
const configConnection = {
  serviceProviderArgs: {
    astra: {
      token: <...> as string,
      endpoint: <...> as string,
    },
  },
};
Instead of `endpoint:`, you may provide the property `datacenterID:` and, optionally, `regionName:`.
Indexing docs[](#indexing-docs "Direct link to Indexing docs")
---------------------------------------------------------------
import { CassandraStore } from "@langchain/community/vectorstores/cassandra";
import { OpenAIEmbeddings } from "@langchain/openai";

// The configConnection document is defined above
const config = {
  ...configConnection,
  keyspace: "test",
  dimensions: 1536,
  table: "test",
  indices: [{ name: "name", value: "(name)" }],
  primaryKey: {
    name: "id",
    type: "int",
  },
  metadataColumns: [
    {
      name: "name",
      type: "text",
    },
  ],
};

const vectorStore = await CassandraStore.fromTexts(
  ["I am blue", "Green yellow purple", "Hello there hello"],
  [
    { id: 2, name: "2" },
    { id: 1, name: "1" },
    { id: 3, name: "3" },
  ],
  new OpenAIEmbeddings(),
  config
);
Querying docs[](#querying-docs "Direct link to Querying docs")
---------------------------------------------------------------
const results = await vectorStore.similaritySearch("Green yellow purple", 1);
or filtered query:
const results = await vectorStore.similaritySearch("B", 1, { name: "Bubba" });
Vector Types[](#vector-types "Direct link to Vector Types")
------------------------------------------------------------
Cassandra supports `cosine` (the default), `dot_product`, and `euclidean` similarity search; this is defined when the vector store is first created and specified in the constructor parameter `vectorType`, for example:
  ...,
  vectorType: "dot_product",
  ...
Indices[](#indices "Direct link to Indices")
---------------------------------------------
With Version 5, Cassandra introduced Storage Attached Indexes, or SAIs. These allow `WHERE` filtering without specifying the partition key, and allow for additional operator types such as non-equalities. You can define these with the `indices` parameter, which accepts zero or more dictionaries each containing `name` and `value` entries.
Indices are optional, though required if using filtered queries on non-partition columns.
* The `name` entry is part of the object name; on a table named `test_table` an index with `name: "some_column"` would be `idx_test_table_some_column`.
* The `value` entry is the column on which the index is created, surrounded by `(` and `)`. With the above column `some_column` it would be specified as `value: "(some_column)"`.
* An optional `options` entry is a map passed to the `WITH OPTIONS =` clause of the `CREATE CUSTOM INDEX` statement. The specific entries on this map are index type specific.
indices: [{ name: "some_column", value: "(some_column)" }],
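The naming convention above can be sketched as a simple string template (a sketch of the documented convention, not the library's internal code):

```typescript
// Sketch of the SAI naming convention described above: idx_<table>_<name>.
// Not the library's actual implementation.
function indexName(table: string, name: string): string {
  return `idx_${table}_${name}`;
}

console.log(indexName("test_table", "some_column")); // idx_test_table_some_column
```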
Advanced Filtering[](#advanced-filtering "Direct link to Advanced Filtering")
------------------------------------------------------------------------------
By default, filters are applied with an equality `=`. For fields that have an `indices` entry, you may provide an `operator` string with a value supported by the index; in this case, you specify one or more filters, either as a singleton or in a list (which will be `AND`-ed together). For example:
{ name: "create_datetime", operator: ">", value: some_datetime_variable }
or
[ { userid: userid_variable }, { name: "create_datetime", operator: ">", value: some_date_variable },];
`value` can be a single value or an array. If it is not an array, or there is only one element in `value`, the resulting query will be along the lines of `${name} ${operator} ?` with `value` bound to the `?`.
If there is more than one element in the `value` array, the number of unquoted `?` in `name` is counted and subtracted from the length of `value`; that many `?` placeholders are put on the right side of the operator, and if there is more than one `?` they are wrapped in `(` and `)`, e.g. `(?, ?, ?)`.
This facilitates bind values on the left of the operator, which is useful for some functions; for example, a geo-distance filter:
{ name: "GEO_DISTANCE(coord, ?)", operator: "<", value: [new Float32Array([53.3730617,-6.3000515]), 10000],},
Data Partitioning and Composite Keys[](#data-partitioning-and-composite-keys "Direct link to Data Partitioning and Composite Keys")
------------------------------------------------------------------------------------------------------------------------------------
In some systems, you may wish to partition the data for various reasons, perhaps by user or by session. Data in Cassandra is always partitioned; by default this library will partition by the first primary key field. You may specify multiple columns which comprise the primary (unique) key of a record, and optionally indicate those fields which should be part of the partition key. For example, the vector store could be partitioned by both `userid` and `collectionid`, with additional fields `docid` and `docpart` making an individual entry unique:
  ...,
  primaryKey: [
    { name: "userid", type: "text", partition: true },
    { name: "collectionid", type: "text", partition: true },
    { name: "docid", type: "text" },
    { name: "docpart", type: "int" },
  ],
  ...
When searching, you may include partition keys on the filter without defining `indices` for these columns; you do not need to specify all partition keys, but must specify those in the key first. In the above example, you could specify a filter of `{userid: userid_variable}` and `{userid: userid_variable, collectionid: collectionid_variable}`, but if you wanted to specify a filter of only `{collectionid: collectionid_variable}` you would have to include `collectionid` on the `indices` list.
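The prefix rule above can be sketched as follows (a hypothetical illustration of whether a set of partition-key columns alone is filterable without extra indices; the library enforces the equivalent rule internally):

```typescript
// Hypothetical check of the partition-key prefix rule described above.
function canFilterWithoutIndex(
  partitionKeys: string[],
  filterColumns: string[]
): boolean {
  // The filtered columns must form a leading prefix of the partition key order.
  return filterColumns.every((col, i) => partitionKeys[i] === col);
}

console.log(canFilterWithoutIndex(["userid", "collectionid"], ["userid"])); // true
console.log(
  canFilterWithoutIndex(["userid", "collectionid"], ["userid", "collectionid"])
); // true
console.log(canFilterWithoutIndex(["userid", "collectionid"], ["collectionid"])); // false
```

In the `false` case, `collectionid` would have to be added to the `indices` list before it could be used on its own in a filter.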
Additional Configuration Options[](#additional-configuration-options "Direct link to Additional Configuration Options")
------------------------------------------------------------------------------------------------------------------------
In the configuration document, further optional parameters are provided; their defaults are:
  ...,
  maxConcurrency: 25,
  batchSize: 1,
  withClause: "",
  ...
* `maxConcurrency`: How many concurrent requests will be sent to Cassandra at a given time.
* `batchSize`: How many documents will be sent on a single request to Cassandra. When using a value > 1, you should ensure your batch size will not exceed the Cassandra parameter `batch_size_fail_threshold_in_kb`. Batches are unlogged.
* `withClause`: Cassandra tables may be created with an optional `WITH` clause; this is generally not needed but provided for completeness.
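Putting the schema, index, and tuning options together, a complete configuration document might look like this (all names and values here are illustrative placeholders, not recommendations):

```typescript
// Illustrative configuration combining the options discussed on this page.
// Contact points, keyspace, table, and column names are placeholders.
const configConnection = {
  contactPoints: ["localhost"],
  localDataCenter: "datacenter1",
};

const config = {
  ...configConnection,
  keyspace: "test",
  table: "test",
  dimensions: 1536,
  vectorType: "cosine",
  primaryKey: { name: "id", type: "int" },
  metadataColumns: [{ name: "category", type: "text" }],
  // Needed for filtered queries on the non-partition column "category".
  indices: [{ name: "category", value: "(category)" }],
  maxConcurrency: 25,
  batchSize: 1,
  withClause: "",
};

console.log(config.keyspace); // test
```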
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/integrations/vectorstores/cloudflare_vectorize/
Cloudflare Vectorize
====================
If you're deploying your project in a Cloudflare worker, you can use [Cloudflare Vectorize](https://developers.cloudflare.com/vectorize/) with LangChain.js. It's a powerful and convenient option that's built directly into Cloudflare.
Setup[](#setup "Direct link to Setup")
---------------------------------------
Compatibility
Cloudflare Vectorize is currently in open beta, and requires a Cloudflare account on a paid plan to use.
After [setting up your project](https://developers.cloudflare.com/vectorize/get-started/intro/#prerequisites), create an index by running the following Wrangler command:
```bash
npx wrangler vectorize create <index_name> --preset @cf/baai/bge-small-en-v1.5
```
You can see a full list of options for the `vectorize` command [in the official documentation](https://developers.cloudflare.com/workers/wrangler/commands/#vectorize).
You'll then need to update your `wrangler.toml` file to include an entry for `[[vectorize]]`:
```toml
[[vectorize]]
binding = "VECTORIZE_INDEX"
index_name = "<index_name>"
```
Finally, you'll need to install the LangChain Cloudflare integration package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/cloudflare
# or
yarn add @langchain/cloudflare
# or
pnpm add @langchain/cloudflare
```
Usage[](#usage "Direct link to Usage")
---------------------------------------
Below is an example worker that adds documents to a vectorstore, queries it, or clears it depending on the path used. It also uses [Cloudflare Workers AI Embeddings](/v0.1/docs/integrations/text_embedding/cloudflare_ai/).
note
If running locally, be sure to run wrangler as `npx wrangler dev --remote`!
```toml
name = "langchain-test"
main = "worker.ts"
compatibility_date = "2024-01-10"

[[vectorize]]
binding = "VECTORIZE_INDEX"
index_name = "langchain-test"

[ai]
binding = "AI"
```
```typescript
// @ts-nocheck
import type {
  VectorizeIndex,
  Fetcher,
  Request,
} from "@cloudflare/workers-types";
import {
  CloudflareVectorizeStore,
  CloudflareWorkersAIEmbeddings,
} from "@langchain/cloudflare";

export interface Env {
  VECTORIZE_INDEX: VectorizeIndex;
  AI: Fetcher;
}

export default {
  async fetch(request: Request, env: Env) {
    const { pathname } = new URL(request.url);
    const embeddings = new CloudflareWorkersAIEmbeddings({
      binding: env.AI,
      model: "@cf/baai/bge-small-en-v1.5",
    });
    const store = new CloudflareVectorizeStore(embeddings, {
      index: env.VECTORIZE_INDEX,
    });
    if (pathname === "/") {
      const results = await store.similaritySearch("hello", 5);
      return Response.json(results);
    } else if (pathname === "/load") {
      // Upsertion by id is supported
      await store.addDocuments(
        [
          { pageContent: "hello", metadata: {} },
          { pageContent: "world", metadata: {} },
          { pageContent: "hi", metadata: {} },
        ],
        { ids: ["id1", "id2", "id3"] }
      );
      return Response.json({ success: true });
    } else if (pathname === "/clear") {
      await store.delete({ ids: ["id1", "id2", "id3"] });
      return Response.json({ success: true });
    }
    return Response.json({ error: "Not Found" }, { status: 404 });
  },
};
```
#### API Reference:
* [CloudflareVectorizeStore](https://api.js.langchain.com/classes/langchain_cloudflare.CloudflareVectorizeStore.html) from `@langchain/cloudflare`
* [CloudflareWorkersAIEmbeddings](https://api.js.langchain.com/classes/langchain_cloudflare.CloudflareWorkersAIEmbeddings.html) from `@langchain/cloudflare`
You can also pass a `filter` parameter to filter by previously loaded metadata. See [the official documentation](https://developers.cloudflare.com/vectorize/learning/metadata-filtering/) for information on the required format.
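Conceptually, an equality metadata filter keeps only those vectors whose metadata fields match the given values. The following is a minimal client-side sketch of that semantics, for illustration only; Vectorize evaluates filters server-side, and the exact filter syntax is described in the linked documentation:

```typescript
type Metadata = Record<string, string | number | boolean>;

// Illustrative equality-filter semantics (not the Vectorize implementation):
// a record matches if every key in the filter equals the record's metadata.
function matchesFilter(metadata: Metadata, filter: Metadata): boolean {
  return Object.entries(filter).every(([key, value]) => metadata[key] === value);
}

const docs = [
  { id: "id1", metadata: { lang: "en", year: 2024 } },
  { id: "id2", metadata: { lang: "fr", year: 2024 } },
];
const matched = docs
  .filter((d) => matchesFilter(d.metadata, { lang: "en" }))
  .map((d) => d.id);
console.log(matched); // ["id1"]
```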
https://js.langchain.com/v0.1/docs/integrations/vectorstores/azure_aisearch/
Azure AI Search
===============
[Azure AI Search](https://azure.microsoft.com/products/ai-services/ai-search) (formerly known as Azure Search and Azure Cognitive Search) is a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads on Azure. It also supports vector search using the [k-nearest neighbor](https://en.wikipedia.org/wiki/Nearest_neighbor_search) (kNN) algorithm, as well as [semantic search](https://learn.microsoft.com/azure/search/semantic-search-overview).
This vector store integration supports full text search, vector search and [hybrid search for best ranking performance](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/azure-cognitive-search-outperforming-vector-search-with-hybrid/ba-p/3929167).
Learn how to leverage the vector search capabilities of Azure AI Search from [this page](https://learn.microsoft.com/azure/search/vector-search-overview). If you don't have an Azure account, you can [create a free account](https://azure.microsoft.com/free/) to get started.
Setup[](#setup "Direct link to Setup")
---------------------------------------
You'll first need to install the `@azure/search-documents` SDK and the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install -S @langchain/community @azure/search-documents
# or
yarn add @langchain/community @azure/search-documents
# or
pnpm add @langchain/community @azure/search-documents
```
You'll also need to have an Azure AI Search instance running. You can deploy a free version on Azure Portal without any cost, following [this guide](https://learn.microsoft.com/azure/search/search-create-service-portal).
Once you have your instance running, make sure you have the endpoint and the admin key (query keys can be used only to search documents, not to index, update, or delete). The endpoint is the URL of your instance, which you can find in the Azure Portal under the "Overview" section of your instance. The admin key can be found under the "Keys" section of your instance. Then you need to set the following environment variables:
```bash
# Azure AI Search connection settings
AZURE_AISEARCH_ENDPOINT=
AZURE_AISEARCH_KEY=

# If you're using Azure OpenAI API, you'll need to set these variables
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_API_INSTANCE_NAME=
AZURE_OPENAI_API_DEPLOYMENT_NAME=
AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=
AZURE_OPENAI_API_VERSION=

# Or you can use the OpenAI API directly
OPENAI_API_KEY=
```
About hybrid search[](#about-hybrid-search "Direct link to About hybrid search")
---------------------------------------------------------------------------------
Hybrid search is a feature that combines the strengths of full text search and vector search to provide the best ranking performance. It's enabled by default in Azure AI Search vector stores, but you can select a different search query type by setting the `search.type` property when creating the vector store.
You can read more about hybrid search and how it may improve your search results in the [official documentation](https://learn.microsoft.com/azure/search/hybrid-search-overview).
In some scenarios like retrieval-augmented generation (RAG), you may want to enable **semantic ranking** in addition to hybrid search to improve the relevance of the search results. You can enable semantic ranking by setting the `search.type` property to `AzureAISearchQueryType.SemanticHybrid` when creating the vector store. Note that semantic ranking capabilities are only available in the Basic and higher pricing tiers, and subject to [regional availability](https://azure.microsoft.com/en-us/explore/global-infrastructure/products-by-region/?products=search).
You can read more about the performance of using semantic ranking with hybrid search in [this blog post](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/azure-cognitive-search-outperforming-vector-search-with-hybrid/ba-p/3929167).
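Hybrid search fuses the full text and vector result lists into a single ranking; Azure AI Search does this with Reciprocal Rank Fusion (RRF). The following is a simplified, self-contained sketch of RRF scoring for illustration, not Azure's implementation:

```typescript
// Illustrative Reciprocal Rank Fusion (RRF): each document's fused score is
// the sum over result lists of 1 / (k + rank), with rank starting at 1.
// k = 60 is the constant commonly cited for RRF.
function reciprocalRankFusion(
  rankings: string[][],
  k = 60
): [string, number][] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((docId, i) => {
      scores.set(docId, (scores.get(docId) ?? 0) + 1 / (k + i + 1));
    });
  }
  // highest fused score first
  return [...scores.entries()].sort((a, b) => b[1] - a[1]);
}

const keywordResults = ["docA", "docB", "docC"];
const vectorResults = ["docA", "docD", "docB"];
const fused = reciprocalRankFusion([keywordResults, vectorResults]);
console.log(fused[0][0]); // "docA" (top of both lists, so it scores highest)
```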
Example: index docs, vector search and LLM integration[](#example-index-docs-vector-search-and-llm-integration "Direct link to Example: index docs, vector search and LLM integration")
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Below is an example that indexes documents from a file in Azure AI Search, runs a hybrid search query, and finally uses a chain to answer a question in natural language based on the retrieved documents.
```typescript
import {
  AzureAISearchVectorStore,
  AzureAISearchQueryType,
} from "@langchain/community/vectorstores/azure_aisearch";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Load documents from file
const loader = new TextLoader("./state_of_the_union.txt");
const rawDocuments = await loader.load();
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 0,
});
const documents = await splitter.splitDocuments(rawDocuments);

// Create Azure AI Search vector store
const store = await AzureAISearchVectorStore.fromDocuments(
  documents,
  new OpenAIEmbeddings(),
  {
    search: {
      type: AzureAISearchQueryType.SimilarityHybrid,
    },
  }
);

// The first time you run this, the index will be created.
// You may need to wait a bit for the index to be created before you can
// perform a search, or you can create the index manually beforehand.

// Performs a similarity search
const resultDocuments = await store.similaritySearch(
  "What did the president say about Ketanji Brown Jackson?"
);

console.log("Similarity search results:");
console.log(resultDocuments[0].pageContent);
/*
  Tonight. I call on the Senate to: Pass the Freedom to Vote Act.
  Pass the John Lewis Voting Rights Act. And while you're at it, pass the
  Disclose Act so Americans can know who is funding our elections.

  Tonight, I'd like to honor someone who has dedicated his life to serve
  this country: Justice Stephen Breyer—an Army veteran, Constitutional
  scholar, and retiring Justice of the United States Supreme Court.
  Justice Breyer, thank you for your service.

  One of the most serious constitutional responsibilities a President has
  is nominating someone to serve on the United States Supreme Court. And I
  did that 4 days ago, when I nominated Circuit Court of Appeals Judge
  Ketanji Brown Jackson. One of our nation's top legal minds, who will
  continue Justice Breyer's legacy of excellence.
*/

// Use the store as part of a chain
const model = new ChatOpenAI({ model: "gpt-3.5-turbo-1106" });
const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  ["human", "{input}"],
]);

const combineDocsChain = await createStuffDocumentsChain({
  llm: model,
  prompt: questionAnsweringPrompt,
});

const chain = await createRetrievalChain({
  retriever: store.asRetriever(),
  combineDocsChain,
});

const response = await chain.invoke({
  input: "What is the president's top priority regarding prices?",
});

console.log("Chain response:");
console.log(response.answer);
/*
  The president's top priority is getting prices under control.
*/
```
#### API Reference:
* [AzureAISearchVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_azure_aisearch.AzureAISearchVectorStore.html) from `@langchain/community/vectorstores/azure_aisearch`
* [AzureAISearchQueryType](https://api.js.langchain.com/types/langchain_community_vectorstores_azure_aisearch.AzureAISearchQueryType.html) from `@langchain/community/vectorstores/azure_aisearch`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [createStuffDocumentsChain](https://api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents`
* [createRetrievalChain](https://api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
https://js.langchain.com/v0.1/docs/integrations/vectorstores/convex/
Convex
======
LangChain.js supports [Convex](https://convex.dev/) as a [vector store](https://docs.convex.dev/vector-search), and supports the standard similarity search.
Setup[](#setup "Direct link to Setup")
---------------------------------------
### Create project[](#create-project "Direct link to Create project")
Get a working [Convex](https://docs.convex.dev/) project set up, for example by using:
npm create convex@latest
### Add database accessors[](#add-database-accessors "Direct link to Add database accessors")
Add query and mutation helpers to `convex/langchain/db.ts`:
convex/langchain/db.ts
export * from "langchain/util/convex";
### Configure your schema[](#configure-your-schema "Direct link to Configure your schema")
Set up your schema (for vector indexing):
convex/schema.ts
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  documents: defineTable({
    embedding: v.array(v.number()),
    text: v.string(),
    metadata: v.any(),
  }).vectorIndex("byEmbedding", {
    vectorField: "embedding",
    dimensions: 1536,
  }),
});
Usage[](#usage "Direct link to Usage")
---------------------------------------
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
### Ingestion[](#ingestion "Direct link to Ingestion")
convex/myActions.ts
"use node";

import { ConvexVectorStore } from "@langchain/community/vectorstores/convex";
import { OpenAIEmbeddings } from "@langchain/openai";
import { action } from "./_generated/server.js";

export const ingest = action({
  args: {},
  handler: async (ctx) => {
    await ConvexVectorStore.fromTexts(
      ["Hello world", "Bye bye", "What's this?"],
      [{ prop: 2 }, { prop: 1 }, { prop: 3 }],
      new OpenAIEmbeddings(),
      { ctx }
    );
  },
});
#### API Reference:
* [ConvexVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_convex.ConvexVectorStore.html) from `@langchain/community/vectorstores/convex`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Search[](#search "Direct link to Search")
convex/myActions.ts
"use node";

import { ConvexVectorStore } from "@langchain/community/vectorstores/convex";
import { OpenAIEmbeddings } from "@langchain/openai";
import { v } from "convex/values";
import { action } from "./_generated/server.js";

export const search = action({
  args: {
    query: v.string(),
  },
  handler: async (ctx, args) => {
    const vectorStore = new ConvexVectorStore(new OpenAIEmbeddings(), { ctx });

    const resultOne = await vectorStore.similaritySearch(args.query, 1);
    console.log(resultOne);
  },
});
#### API Reference:
* [ConvexVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_convex.ConvexVectorStore.html) from `@langchain/community/vectorstores/convex`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* * *
#### Help us out by providing feedback on this documentation page:
[
Previous
Cloudflare Vectorize
](/v0.1/docs/integrations/vectorstores/cloudflare_vectorize/)[
Next
Couchbase
](/v0.1/docs/integrations/vectorstores/couchbase/)
* [Setup](#setup)
* [Create project](#create-project)
* [Add database accessors](#add-database-accessors)
* [Configure your schema](#configure-your-schema)
* [Usage](#usage)
* [Ingestion](#ingestion)
* [Search](#search)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/integrations/vectorstores/elasticsearch/
Elasticsearch
=============
Compatibility
Only available on Node.js.
[Elasticsearch](https://github.com/elastic/elasticsearch) is a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads. It also supports vector search using the [k-nearest neighbor](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) (kNN) algorithm, as well as [custom models for Natural Language Processing](https://www.elastic.co/blog/how-to-deploy-nlp-text-embeddings-and-vector-search) (NLP). You can read more about vector search support in Elasticsearch [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/knn-search.html).
LangChain.js accepts [@elastic/elasticsearch](https://github.com/elastic/elasticsearch-js) as the client for the Elasticsearch vector store.
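As a mental model (not Elasticsearch's actual implementation), a kNN vector search ranks stored embeddings by their similarity to a query embedding and returns the top `k` matches. A minimal self-contained sketch using cosine similarity:

```typescript
// Conceptual sketch of what a kNN vector search computes.
type Indexed = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function knn(index: Indexed[], query: number[], k: number): Indexed[] {
  // Rank all stored embeddings by similarity to the query, keep the top k.
  return [...index]
    .sort((x, y) => cosine(y.embedding, query) - cosine(x.embedding, query))
    .slice(0, k);
}

// Toy 3-dimensional "embeddings"; real OpenAI embeddings have 1536 dimensions.
const index: Indexed[] = [
  { text: "fox", embedding: [1, 0, 0] },
  { text: "dog", embedding: [0.9, 0.1, 0] },
  { text: "lorem", embedding: [0, 0, 1] },
];
console.log(knn(index, [1, 0, 0], 2).map((d) => d.text)); // → ["fox", "dog"]
```

Elasticsearch performs this ranking at scale with approximate nearest-neighbor indexes, so the brute-force sort above is only illustrative.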
Setup[](#setup "Direct link to Setup")
---------------------------------------
* npm
* Yarn
* pnpm
npm install -S @elastic/elasticsearch
yarn add @elastic/elasticsearch
pnpm add @elastic/elasticsearch
You'll also need to have an Elasticsearch instance running. You can use the [official Docker image](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html) to get started, or you can use [Elastic Cloud](https://www.elastic.co/cloud/), Elastic's official cloud service.
For connecting to Elastic Cloud you can read the documentation reported [here](https://www.elastic.co/guide/en/kibana/current/api-keys.html) for obtaining an API key.
Example: index docs, vector search and LLM integration[](#example-index-docs-vector-search-and-llm-integration "Direct link to Example: index docs, vector search and LLM integration")
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Below is an example that indexes 4 documents in Elasticsearch, runs a vector search query, and finally uses an LLM to answer a question in natural language based on the retrieved documents.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
import { Client, ClientOptions } from "@elastic/elasticsearch";
import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { VectorDBQAChain } from "langchain/chains";
import {
  ElasticClientArgs,
  ElasticVectorSearch,
} from "@langchain/community/vectorstores/elasticsearch";
import { Document } from "@langchain/core/documents";

// To run this, first start Elastic's docker container with `docker-compose up -d --build`
export async function run() {
  const config: ClientOptions = {
    node: process.env.ELASTIC_URL ?? "http://127.0.0.1:9200",
  };
  if (process.env.ELASTIC_API_KEY) {
    config.auth = {
      apiKey: process.env.ELASTIC_API_KEY,
    };
  } else if (process.env.ELASTIC_USERNAME && process.env.ELASTIC_PASSWORD) {
    config.auth = {
      username: process.env.ELASTIC_USERNAME,
      password: process.env.ELASTIC_PASSWORD,
    };
  }
  const clientArgs: ElasticClientArgs = {
    client: new Client(config),
    indexName: process.env.ELASTIC_INDEX ?? "test_vectorstore",
  };

  // Index documents
  const docs = [
    new Document({
      metadata: { foo: "bar" },
      pageContent: "Elasticsearch is a powerful vector db",
    }),
    new Document({
      metadata: { foo: "bar" },
      pageContent: "the quick brown fox jumped over the lazy dog",
    }),
    new Document({
      metadata: { baz: "qux" },
      pageContent: "lorem ipsum dolor sit amet",
    }),
    new Document({
      metadata: { baz: "qux" },
      pageContent:
        "Elasticsearch a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads.",
    }),
  ];

  const embeddings = new OpenAIEmbeddings();

  // await ElasticVectorSearch.fromDocuments(docs, embeddings, clientArgs);
  const vectorStore = new ElasticVectorSearch(embeddings, clientArgs);

  // Also supports an additional {ids: []} parameter for upsertion
  const ids = await vectorStore.addDocuments(docs);

  /* Search the vector DB independently with meta filters */
  const results = await vectorStore.similaritySearch("fox jump", 1);
  console.log(JSON.stringify(results, null, 2));
  /* [
    {
      "pageContent": "the quick brown fox jumped over the lazy dog",
      "metadata": { "foo": "bar" }
    }
  ] */

  /* Use as part of a chain (currently no metadata filters) for LLM query */
  const model = new OpenAI();
  const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
    k: 1,
    returnSourceDocuments: true,
  });
  const response = await chain.invoke({ query: "What is Elasticsearch?" });
  console.log(JSON.stringify(response, null, 2));
  /* {
    "text": " Elasticsearch is a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads.",
    "sourceDocuments": [
      {
        "pageContent": "Elasticsearch a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads.",
        "metadata": { "baz": "qux" }
      }
    ]
  } */

  await vectorStore.delete({ ids });

  const response2 = await chain.invoke({ query: "What is Elasticsearch?" });
  console.log(JSON.stringify(response2, null, 2));
  /* [] */
}
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [VectorDBQAChain](https://api.js.langchain.com/classes/langchain_chains.VectorDBQAChain.html) from `langchain/chains`
* [ElasticClientArgs](https://api.js.langchain.com/interfaces/langchain_community_vectorstores_elasticsearch.ElasticClientArgs.html) from `@langchain/community/vectorstores/elasticsearch`
* [ElasticVectorSearch](https://api.js.langchain.com/classes/langchain_community_vectorstores_elasticsearch.ElasticVectorSearch.html) from `@langchain/community/vectorstores/elasticsearch`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
https://js.langchain.com/v0.1/docs/integrations/vectorstores/googlevertexai/
Google Vertex AI Matching Engine
================================
Compatibility
Only available on Node.js.
The Google Vertex AI Matching Engine "provides the industry's leading high-scale low latency vector database. These vector databases are commonly referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service."
Setup[](#setup "Direct link to Setup")
---------------------------------------
caution
This module expects an endpoint and a deployed index to already be created, since creation takes close to one hour. To learn more, see the LangChain python documentation [Create Index and deploy it to an Endpoint](https://python.langchain.com/docs/integrations/vectorstores/matchingengine#create-index-and-deploy-it-to-an-endpoint).
Before running this code, you should make sure the Vertex AI API is enabled for the relevant project in your Google Cloud dashboard and that you've authenticated to Google Cloud using one of these methods:
* You are logged into an account (using `gcloud auth application-default login`) that has access to that project.
* You are running on a machine using a service account that has access to the project.
* You have downloaded the credentials for a service account that has access to the project and set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of this file.
Install the authentication library with:
* npm
* Yarn
* pnpm
npm install google-auth-library
yarn add google-auth-library
pnpm add google-auth-library
The Matching Engine does not store the actual document contents, only embeddings. Therefore, you'll need a docstore. The below example uses Google Cloud Storage, which requires the following:
* npm
* Yarn
* pnpm
npm install @google-cloud/storage
yarn add @google-cloud/storage
pnpm add @google-cloud/storage
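To see why a docstore is needed: the vector index resolves a query to matching ids, and the docstore maps those ids back to the original document contents. A minimal in-memory sketch of that role (the types and class here are illustrative, not the library's actual `InMemoryDocstore` implementation):

```typescript
// Illustrative sketch: the index stores only embeddings keyed by id;
// the docstore resolves ids back to documents. GoogleCloudStorageDocstore
// plays the same role against a GCS bucket.
type Doc = { pageContent: string; metadata?: Record<string, unknown> };

class SimpleDocstore {
  private docs = new Map<string, Doc>();

  add(id: string, doc: Doc): void {
    this.docs.set(id, doc);
  }

  search(id: string): Doc {
    const doc = this.docs.get(id);
    if (doc === undefined) throw new Error(`ID ${id} not found.`);
    return doc;
  }
}

const store = new SimpleDocstore();
store.add("doc-1", { pageContent: "this apple", metadata: { color: "red" } });

// After a vector query returns matching ids, resolve them to documents:
console.log(store.search("doc-1").pageContent); // → "this apple"
```

The key design point is durability: an in-memory map disappears with the process, so a persistent store such as Cloud Storage is needed for anything beyond testing.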
Usage[](#usage "Direct link to Usage")
---------------------------------------
### Initializing the engine[](#initializing-the-engine "Direct link to Initializing the engine")
When creating the `MatchingEngine` object, you'll need some information about the matching engine configuration. You can get this information from the Cloud Console for Matching Engine:
* The id for the Index
* The id for the Index Endpoint
You will also need a document store. While an `InMemoryDocstore` is fine for initial testing, you will want to use something like a [GoogleCloudStorageDocstore](https://api.js.langchain.com/classes/langchain_stores_doc_gcs.GoogleCloudStorageDocstore.html) to store documents more permanently.
import { MatchingEngine } from "langchain/vectorstores/googlevertexai";
import { Document } from "langchain/document";
import { SyntheticEmbeddings } from "langchain/embeddings/fake";
import { GoogleCloudStorageDocstore } from "langchain/stores/doc/gcs";

const embeddings = new SyntheticEmbeddings({
  vectorSize: Number.parseInt(
    process.env.SYNTHETIC_EMBEDDINGS_VECTOR_SIZE ?? "768",
    10
  ),
});

const store = new GoogleCloudStorageDocstore({
  bucket: process.env.GOOGLE_CLOUD_STORAGE_BUCKET!,
});

const config = {
  index: process.env.GOOGLE_VERTEXAI_MATCHINGENGINE_INDEX!,
  indexEndpoint: process.env.GOOGLE_VERTEXAI_MATCHINGENGINE_INDEXENDPOINT!,
  apiVersion: "v1beta1",
  docstore: store,
};

const engine = new MatchingEngine(embeddings, config);
### Adding documents[](#adding-documents "Direct link to Adding documents")
const doc = new Document({ pageContent: "this" });
await engine.addDocuments([doc]);
Any metadata in a document is converted into Matching Engine "allow list" values that can be used to filter during a query.
const documents = [
  new Document({
    pageContent: "this apple",
    metadata: {
      color: "red",
      category: "edible",
    },
  }),
  new Document({
    pageContent: "this blueberry",
    metadata: {
      color: "blue",
      category: "edible",
    },
  }),
  new Document({
    pageContent: "this firetruck",
    metadata: {
      color: "red",
      category: "machine",
    },
  }),
];

// Add all our documents
await engine.addDocuments(documents);
Each document is also assumed to have an "id" field. If it is not set, an ID will be assigned and returned as part of the Document.
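A sketch of that assumed id behavior (illustrative only, not the library's actual code): documents without an id get one generated, and the ids are surfaced on the returned documents so callers can later delete or update by id.

```typescript
// Illustrative sketch of id assignment; the helper and types are hypothetical.
import { randomUUID } from "node:crypto";

type IdDoc = { id?: string; pageContent: string };

function assignIds(docs: IdDoc[]): IdDoc[] {
  // Preserve an existing id; otherwise generate one.
  return docs.map((doc) => ({ ...doc, id: doc.id ?? randomUUID() }));
}

const withIds = assignIds([
  { pageContent: "this apple" },
  { id: "my-id", pageContent: "this firetruck" },
]);
console.log(withIds.every((d) => typeof d.id === "string")); // → true
console.log(withIds[1].id); // → "my-id"
```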
### Querying documents[](#querying-documents "Direct link to Querying documents")
A straightforward k-nearest-neighbor search that returns all results can be done using any of the standard methods:
const results = await engine.similaritySearch("this");
### Querying documents with a filter / restriction[](#querying-documents-with-a-filter--restriction "Direct link to Querying documents with a filter / restriction")
We can limit what documents are returned based on the metadata that was set for the document. So if we just wanted to limit the results to those with a red color, we can do:
import { Restriction } from "langchain/vectorstores/googlevertexai";

const redFilter: Restriction[] = [
  {
    namespace: "color",
    allowList: ["red"],
  },
];

const redResults = await engine.similaritySearch("this", 4, redFilter);
If we wanted to do something more complicated, like things that are red, but not edible:
const filter: Restriction[] = [
  {
    namespace: "color",
    allowList: ["red"],
  },
  {
    namespace: "category",
    denyList: ["edible"],
  },
];

const results = await engine.similaritySearch("this", 4, filter);
### Deleting documents[](#deleting-documents "Direct link to Deleting documents")
Documents are deleted by ID.
import { IdDocument } from "langchain/vectorstores/googlevertexai";

const oldResults: IdDocument[] = await engine.similaritySearch("this", 10);
const oldIds = oldResults.map((doc) => doc.id!);
await engine.delete({ ids: oldIds });
https://js.langchain.com/v0.1/docs/integrations/vectorstores/couchbase/
* [Stores](/v0.1/docs/integrations/stores/)
* [](/v0.1/)
* [Components](/v0.1/docs/integrations/components/)
* [Vector stores](/v0.1/docs/integrations/vectorstores/)
* Couchbase
Couchbase
=========
[Couchbase](http://couchbase.com/) is an award-winning distributed NoSQL cloud database that delivers unmatched versatility, performance, scalability, and financial value for all of your cloud, mobile, AI, and edge computing applications. Couchbase embraces AI with coding assistance for developers and vector search for their applications.
Vector Search is a part of the [Full Text Search Service](https://docs.couchbase.com/server/current/learn/services-and-indexes/services/search-service.html) (Search Service) in Couchbase.
This tutorial explains how to use Vector Search in Couchbase. You can work with both [Couchbase Capella](https://www.couchbase.com/products/capella/) and your self-managed Couchbase Server.
Installation[](#installation "Direct link to Installation")
------------------------------------------------------------
You will need the `couchbase` SDK and the LangChain community package to use the Couchbase vector store. For this tutorial, we will use OpenAI embeddings.
* npm
* Yarn
* pnpm
npm install couchbase @langchain/openai @langchain/community
yarn add couchbase @langchain/openai @langchain/community
pnpm add couchbase @langchain/openai @langchain/community
Create Couchbase Connection Object[](#create-couchbase-connection-object "Direct link to Create Couchbase Connection Object")
------------------------------------------------------------------------------------------------------------------------------
We create a connection to the Couchbase cluster initially and then pass the cluster object to the Vector Store. Here, we are connecting using a username and password. You can also connect to your cluster using any other supported method.
For more information on connecting to the Couchbase cluster, please check the [Node SDK documentation](https://docs.couchbase.com/nodejs-sdk/current/hello-world/start-using-sdk.html#connect).
```typescript
import { Cluster } from "couchbase";

const connectionString = "couchbase://localhost"; // or couchbases://localhost if you are using TLS
const dbUsername = "Administrator"; // valid database user with read access to the bucket being queried
const dbPassword = "Password"; // password for the database user

const couchbaseClient = await Cluster.connect(connectionString, {
  username: dbUsername,
  password: dbPassword,
  configProfile: "wanDevelopment",
});
```
Create the Search Index[](#create-the-search-index "Direct link to Create the Search Index")
---------------------------------------------------------------------------------------------
Currently, the Search index needs to be created from the Couchbase Capella or Server UI or using the REST interface.
For this example, let us use the Import Index feature of the Search Service in the UI.
Let us define a Search index with the name `vector-index` on the testing bucket. We are defining an index on the `testing` bucket's `_default` scope on the `_default` collection with the vector field set to `embedding` with 1536 dimensions and the text field set to `text`. We are also indexing and storing all the fields under `metadata` in the document as a dynamic mapping to account for varying document structures. The similarity metric is set to `dot_product`.
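The `dot_product` similarity metric named above scores a query vector against a stored `embedding` by summing the element-wise products of the two vectors. A minimal sketch of the computation (illustrative only, not the server implementation):

```typescript
// Sketch of the dot_product similarity metric the index definition selects.
// Higher scores mean more similar; both vectors must have the index's
// dimensionality (1536 in the definition above).
function dotProduct(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error("vectors must have the same number of dimensions");
  }
  return a.reduce((sum, value, i) => sum + value * b[i], 0);
}

console.log(dotProduct([1, 2, 3], [4, 5, 6])); // 1*4 + 2*5 + 3*6 = 32
```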
### How to Import an Index to the Full Text Search service?[](#how-to-import-an-index-to-the-full-text-search-service "Direct link to How to Import an Index to the Full Text Search service?")
* [Couchbase Server](https://docs.couchbase.com/server/current/search/import-search-index.html)
* Click on Search -> Add Index -> Import
* Copy the following Index definition in the Import screen
* Click on Create Index to create the index.
* [Couchbase Capella](https://docs.couchbase.com/cloud/search/import-search-index.html)
* Copy the following index definition to a new file `index.json`
* Import the file in Capella using the instructions in the documentation.
* Click on Create Index to create the index.
### Index Definition[](#index-definition "Direct link to Index Definition")
```json
{
  "name": "vector-index",
  "type": "fulltext-index",
  "params": {
    "doc_config": {
      "docid_prefix_delim": "",
      "docid_regexp": "",
      "mode": "type_field",
      "type_field": "type"
    },
    "mapping": {
      "default_analyzer": "standard",
      "default_datetime_parser": "dateTimeOptional",
      "default_field": "_all",
      "default_mapping": {
        "dynamic": true,
        "enabled": true,
        "properties": {
          "metadata": {
            "dynamic": true,
            "enabled": true
          },
          "embedding": {
            "enabled": true,
            "dynamic": false,
            "fields": [
              {
                "dims": 1536,
                "index": true,
                "name": "embedding",
                "similarity": "dot_product",
                "type": "vector",
                "vector_index_optimized_for": "recall"
              }
            ]
          },
          "text": {
            "enabled": true,
            "dynamic": false,
            "fields": [
              {
                "index": true,
                "name": "text",
                "store": true,
                "type": "text"
              }
            ]
          }
        }
      },
      "default_type": "_default",
      "docvalues_dynamic": false,
      "index_dynamic": true,
      "store_dynamic": true,
      "type_field": "_type"
    },
    "store": {
      "indexType": "scorch",
      "segmentVersion": 16
    }
  },
  "sourceType": "gocbcore",
  "sourceName": "testing",
  "sourceParams": {},
  "planParams": {
    "maxPartitionsPerPIndex": 103,
    "indexPartitions": 10,
    "numReplicas": 0
  }
}
```
For more details on how to create a search index with support for Vector fields, please refer to the documentation:
* [Couchbase Capella](https://docs.couchbase.com/cloud/search/create-search-indexes.html)
* [Couchbase Server](https://docs.couchbase.com/server/current/search/create-search-indexes.html)
To use this vector store, a `CouchbaseVectorStoreArgs` object needs to be configured. `textKey` and `embeddingKey` are optional fields, required only if you want to use keys other than the defaults.
```typescript
const couchbaseConfig: CouchbaseVectorStoreArgs = {
  cluster: couchbaseClient,
  bucketName: "testing",
  scopeName: "_default",
  collectionName: "_default",
  indexName: "vector-index",
  textKey: "text",
  embeddingKey: "embedding",
};
```
Create Vector Store[](#create-vector-store "Direct link to Create Vector Store")
---------------------------------------------------------------------------------
We create the vector store object with the cluster information and the search index name.
```typescript
const store = await CouchbaseVectorStore.initialize(
  embeddings, // embeddings object to create embeddings from text
  couchbaseConfig
);
```
Basic Vector Search Example[](#basic-vector-search-example "Direct link to Basic Vector Search Example")
---------------------------------------------------------------------------------------------------------
The following example showcases how to use Couchbase vector search and perform similarity search. For this example, we are going to load the "state\_of\_the\_union.txt" file via the TextLoader, chunk the text into 500-character chunks with no overlap, and index all these chunks into Couchbase.
After the data is indexed, we perform a simple query to find the top 4 chunks that are similar to the query "What did president say about Ketanji Brown Jackson". At the end, the example also shows how to get the similarity score.
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import {
  CouchbaseVectorStoreArgs,
  CouchbaseVectorStore,
} from "@langchain/community/vectorstores/couchbase";
import { Cluster } from "couchbase";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { CharacterTextSplitter } from "langchain/text_splitter";

const connectionString =
  process.env.COUCHBASE_DB_CONN_STR ?? "couchbase://localhost";
const databaseUsername = process.env.COUCHBASE_DB_USERNAME ?? "Administrator";
const databasePassword = process.env.COUCHBASE_DB_PASSWORD ?? "Password";

// Load documents from file
const loader = new TextLoader("./state_of_the_union.txt");
const rawDocuments = await loader.load();
const splitter = new CharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 0,
});
const docs = await splitter.splitDocuments(rawDocuments);

const couchbaseClient = await Cluster.connect(connectionString, {
  username: databaseUsername,
  password: databasePassword,
  configProfile: "wanDevelopment",
});

// OpenAI API key is required to use OpenAIEmbeddings; some other embeddings may also be used
const embeddings = new OpenAIEmbeddings({
  apiKey: process.env.OPENAI_API_KEY,
});

const couchbaseConfig: CouchbaseVectorStoreArgs = {
  cluster: couchbaseClient,
  bucketName: "testing",
  scopeName: "_default",
  collectionName: "_default",
  indexName: "vector-index",
  textKey: "text",
  embeddingKey: "embedding",
};

const store = await CouchbaseVectorStore.fromDocuments(
  docs,
  embeddings,
  couchbaseConfig
);

const query = "What did president say about Ketanji Brown Jackson";
const resultsSimilaritySearch = await store.similaritySearch(query);
console.log("resulting documents: ", resultsSimilaritySearch[0]);

// Similarity Search With Score
const resultsSimilaritySearchWithScore = await store.similaritySearchWithScore(
  query,
  1
);
console.log("resulting documents: ", resultsSimilaritySearchWithScore[0][0]);
console.log("resulting scores: ", resultsSimilaritySearchWithScore[0][1]);

const result = await store.similaritySearch(query, 1, {
  fields: ["metadata.source"],
});
console.log(result[0]);
```
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [CouchbaseVectorStoreArgs](https://api.js.langchain.com/interfaces/langchain_community_vectorstores_couchbase.CouchbaseVectorStoreArgs.html) from `@langchain/community/vectorstores/couchbase`
* [CouchbaseVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_couchbase.CouchbaseVectorStore.html) from `@langchain/community/vectorstores/couchbase`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [CharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.CharacterTextSplitter.html) from `langchain/text_splitter`
Specifying Fields to Return[](#specifying-fields-to-return "Direct link to Specifying Fields to Return")
---------------------------------------------------------------------------------------------------------
You can specify the fields to return from the document using the `fields` parameter in the filter during searches. These fields are returned as part of the `metadata` object. You can fetch any field that is stored in the index. The `textKey` of the document is returned as part of the document's `pageContent`.
If you do not specify any fields to be fetched, all the fields stored in the index are returned.
If you want to fetch one of the fields in the metadata, you need to specify it using dot notation (`.`). For example, to fetch the `source` field in the metadata, use `metadata.source`.
```typescript
const result = await store.similaritySearch(query, 1, {
  fields: ["metadata.source"],
});
console.log(result[0]);
```
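A dotted field name like `metadata.source` simply addresses a nested key on the returned document object. A sketch with a hypothetical `getByPath` helper (not part of the library) shows the idea:

```typescript
// Hypothetical helper illustrating how a dotted field name such as
// "metadata.source" maps onto the nested metadata of a returned document.
function getByPath(obj: Record<string, any>, path: string): unknown {
  return path
    .split(".")
    .reduce<any>((acc, key) => (acc == null ? undefined : acc[key]), obj);
}

const doc = {
  pageContent: "Madam Speaker, Madam Vice President...",
  metadata: { source: "./state_of_the_union.txt" },
};
console.log(getByPath(doc, "metadata.source")); // "./state_of_the_union.txt"
```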
Hybrid Search[](#hybrid-search "Direct link to Hybrid Search")
---------------------------------------------------------------
Couchbase allows you to do hybrid searches by combining vector search results with searches on non-vector fields of the document like the `metadata` object.
The results will be based on the combination of the results from both vector search and the searches supported by full text search service. The scores of each of the component searches are added up to get the total score of the result.
To perform hybrid searches, there is an optional key, `searchOptions`, in the `filter` parameter that can be passed to all the similarity searches.
The different search/query possibilities for the `searchOptions` can be found [here](https://docs.couchbase.com/server/current/search/search-request-params.html#query-object).
### Create Diverse Metadata for Hybrid Search[](#create-diverse-metadata-for-hybrid-search "Direct link to Create Diverse Metadata for Hybrid Search")
In order to simulate hybrid search, let us create some random metadata from the existing documents. We uniformly add three fields to the metadata: `date` between 2010 & 2020, `rating` between 1 & 5, and `author` set to either John Doe or Jane Doe. We will also declare a few sample queries.
```typescript
for (let i = 0; i < docs.length; i += 1) {
  docs[i].metadata.date = `${2010 + (i % 10)}-01-01`;
  docs[i].metadata.rating = 1 + (i % 5);
  docs[i].metadata.author = ["John Doe", "Jane Doe"][i % 2];
}

const store = await CouchbaseVectorStore.fromDocuments(
  docs,
  embeddings,
  couchbaseConfig
);

const query = "What did the president say about Ketanji Brown Jackson";
const independenceQuery = "Any mention about independence?";
```
### Example: Search by Exact Value[](#example-search-by-exact-value "Direct link to Example: Search by Exact Value")
We can search for exact matches on a textual field like the author in the `metadata` object.
```typescript
const exactValueResult = await store.similaritySearch(query, 4, {
  fields: ["metadata.author"],
  searchOptions: {
    query: { field: "metadata.author", match: "John Doe" },
  },
});
console.log(exactValueResult[0]);
```
### Example: Search by Partial Match[](#example-search-by-partial-match "Direct link to Example: Search by Partial Match")
We can search for partial matches by specifying a fuzziness for the search. This is useful when you want to search for slight variations or misspellings of a search query.
Here, "Johny" is close (fuzziness of 1) to "John Doe".
```typescript
const partialMatchResult = await store.similaritySearch(query, 4, {
  fields: ["metadata.author"],
  searchOptions: {
    query: { field: "metadata.author", match: "Johny", fuzziness: 1 },
  },
});
console.log(partialMatchResult[0]);
```
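Fuzziness corresponds to edit (Levenshtein) distance between terms: a fuzziness of 1 matches terms within one insertion, deletion, or substitution. A sketch of why "Johny" falls within distance 1 of the indexed token "John" (illustrative only; the Search Service applies fuzziness per analyzed token server-side):

```typescript
// Illustrative Levenshtein (edit) distance between two terms.
function levenshtein(a: string, b: string): number {
  // dp[i][j] = edit distance between a[0..i) and b[0..j)
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i += 1) {
    for (let j = 1; j <= b.length; j += 1) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

console.log(levenshtein("johny", "john")); // 1 -> within fuzziness 1
```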
### Example: Search by Date Range Query[](#example-search-by-date-range-query "Direct link to Example: Search by Date Range Query")
We can search for documents that are within a date range query on a date field like `metadata.date`.
```typescript
const dateRangeResult = await store.similaritySearch(independenceQuery, 4, {
  fields: ["metadata.date", "metadata.author"],
  searchOptions: {
    query: {
      start: "2016-12-31",
      end: "2017-01-02",
      inclusiveStart: true,
      inclusiveEnd: false,
      field: "metadata.date",
    },
  },
});
console.log(dateRangeResult[0]);
```
### Example: Search by Numeric Range Query[](#example-search-by-numeric-range-query "Direct link to Example: Search by Numeric Range Query")
We can search for documents that are within a range for a numeric field like `metadata.rating`.
```typescript
const ratingRangeResult = await store.similaritySearch(independenceQuery, 4, {
  fields: ["metadata.rating"],
  searchOptions: {
    query: {
      min: 3,
      max: 5,
      inclusiveMin: false,
      inclusiveMax: true,
      field: "metadata.rating",
    },
  },
});
console.log(ratingRangeResult[0]);
```
### Example: Combining Multiple Search Conditions[](#example-combining-multiple-search-conditions "Direct link to Example: Combining Multiple Search Conditions")
Different queries can be combined using AND (conjuncts) or OR (disjuncts) operators.
In this example, we are checking for documents with a rating between 3 & 4 and dated between 2015 & 2018.
```typescript
const multipleConditionsResult = await store.similaritySearch(query, 4, {
  fields: ["metadata.rating", "metadata.date"],
  searchOptions: {
    query: {
      conjuncts: [
        { min: 3, max: 4, inclusive_max: true, field: "metadata.rating" },
        { start: "2016-12-31", end: "2017-01-02", field: "metadata.date" },
      ],
    },
  },
});
console.log(multipleConditionsResult[0]);
```
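The filtering semantics of `conjuncts` can be sketched as a logical AND over the component predicates. The following client-side sketch is a simplification under stated assumptions (the real evaluation and scoring happen server-side; here `min` is treated as exclusive and ISO dates are compared lexicographically):

```typescript
// Simplified illustration of conjuncts semantics: every component
// predicate must hold (logical AND) for a document's metadata.
type RangePredicate =
  | { field: string; min: number; max: number; inclusiveMax?: boolean }
  | { field: string; start: string; end: string };

function matchesConjuncts(
  metadata: Record<string, any>,
  conjuncts: RangePredicate[]
): boolean {
  return conjuncts.every((p) => {
    const value = metadata[p.field.replace("metadata.", "")];
    if ("min" in p) {
      // Numeric range; min exclusive here, max controlled by inclusiveMax.
      const upper = p.inclusiveMax ? value <= p.max : value < p.max;
      return value > p.min && upper;
    }
    // Date range; ISO-8601 strings compare correctly lexicographically.
    return value >= p.start && value <= p.end;
  });
}

const metadata = { rating: 4, date: "2017-01-01" };
console.log(
  matchesConjuncts(metadata, [
    { field: "metadata.rating", min: 3, max: 4, inclusiveMax: true },
    { field: "metadata.date", start: "2016-12-31", end: "2017-01-02" },
  ])
); // true
```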
### Other Queries[](#other-queries "Direct link to Other Queries")
Similarly, you can use any of the supported query methods like Geo Distance, Polygon Search, Wildcard, Regular Expressions, etc., in the `searchOptions` key of the `filter` parameter. Please refer to the documentation for more details on the available query methods and their syntax.
* [Couchbase Capella](https://docs.couchbase.com/cloud/search/search-request-params.html#query-object)
* [Couchbase Server](https://docs.couchbase.com/server/current/search/search-request-params.html#query-object)
Frequently Asked Questions
==========================
Question: Should I create the Search index before creating the CouchbaseVectorStore object?[](#question-should-i-create-the-search-index-before-creating-the-couchbasevectorstore-object "Direct link to Question: Should I create the Search index before creating the CouchbaseVectorStore object?")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Yes, currently you need to create the Search index before creating the `CouchbaseVectorStore` object.
Question: I am not seeing all the fields that I specified in my search results.[](#question-i-am-not-seeing-all-the-fields-that-i-specified-in-my-search-results "Direct link to Question: I am not seeing all the fields that I specified in my search results.")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
In Couchbase, we can only return the fields stored in the Search index. Please ensure that the field that you are trying to access in the search results is part of the Search index.
One way to handle this is to index and store a document's fields dynamically in the index.
* In Capella, you need to go to "Advanced Mode" then under the chevron "General Settings" you can check "\[X\] Store Dynamic Fields" or "\[X\] Index Dynamic Fields"
* In Couchbase Server, in the Index Editor (not Quick Editor) under the chevron "Advanced" you can check "\[X\] Store Dynamic Fields" or "\[X\] Index Dynamic Fields"
Note that these options will increase the size of the index.
For more details on dynamic mappings, please refer to the [documentation](https://docs.couchbase.com/cloud/search/customize-index.html).
Question: I am unable to see the metadata object in my search results.[](#question-i-am-unable-to-see-the-metadata-object-in-my-search-results "Direct link to Question: I am unable to see the metadata object in my search results.")
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
This is most likely due to the `metadata` field in the document not being indexed and/or stored by the Couchbase Search index. In order to index the `metadata` field in the document, you need to add it to the index as a child mapping.
If you select to map all the fields in the mapping, you will be able to search by all metadata fields. Alternatively, to optimize the index, you can select the specific fields inside `metadata` object to be indexed. You can refer to the [docs](https://docs.couchbase.com/cloud/search/customize-index.html) to learn more about indexing child mappings.
To create Child Mappings, you can refer to the following docs:
* [Couchbase Capella](https://docs.couchbase.com/cloud/search/create-child-mapping.html)
* [Couchbase Server](https://docs.couchbase.com/server/current/fts/fts-creating-index-from-UI-classic-editor-dynamic.html)
Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/vectorstores/hanavector/
SAP HANA Cloud Vector Engine
============================
[SAP HANA Cloud Vector Engine](https://www.sap.com/events/teched/news-guide/ai.html#article8) is a vector store fully integrated into the `SAP HANA Cloud database`.
Setup[](#setup "Direct link to Setup")
---------------------------------------
You'll first need to install either the [`@sap/hana-client`](https://www.npmjs.com/package/@sap/hana-client) or the [`hdb`](https://www.npmjs.com/package/hdb) package, and the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install -S @langchain/community @sap/hana-client
# or
npm install -S @langchain/community hdb

yarn add @langchain/community @sap/hana-client
# or
yarn add @langchain/community hdb

pnpm add @langchain/community @sap/hana-client
# or
pnpm add @langchain/community hdb
You'll also need a database connection to a HANA Cloud instance.
```
OPENAI_API_KEY = "Your OpenAI API key"
HANA_HOST = "HANA_DB_ADDRESS"
HANA_PORT = "HANA_DB_PORT"
HANA_UID = "HANA_DB_USER"
HANA_PWD = "HANA_DB_PASSWORD"
```
Create a new index from texts[](#create-a-new-index-from-texts "Direct link to Create a new index from texts")
---------------------------------------------------------------------------------------------------------------
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import hanaClient from "hdb";
import {
  HanaDB,
  HanaDBArgs,
} from "@langchain/community/vectorstores/hanavector";

const connectionParams = {
  host: process.env.HANA_HOST,
  port: process.env.HANA_PORT,
  user: process.env.HANA_UID,
  password: process.env.HANA_PWD,
  // useCesu8 : false
};
const client = hanaClient.createClient(connectionParams);

// Connect to the HANA database
await new Promise<void>((resolve, reject) => {
  client.connect((err: Error) => {
    if (err) {
      reject(err);
    } else {
      console.log("Connected to SAP HANA successfully.");
      resolve();
    }
  });
});

const embeddings = new OpenAIEmbeddings();
const args: HanaDBArgs = {
  connection: client,
  tableName: "test_fromTexts",
};

// This call will create a table "test_fromTexts" if it does not exist;
// if it exists, the values will be appended to the table.
const vectorStore = await HanaDB.fromTexts(
  ["Bye bye", "Hello world", "hello nice world"],
  [
    { id: 2, name: "2" },
    { id: 1, name: "1" },
    { id: 3, name: "3" },
  ],
  embeddings,
  args
);

const response = await vectorStore.similaritySearch("hello world", 2);
console.log(response);
/*
  This result assumes no table "test_fromTexts" existed in the database.
  [
    { pageContent: 'Hello world', metadata: { id: 1, name: '1' } },
    { pageContent: 'hello nice world', metadata: { id: 3, name: '3' } }
  ]
*/
client.disconnect();
```
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [HanaDB](https://api.js.langchain.com/classes/langchain_community_vectorstores_hanavector.HanaDB.html) from `@langchain/community/vectorstores/hanavector`
* [HanaDBArgs](https://api.js.langchain.com/interfaces/langchain_community_vectorstores_hanavector.HanaDBArgs.html) from `@langchain/community/vectorstores/hanavector`
Create a new index from a loader and perform similarity searches[](#create-a-new-index-from-a-loader-and-perform-similarity-searches "Direct link to Create a new index from a loader and perform similarity searches")
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
```typescript
import hanaClient from "hdb";
import {
  HanaDB,
  HanaDBArgs,
} from "@langchain/community/vectorstores/hanavector";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { CharacterTextSplitter } from "langchain/text_splitter";

const connectionParams = {
  host: process.env.HANA_HOST,
  port: process.env.HANA_PORT,
  user: process.env.HANA_UID,
  password: process.env.HANA_PWD,
  // useCesu8 : false
};
const client = hanaClient.createClient(connectionParams);

// Connect to the HANA database
await new Promise<void>((resolve, reject) => {
  client.connect((err: Error) => {
    if (err) {
      reject(err);
    } else {
      console.log("Connected to SAP HANA successfully.");
      resolve();
    }
  });
});

const embeddings = new OpenAIEmbeddings();
const args: HanaDBArgs = {
  connection: client,
  tableName: "test_fromDocs",
};

// Load documents from file
const loader = new TextLoader("./state_of_the_union.txt");
const rawDocuments = await loader.load();
const splitter = new CharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 0,
});
const documents = await splitter.splitDocuments(rawDocuments);

// Create a LangChain VectorStore interface for the HANA database and specify
// the table (collection) to use in args.
const vectorStore = new HanaDB(embeddings, args);
await vectorStore.initialize();
// Delete already existing documents from the table
await vectorStore.delete({ filter: {} });
// Add the loaded document chunks
await vectorStore.addDocuments(documents);

// Similarity search (default: "Cosine Similarity", options: ["euclidean", "cosine"])
const query = "What did the president say about Ketanji Brown Jackson";
const docs = await vectorStore.similaritySearch(query, 2);
docs.forEach((doc) => {
  console.log("-".repeat(80));
  console.log(doc.pageContent);
});
/*
  --------------------------------------------------------------------------------
  One of the most serious constitutional responsibilities a President has is
  nominating someone to serve on the United States Supreme Court.

  And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge
  Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue
  Justice Breyer’s legacy of excellence.
  --------------------------------------------------------------------------------
  As I said last year, especially to our younger transgender Americans, I will
  always have your back as your President, so you can be yourself and reach your
  God-given potential.

  While it often appears that we never agree, that isn’t true. I signed 80
  bipartisan bills into law last year. From preventing government shutdowns to
  protecting Asian-Americans from still-too-common hate crimes to reforming
  military justice
*/

// Similarity search using the euclidean distance method
const argsL2d: HanaDBArgs = {
  connection: client,
  tableName: "test_fromDocs",
  distanceStrategy: "euclidean",
};
const vectorStoreL2d = new HanaDB(embeddings, argsL2d);
const docsL2d = await vectorStoreL2d.similaritySearch(query, 2);
docsL2d.forEach((docL2d) => {
  console.log("-".repeat(80));
  console.log(docL2d.pageContent);
});
// Output should be the same as the cosine similarity search method.

// Maximal Marginal Relevance search (MMR)
const docsMMR = await vectorStore.maxMarginalRelevanceSearch(query, {
  k: 2,
  fetchK: 20,
});
docsMMR.forEach((docMMR) => {
  console.log("-".repeat(80));
  console.log(docMMR.pageContent);
});
/*
  --------------------------------------------------------------------------------
  One of the most serious constitutional responsibilities a President has is
  nominating someone to serve on the United States Supreme Court.

  And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge
  Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue
  Justice Breyer’s legacy of excellence.
  --------------------------------------------------------------------------------
  Groups of citizens blocking tanks with their bodies. Everyone from students to
  retirees teachers turned soldiers defending their homeland.

  In this struggle as President Zelenskyy said in his speech to the European
  Parliament “Light will win over darkness.” The Ukrainian Ambassador to the
  United States is here tonight. Let each of us here tonight in this Chamber
  send an unmistakable signal to Ukraine and to the world.
*/
client.disconnect();
```
#### API Reference:
* [HanaDB](https://api.js.langchain.com/classes/langchain_community_vectorstores_hanavector.HanaDB.html) from `@langchain/community/vectorstores/hanavector`
* [HanaDBArgs](https://api.js.langchain.com/interfaces/langchain_community_vectorstores_hanavector.HanaDBArgs.html) from `@langchain/community/vectorstores/hanavector`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [CharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.CharacterTextSplitter.html) from `langchain/text_splitter`
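As an aside, the example above states that the euclidean search should return the same results as the cosine search. For unit-length embeddings (OpenAI embeddings are normalized) that is expected, since squared euclidean distance is a monotone function of cosine similarity. The helper functions below are illustrative, not part of the integration:

```typescript
// For unit vectors a and b: ||a - b||^2 = 2 - 2 * cos(a, b), so ranking by
// ascending euclidean distance matches ranking by descending cosine similarity.
function cosine(a: number[], b: number[]): number {
  const dotProd = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const normA = Math.sqrt(a.reduce((sum, x) => sum + x * x, 0));
  const normB = Math.sqrt(b.reduce((sum, x) => sum + x * x, 0));
  return dotProd / (normA * normB);
}

function euclideanSq(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + (x - b[i]) ** 2, 0);
}

// Two unit vectors 45 degrees apart
const a = [1, 0];
const b = [Math.SQRT1_2, Math.SQRT1_2];
const lhs = euclideanSq(a, b);
const rhs = 2 - 2 * cosine(a, b);
console.log(Math.abs(lhs - rhs) < 1e-12); // true
```

Because the two strategies only differ by this monotone transform on normalized vectors, the top-k result sets coincide even though the raw scores differ.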
Basic Vectorstore Operations[](#basic-vectorstore-operations "Direct link to Basic Vectorstore Operations")
------------------------------------------------------------------------------------------------------------
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import hanaClient from "hdb";
// or import another node.js driver
// import hanaClient from "@sap/hana-client";
import { Document } from "@langchain/core/documents";
import {
  HanaDB,
  HanaDBArgs,
} from "@langchain/community/vectorstores/hanavector";

const connectionParams = {
  host: process.env.HANA_HOST,
  port: process.env.HANA_PORT,
  user: process.env.HANA_UID,
  password: process.env.HANA_PWD,
  // useCesu8 : false
};
const client = hanaClient.createClient(connectionParams);

// Connect to SAP HANA
await new Promise<void>((resolve, reject) => {
  client.connect((err: Error) => {
    if (err) {
      reject(err);
    } else {
      console.log("Connected to SAP HANA successfully.");
      resolve();
    }
  });
});

const embeddings = new OpenAIEmbeddings();

// Define instance args
const args: HanaDBArgs = {
  connection: client,
  tableName: "testBasics",
};

// Add documents with metadata.
const docs: Document[] = [
  {
    pageContent: "foo",
    metadata: { start: 100, end: 150, docName: "foo.txt", quality: "bad" },
  },
  {
    pageContent: "bar",
    metadata: { start: 200, end: 250, docName: "bar.txt", quality: "good" },
  },
];

// Create a LangChain VectorStore interface for the HANA database and specify
// the table (collection) to use in args.
const vectorStore = new HanaDB(embeddings, args);
// Need to initialize once an instance is created.
await vectorStore.initialize();
// Delete already existing documents from the table
await vectorStore.delete({ filter: {} });
await vectorStore.addDocuments(docs);

// Query documents with specific metadata.
const filterMeta = { quality: "bad" };
const query = "foobar";
// With filtering on {"quality": "bad"}, only one document should be returned
const results = await vectorStore.similaritySearch(query, 1, filterMeta);
console.log(results);
/*
  [
    {
      pageContent: "foo",
      metadata: { start: 100, end: 150, docName: "foo.txt", quality: "bad" }
    }
  ]
*/

// Delete documents with specific metadata.
await vectorStore.delete({ filter: filterMeta });

// Now the similarity search with the same filter will return no results
const resultsAfterFilter = await vectorStore.similaritySearch(
  query,
  1,
  filterMeta
);
console.log(resultsAfterFilter);
/*
  []
*/

client.disconnect();
```
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [HanaDB](https://api.js.langchain.com/classes/langchain_community_vectorstores_hanavector.HanaDB.html) from `@langchain/community/vectorstores/hanavector`
* [HanaDBArgs](https://api.js.langchain.com/interfaces/langchain_community_vectorstores_hanavector.HanaDBArgs.html) from `@langchain/community/vectorstores/hanavector`
Using a VectorStore as a retriever in chains for retrieval augmented generation (RAG)[](#using-a-vectorstore-as-a-retriever-in-chains-for-retrieval-augmented-generation-rag "Direct link to Using a VectorStore as a retriever in chains for retrieval augmented generation (RAG)")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";
import hanaClient from "hdb";
import {
  HanaDB,
  HanaDBArgs,
} from "@langchain/community/vectorstores/hanavector";

// Connection parameters
const connectionParams = {
  host: process.env.HANA_HOST,
  port: process.env.HANA_PORT,
  user: process.env.HANA_UID,
  password: process.env.HANA_PWD,
  // useCesu8 : false
};
const client = hanaClient.createClient(connectionParams);

// Connect to SAP HANA
await new Promise<void>((resolve, reject) => {
  client.connect((err: Error) => {
    if (err) {
      reject(err);
    } else {
      console.log("Connected to SAP HANA successfully.");
      resolve();
    }
  });
});

const embeddings = new OpenAIEmbeddings();
const args: HanaDBArgs = {
  connection: client,
  tableName: "test_fromDocs",
};
const vectorStore = new HanaDB(embeddings, args);
await vectorStore.initialize();

// Use the store as part of a chain, under the premise that "test_fromDocs"
// exists and contains the chunked docs.
const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo-1106" });
const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are an expert in state of the union topics. You are provided multiple context items that are related to the prompt you have to answer. Use the following pieces of context to answer the question at the end.\n\n{context}",
  ],
  ["human", "{input}"],
]);

const combineDocsChain = await createStuffDocumentsChain({
  llm: model,
  prompt: questionAnsweringPrompt,
});
const chain = await createRetrievalChain({
  retriever: vectorStore.asRetriever(),
  combineDocsChain,
});

// Ask the first question (and verify how many text chunks have been used).
const response = await chain.invoke({
  input: "What about Mexico and Guatemala?",
});
console.log("Chain response:");
console.log(response.answer);
console.log(
  `Number of used source document chunks: ${response.context.length}`
);
/*
  The United States has set up joint patrols with Mexico and Guatemala to catch more human traffickers.
  Number of used source document chunks: 4
*/

// Ask another question on the same conversational chain. The answer should
// relate to the previous answer given.
const responseOther = await chain.invoke({
  input: "What about other countries?",
});
console.log("Chain response:");
console.log(responseOther.answer);
/*
  ...including members of NATO, the European Union, and other allies such as Canada...
*/

client.disconnect();
```
#### API Reference:
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [createStuffDocumentsChain](https://api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents`
* [createRetrievalChain](https://api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval`
* [HanaDB](https://api.js.langchain.com/classes/langchain_community_vectorstores_hanavector.HanaDB.html) from `@langchain/community/vectorstores/hanavector`
* [HanaDBArgs](https://api.js.langchain.com/interfaces/langchain_community_vectorstores_hanavector.HanaDBArgs.html) from `@langchain/community/vectorstores/hanavector`
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/integrations/vectorstores/momento_vector_index/ |
Momento Vector Index (MVI)
==========================
[MVI](https://gomomento.com): the most productive, easiest to use, serverless vector index for your data. To get started with MVI, simply sign up for an account. There's no need to handle infrastructure, manage servers, or be concerned about scaling. MVI is a service that scales automatically to meet your needs. Whether in Node.js, browser, or edge, Momento has you covered.
To sign up and access MVI, visit the [Momento Console](https://console.gomomento.com).
Setup[](#setup "Direct link to Setup")
---------------------------------------
1. Sign up for an API key in the [Momento Console](https://console.gomomento.com/).
2. Install the SDK for your environment.
2.1. For **Node.js**:
* npm
* Yarn
* pnpm
npm install @gomomento/sdk
yarn add @gomomento/sdk
pnpm add @gomomento/sdk
2.2. For **browser or edge environments**:
* npm
* Yarn
* pnpm
npm install @gomomento/sdk-web
yarn add @gomomento/sdk-web
pnpm add @gomomento/sdk-web
3. Set up environment variables for Momento before running the code
3.1 OpenAI
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY_HERE
3.2 Momento
export MOMENTO_API_KEY=YOUR_MOMENTO_API_KEY_HERE # https://console.gomomento.com
Usage[](#usage "Direct link to Usage")
---------------------------------------
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
### Index documents using `fromTexts` and search[](#index-documents-using-fromtexts-and-search "Direct link to index-documents-using-fromtexts-and-search")
This example demonstrates using the `fromTexts` method to instantiate the vector store and index documents. If the index does not exist, then it will be created. If the index already exists, then the documents will be added to the existing index.
The `ids` are optional; if you omit them, then Momento will generate UUIDs for you.
```typescript
import { MomentoVectorIndex } from "@langchain/community/vectorstores/momento_vector_index";
// For browser/edge, adjust this to import from "@gomomento/sdk-web";
import {
  PreviewVectorIndexClient,
  VectorIndexConfigurations,
  CredentialProvider,
} from "@gomomento/sdk";
import { OpenAIEmbeddings } from "@langchain/openai";
import { sleep } from "langchain/util/time";

const vectorStore = await MomentoVectorIndex.fromTexts(
  ["hello world", "goodbye world", "salutations world", "farewell world"],
  {},
  new OpenAIEmbeddings(),
  {
    client: new PreviewVectorIndexClient({
      configuration: VectorIndexConfigurations.Laptop.latest(),
      credentialProvider: CredentialProvider.fromEnvironmentVariable({
        environmentVariableName: "MOMENTO_API_KEY",
      }),
    }),
    indexName: "langchain-example-index",
  },
  { ids: ["1", "2", "3", "4"] }
);

// Because indexing is async, wait for it to finish to search directly after
await sleep();

const response = await vectorStore.similaritySearch("hello", 2);
console.log(response);
/*
  [
    Document { pageContent: 'hello world', metadata: {} },
    Document { pageContent: 'salutations world', metadata: {} }
  ]
*/
```
#### API Reference:
* [MomentoVectorIndex](https://api.js.langchain.com/classes/langchain_community_vectorstores_momento_vector_index.MomentoVectorIndex.html) from `@langchain/community/vectorstores/momento_vector_index`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [sleep](https://api.js.langchain.com/functions/langchain_util_time.sleep.html) from `langchain/util/time`
### Index documents using `fromDocuments` and search[](#index-documents-using-fromdocuments-and-search "Direct link to index-documents-using-fromdocuments-and-search")
Similar to the above, this example demonstrates using the `fromDocuments` method to instantiate the vector store and index documents. If the index does not exist, then it will be created. If the index already exists, then the documents will be added to the existing index.
Using `fromDocuments` allows you to seamlessly chain the various document loaders with indexing.
```typescript
import { MomentoVectorIndex } from "@langchain/community/vectorstores/momento_vector_index";
// For browser/edge, adjust this to import from "@gomomento/sdk-web";
import {
  PreviewVectorIndexClient,
  VectorIndexConfigurations,
  CredentialProvider,
} from "@gomomento/sdk";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { sleep } from "langchain/util/time";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

const vectorStore = await MomentoVectorIndex.fromDocuments(
  docs,
  new OpenAIEmbeddings(),
  {
    client: new PreviewVectorIndexClient({
      configuration: VectorIndexConfigurations.Laptop.latest(),
      credentialProvider: CredentialProvider.fromEnvironmentVariable({
        environmentVariableName: "MOMENTO_API_KEY",
      }),
    }),
    indexName: "langchain-example-index",
  }
);

// Because indexing is async, wait for it to finish to search directly after
await sleep();

// Search for the most similar document
const response = await vectorStore.similaritySearch("hello", 1);
console.log(response);
/*
  [
    Document {
      pageContent: 'Foo\nBar\nBaz\n\n',
      metadata: { source: 'src/document_loaders/example_data/example.txt' }
    }
  ]
*/
```
#### API Reference:
* [MomentoVectorIndex](https://api.js.langchain.com/classes/langchain_community_vectorstores_momento_vector_index.MomentoVectorIndex.html) from `@langchain/community/vectorstores/momento_vector_index`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [sleep](https://api.js.langchain.com/functions/langchain_util_time.sleep.html) from `langchain/util/time`
### Search from an existing collection[](#search-from-an-existing-collection "Direct link to Search from an existing collection")
```typescript
import { MomentoVectorIndex } from "@langchain/community/vectorstores/momento_vector_index";
// For browser/edge, adjust this to import from "@gomomento/sdk-web";
import {
  PreviewVectorIndexClient,
  VectorIndexConfigurations,
  CredentialProvider,
} from "@gomomento/sdk";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = new MomentoVectorIndex(new OpenAIEmbeddings(), {
  client: new PreviewVectorIndexClient({
    configuration: VectorIndexConfigurations.Laptop.latest(),
    credentialProvider: CredentialProvider.fromEnvironmentVariable({
      environmentVariableName: "MOMENTO_API_KEY",
    }),
  }),
  indexName: "langchain-example-index",
});

const response = await vectorStore.similaritySearch("hello", 1);
console.log(response);
/*
  [
    Document {
      pageContent: 'Foo\nBar\nBaz\n\n',
      metadata: { source: 'src/document_loaders/example_data/example.txt' }
    }
  ]
*/
```
#### API Reference:
* [MomentoVectorIndex](https://api.js.langchain.com/classes/langchain_community_vectorstores_momento_vector_index.MomentoVectorIndex.html) from `@langchain/community/vectorstores/momento_vector_index`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
https://js.langchain.com/v0.1/docs/integrations/vectorstores/mongodb_atlas/ |
MongoDB Atlas
=============
Compatibility
Only available on Node.js.
You can still create API routes that use MongoDB with Next.js by setting the `runtime` variable to `nodejs` like so:
export const runtime = "nodejs";
You can read more about Edge runtimes in the Next.js documentation [here](https://nextjs.org/docs/app/building-your-application/rendering/edge-and-nodejs-runtimes).
LangChain.js supports MongoDB Atlas as a vector store, with both standard similarity search and maximal marginal relevance (MMR) search, which first fetches the documents most similar to the input and then reranks them to optimize for diversity.
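To make the MMR idea concrete, here is a minimal sketch of the selection loop over toy embedding vectors. It illustrates the technique only; `mmrSelect`, `dot`, and the sample vectors are invented for this example and are not LangChain's internal implementation:

```typescript
// lambda trades off relevance to the query against redundancy with the
// documents already selected (1.0 = pure similarity, 0.0 = pure diversity).
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function mmrSelect(
  query: number[],
  docs: number[][],
  k: number,
  lambda = 0.5
): number[] {
  const selected: number[] = []; // indices into docs
  const candidates = docs.map((_, i) => i);
  while (selected.length < k && candidates.length > 0) {
    let bestIdx = candidates[0];
    let bestScore = -Infinity;
    for (const i of candidates) {
      const relevance = dot(query, docs[i]);
      const redundancy =
        selected.length > 0
          ? Math.max(...selected.map((j) => dot(docs[i], docs[j])))
          : 0;
      const score = lambda * relevance - (1 - lambda) * redundancy;
      if (score > bestScore) {
        bestScore = score;
        bestIdx = i;
      }
    }
    selected.push(bestIdx);
    candidates.splice(candidates.indexOf(bestIdx), 1);
  }
  return selected;
}

// Docs 0 and 1 are near-duplicates close to the query; doc 2 is dissimilar.
// Plain similarity search would return [1, 0]; MMR keeps the most relevant
// document and fills the second slot with the diverse one.
const picked = mmrSelect([0.8, 0.6], [[1, 0], [0.9, 0.436], [0, 1]], 2);
console.log(picked); // [ 1, 2 ]
```

The vector store does the same trade-off at scale: it over-fetches candidates by raw similarity, then applies the reranking loop to the candidate pool.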
Setup[](#setup "Direct link to Setup")
---------------------------------------
### Installation[](#installation "Direct link to Installation")
First, add the Node MongoDB SDK to your project:
* npm
* Yarn
* pnpm
npm install -S mongodb
yarn add mongodb
pnpm add mongodb
### Initial Cluster Configuration[](#initial-cluster-configuration "Direct link to Initial Cluster Configuration")
Next, you'll need to create a MongoDB Atlas cluster. Navigate to the [MongoDB Atlas website](https://www.mongodb.com/atlas/database) and create an account if you don't already have one.
Create and name a cluster when prompted, then find it under `Database`. Select `Collections` and create either a blank collection or one from the provided sample data.
**Note** The cluster you create must run MongoDB 7.0 or higher. If you are using a pre-7.0 version of MongoDB, you must use langchainjs version 0.0.163 or earlier.
### Creating an Index[](#creating-an-index "Direct link to Creating an Index")
After configuring your cluster, you'll need to create an index on the collection field you want to search over.
Switch to the `Atlas Search` tab and click `Create Search Index`. From there, make sure you select `Atlas Vector Search - JSON Editor`, then select the appropriate database and collection and paste the following into the textbox:
{ "fields": [ { "numDimensions": 1024, "path": "embedding", "similarity": "euclidean", "type": "vector" } ]}
Note that the `numDimensions` property should match the dimensionality of the embeddings you are using. For example, Cohere embeddings have 1024 dimensions, and by default OpenAI embeddings have 1536.
**Note:** By default the vector store expects an index name of `default`, an indexed collection field name of `embedding`, and a raw text field name of `text`. You should initialize the vector store with field names matching your index and your collection's schema, as shown below.
Finally, proceed to build the index.
Usage[](#usage "Direct link to Usage")
---------------------------------------
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
### Ingestion[](#ingestion "Direct link to Ingestion")
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { CohereEmbeddings } from "@langchain/cohere";
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");
const namespace = "langchain.test";
const [dbName, collectionName] = namespace.split(".");
const collection = client.db(dbName).collection(collectionName);

const vectorstore = await MongoDBAtlasVectorSearch.fromTexts(
  ["Hello world", "Bye bye", "What's this?"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new CohereEmbeddings(),
  {
    collection,
    indexName: "default", // The name of the Atlas search index. Defaults to "default"
    textKey: "text", // The name of the collection field containing the raw content. Defaults to "text"
    embeddingKey: "embedding", // The name of the collection field containing the embedded text. Defaults to "embedding"
  }
);

const assignedIds = await vectorstore.addDocuments([
  { pageContent: "upsertable", metadata: {} },
]);

const upsertedDocs = [{ pageContent: "overwritten", metadata: {} }];

await vectorstore.addDocuments(upsertedDocs, { ids: assignedIds });

await client.close();
#### API Reference:
* [MongoDBAtlasVectorSearch](https://api.js.langchain.com/classes/langchain_mongodb.MongoDBAtlasVectorSearch.html) from `@langchain/mongodb`
* [CohereEmbeddings](https://api.js.langchain.com/classes/langchain_cohere.CohereEmbeddings.html) from `@langchain/cohere`
### Search[](#search "Direct link to Search")
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { CohereEmbeddings } from "@langchain/cohere";
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");
const namespace = "langchain.test";
const [dbName, collectionName] = namespace.split(".");
const collection = client.db(dbName).collection(collectionName);

const vectorStore = new MongoDBAtlasVectorSearch(new CohereEmbeddings(), {
  collection,
  indexName: "default", // The name of the Atlas search index. Defaults to "default"
  textKey: "text", // The name of the collection field containing the raw content. Defaults to "text"
  embeddingKey: "embedding", // The name of the collection field containing the embedded text. Defaults to "embedding"
});

const resultOne = await vectorStore.similaritySearch("Hello world", 1);
console.log(resultOne);

await client.close();
#### API Reference:
* [MongoDBAtlasVectorSearch](https://api.js.langchain.com/classes/langchain_mongodb.MongoDBAtlasVectorSearch.html) from `@langchain/mongodb`
* [CohereEmbeddings](https://api.js.langchain.com/classes/langchain_cohere.CohereEmbeddings.html) from `@langchain/cohere`
### Maximal marginal relevance[](#maximal-marginal-relevance "Direct link to Maximal marginal relevance")
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { CohereEmbeddings } from "@langchain/cohere";
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");
const namespace = "langchain.test";
const [dbName, collectionName] = namespace.split(".");
const collection = client.db(dbName).collection(collectionName);

const vectorStore = new MongoDBAtlasVectorSearch(new CohereEmbeddings(), {
  collection,
  indexName: "default", // The name of the Atlas search index. Defaults to "default"
  textKey: "text", // The name of the collection field containing the raw content. Defaults to "text"
  embeddingKey: "embedding", // The name of the collection field containing the embedded text. Defaults to "embedding"
});

const resultOne = await vectorStore.maxMarginalRelevanceSearch("Hello world", {
  k: 4,
  fetchK: 20, // The number of documents to return on initial fetch
});
console.log(resultOne);

// Using MMR in a vector store retriever
const retriever = await vectorStore.asRetriever({
  searchType: "mmr",
  searchKwargs: {
    fetchK: 20,
    lambda: 0.1,
  },
});
const retrieverOutput = await retriever.invoke("Hello world");
console.log(retrieverOutput);

await client.close();
#### API Reference:
* [MongoDBAtlasVectorSearch](https://api.js.langchain.com/classes/langchain_mongodb.MongoDBAtlasVectorSearch.html) from `@langchain/mongodb`
* [CohereEmbeddings](https://api.js.langchain.com/classes/langchain_cohere.CohereEmbeddings.html) from `@langchain/cohere`
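For intuition, the MMR reranking that `maxMarginalRelevanceSearch` performs can be sketched in plain TypeScript. This is a toy illustration of the idea (relevance traded off against redundancy via a `lambda` weight), not the library's actual implementation:

```typescript
// Toy maximal marginal relevance: trade off query relevance against
// redundancy among already-selected vectors.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function cosine(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

// Select k vectors from `candidates`, balancing similarity to `query`
// (weight lambda) against similarity to vectors already chosen.
function mmr(
  query: number[],
  candidates: number[][],
  k: number,
  lambda = 0.5
): number[] {
  const selected: number[] = [];
  const remaining = candidates.map((_, i) => i);
  while (selected.length < k && remaining.length > 0) {
    let bestIdx = -1;
    let bestScore = -Infinity;
    for (const i of remaining) {
      const relevance = cosine(query, candidates[i]);
      const redundancy = selected.length
        ? Math.max(...selected.map((j) => cosine(candidates[i], candidates[j])))
        : 0;
      const score = lambda * relevance - (1 - lambda) * redundancy;
      if (score > bestScore) {
        bestScore = score;
        bestIdx = i;
      }
    }
    selected.push(bestIdx);
    remaining.splice(remaining.indexOf(bestIdx), 1);
  }
  return selected;
}

// The second candidate is nearly a duplicate of the first, so a low
// lambda skips it in favor of the orthogonal third vector.
console.log(mmr([1, 0], [[1, 0], [0.99, 0.1], [0, 1]], 2, 0.3)); // → [0, 2]
```

A small `lambda` favors diversity, while `lambda` close to 1 reduces MMR to plain similarity ranking — which is why the retriever example above passes `lambda: 0.1` to strongly diversify.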
### Metadata filtering[](#metadata-filtering "Direct link to Metadata filtering")
MongoDB Atlas supports pre-filtering of results on other fields. It requires you to declare which metadata fields you plan to filter on by updating the index. Here's an example:
{ "fields": [ { "numDimensions": 1024, "path": "embedding", "similarity": "euclidean", "type": "vector" }, { "path": "docstore_document_id", "type": "filter" } ]}
Above, the first item in `fields` defines the vector index, and the second item declares a metadata property to filter on. The property's name goes in `path`, so the above index allows filtering on a metadata field named `docstore_document_id`.
Then, in your code you can use [MQL Query Operators](https://www.mongodb.com/docs/manual/reference/operator/query/) for filtering. Here's an example:
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { CohereEmbeddings } from "@langchain/cohere";
import { MongoClient } from "mongodb";
import { sleep } from "langchain/util/time";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");
const namespace = "langchain.test";
const [dbName, collectionName] = namespace.split(".");
const collection = client.db(dbName).collection(collectionName);

const vectorStore = new MongoDBAtlasVectorSearch(new CohereEmbeddings(), {
  collection,
  indexName: "default", // The name of the Atlas search index. Defaults to "default"
  textKey: "text", // The name of the collection field containing the raw content. Defaults to "text"
  embeddingKey: "embedding", // The name of the collection field containing the embedded text. Defaults to "embedding"
});

await vectorStore.addDocuments([
  {
    pageContent: "Hey hey hey",
    metadata: { docstore_document_id: "somevalue" },
  },
]);

const retriever = vectorStore.asRetriever({
  filter: {
    preFilter: {
      docstore_document_id: {
        $eq: "somevalue",
      },
    },
  },
});

// Mongo has a slight processing delay between ingest and availability
await sleep(2000);

const results = await retriever.invoke("goodbye");
console.log(results);

await client.close();
#### API Reference:
* [MongoDBAtlasVectorSearch](https://api.js.langchain.com/classes/langchain_mongodb.MongoDBAtlasVectorSearch.html) from `@langchain/mongodb`
* [CohereEmbeddings](https://api.js.langchain.com/classes/langchain_cohere.CohereEmbeddings.html) from `@langchain/cohere`
* [sleep](https://api.js.langchain.com/functions/langchain_util_time.sleep.html) from `langchain/util/time`
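As background, the MQL comparison operators used in `preFilter` describe per-field predicates over documents. A toy matcher for a tiny slice of that operator language (an illustration, not MongoDB's implementation) behaves like:

```typescript
// Toy evaluator for a small subset of MQL comparison operators, to show
// how a preFilter such as { docstore_document_id: { $eq: "somevalue" } }
// selects documents.
type Filter = Record<string, { $eq?: unknown; $in?: unknown[] }>;

function matches(doc: Record<string, unknown>, filter: Filter): boolean {
  return Object.entries(filter).every(([field, ops]) => {
    const value = doc[field];
    if ("$eq" in ops && value !== ops.$eq) return false;
    if ("$in" in ops && !ops.$in!.includes(value)) return false;
    return true;
  });
}

const toyDocs = [
  { text: "Hey hey hey", docstore_document_id: "somevalue" },
  { text: "Bye", docstore_document_id: "othervalue" },
];
const filtered = toyDocs.filter((d) =>
  matches(d, { docstore_document_id: { $eq: "somevalue" } })
);
console.log(filtered.length); // → 1
```

In the real integration, Atlas applies such predicates server-side before the vector search runs, so only matching documents are scored.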
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/integrations/vectorstores/neo4jvector/
Neo4j Vector Index
==================
Neo4j is an open-source graph database with integrated support for vector similarity search. It supports:
* approximate nearest neighbor search
* Euclidean similarity and cosine similarity
* Hybrid search combining vector and keyword searches
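The two similarity measures above behave differently, which matters when choosing index settings. A plain-TypeScript refresher (unrelated to the Neo4j driver):

```typescript
// Euclidean distance is sensitive to vector magnitude; cosine similarity
// only compares direction. Which one fits depends on whether your
// embeddings are normalized.
function euclidean(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((s, x, i) => s + (x - b[i]) ** 2, 0));
}

function cosineSim(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// [2, 0] points the same way as [1, 0], so cosine similarity is 1
// even though the Euclidean distance between them is 1.
console.log(cosineSim([2, 0], [1, 0])); // → 1
console.log(euclidean([2, 0], [1, 0])); // → 1
```

For embeddings that are already unit-normalized, the two measures produce the same ranking; otherwise they can disagree.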
Setup[](#setup "Direct link to Setup")
---------------------------------------
To work with Neo4j Vector Index, you need to install the `neo4j-driver` package:
* npm
* Yarn
* pnpm
npm install neo4j-driver
yarn add neo4j-driver
pnpm add neo4j-driver
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
### Setup a `Neo4j` self hosted instance with `docker-compose`[](#setup-a-neo4j-self-hosted-instance-with-docker-compose "Direct link to setup-a-neo4j-self-hosted-instance-with-docker-compose")
`Neo4j` provides a prebuilt Docker image that can be used to quickly set up a self-hosted Neo4j database instance. Create a file named `docker-compose.yml` with the following contents:
services:
  database:
    image: neo4j
    ports:
      - 7687:7687
      - 7474:7474
    environment:
      - NEO4J_AUTH=neo4j/pleaseletmein
And then in the same directory, run `docker compose up` to start the container.
You can find more information on how to setup `Neo4j` on their [website](https://neo4j.com/docs/operations-manual/current/installation/).
Usage[](#usage "Direct link to Usage")
---------------------------------------
One complete example of using `Neo4jVectorStore` is the following:
import { OpenAIEmbeddings } from "@langchain/openai";
import { Neo4jVectorStore } from "@langchain/community/vectorstores/neo4j_vector";

// Configuration object for Neo4j connection and other related settings
const config = {
  url: "bolt://localhost:7687", // URL for the Neo4j instance
  username: "neo4j", // Username for Neo4j authentication
  password: "pleaseletmein", // Password for Neo4j authentication
  indexName: "vector", // Name of the vector index
  keywordIndexName: "keyword", // Name of the keyword index if using hybrid search
  searchType: "vector" as const, // Type of search (e.g., vector, hybrid)
  nodeLabel: "Chunk", // Label for the nodes in the graph
  textNodeProperty: "text", // Property of the node containing text
  embeddingNodeProperty: "embedding", // Property of the node containing embedding
};

const documents = [
  { pageContent: "what's this", metadata: { a: 2 } },
  { pageContent: "Cat drinks milk", metadata: { a: 1 } },
];

const neo4jVectorIndex = await Neo4jVectorStore.fromDocuments(
  documents,
  new OpenAIEmbeddings(),
  config
);

const results = await neo4jVectorIndex.similaritySearch("water", 1);
console.log(results);
/*
  [ Document { pageContent: 'Cat drinks milk', metadata: { a: 1 } } ]
*/

await neo4jVectorIndex.close();
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Neo4jVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_neo4j_vector.Neo4jVectorStore.html) from `@langchain/community/vectorstores/neo4j_vector`
### Use retrievalQuery parameter to customize responses[](#use-retrievalquery-parameter-to-customize-responses "Direct link to Use retrievalQuery parameter to customize responses")
import { OpenAIEmbeddings } from "@langchain/openai";
import { Neo4jVectorStore } from "@langchain/community/vectorstores/neo4j_vector";

/*
 * The retrievalQuery is a customizable Cypher query fragment used in the Neo4jVectorStore class to define how
 * search results should be retrieved and presented from the Neo4j database. It allows developers to specify
 * the format and structure of the data returned after a similarity search.
 *
 * Mandatory columns for `retrievalQuery`:
 *
 * 1. text:
 *    - Description: Represents the textual content of the node.
 *    - Type: String
 *
 * 2. score:
 *    - Description: Represents the similarity score of the node in relation to the search query. A
 *      higher score indicates a closer match.
 *    - Type: Float (ranging between 0 and 1, where 1 is a perfect match)
 *
 * 3. metadata:
 *    - Description: Contains additional properties and information about the node. This can include
 *      any other attributes of the node that might be relevant to the application.
 *    - Type: Object (key-value pairs)
 *    - Example: { "id": "12345", "category": "Books", "author": "John Doe" }
 *
 * Note: While you can customize the `retrievalQuery` to fetch additional columns or perform
 * transformations, never omit the mandatory columns. The names of these columns (`text`, `score`,
 * and `metadata`) should remain consistent. Renaming them might lead to errors or unexpected behavior.
 */

// Configuration object for Neo4j connection and other related settings
const config = {
  url: "bolt://localhost:7687", // URL for the Neo4j instance
  username: "neo4j", // Username for Neo4j authentication
  password: "pleaseletmein", // Password for Neo4j authentication
  retrievalQuery: `
    RETURN node.text AS text, score, {a: node.a * 2} AS metadata
  `,
};

const documents = [
  { pageContent: "what's this", metadata: { a: 2 } },
  { pageContent: "Cat drinks milk", metadata: { a: 1 } },
];

const neo4jVectorIndex = await Neo4jVectorStore.fromDocuments(
  documents,
  new OpenAIEmbeddings(),
  config
);

const results = await neo4jVectorIndex.similaritySearch("water", 1);
console.log(results);
/*
  [ Document { pageContent: 'Cat drinks milk', metadata: { a: 2 } } ]
*/

await neo4jVectorIndex.close();
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Neo4jVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_neo4j_vector.Neo4jVectorStore.html) from `@langchain/community/vectorstores/neo4j_vector`
### Instantiate Neo4jVectorStore from existing graph[](#instantiate-neo4jvectorstore-from-existing-graph "Direct link to Instantiate Neo4jVectorStore from existing graph")
import { OpenAIEmbeddings } from "@langchain/openai";
import { Neo4jVectorStore } from "@langchain/community/vectorstores/neo4j_vector";

/**
 * `fromExistingGraph` Method:
 *
 * Description:
 * This method initializes a `Neo4jVectorStore` instance using an existing graph in the Neo4j database.
 * It's designed to work with nodes that already have textual properties but might not have embeddings.
 * The method will compute and store embeddings for nodes that lack them.
 *
 * Note:
 * This method is particularly useful when you have a pre-existing graph with textual data and you want
 * to enhance it with vector embeddings for similarity searches without altering the original data structure.
 */

// Configuration object for Neo4j connection and other related settings
const config = {
  url: "bolt://localhost:7687", // URL for the Neo4j instance
  username: "neo4j", // Username for Neo4j authentication
  password: "pleaseletmein", // Password for Neo4j authentication
  indexName: "wikipedia",
  nodeLabel: "Wikipedia",
  textNodeProperties: ["title", "description"],
  embeddingNodeProperty: "embedding",
  searchType: "hybrid" as const,
};

// You should have a populated Neo4j database to use this method
const neo4jVectorIndex = await Neo4jVectorStore.fromExistingGraph(
  new OpenAIEmbeddings(),
  config
);

await neo4jVectorIndex.close();
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Neo4jVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_neo4j_vector.Neo4jVectorStore.html) from `@langchain/community/vectorstores/neo4j_vector`
Disclaimer ⚠️
=============
_Security note_: Make sure that the database connection uses credentials that are narrowly scoped to include only the necessary permissions. Failure to do so may result in data corruption or loss, since the calling code may attempt commands that delete or mutate data if inappropriately prompted, or read sensitive data if such data is present in the database. The best way to guard against these outcomes is to limit, as appropriate, the permissions granted to the credentials used with this tool. For example, creating read-only database users is a good way to ensure that the calling code cannot mutate or delete data. See the [security page](/v0.1/docs/security/) for more information.
https://js.langchain.com/v0.1/docs/integrations/vectorstores/opensearch/
OpenSearch
==========
Compatibility
Only available on Node.js.
[OpenSearch](https://opensearch.org/) is a fork of [Elasticsearch](https://www.elastic.co/elasticsearch/) that is fully compatible with the Elasticsearch API. Read more about their support for Approximate Nearest Neighbors [here](https://opensearch.org/docs/latest/search-plugins/knn/approximate-knn/).
LangChain.js uses [@opensearch-project/opensearch](https://opensearch.org/docs/latest/clients/javascript/index/) as the client for the OpenSearch vector store.
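For intuition, approximate k-NN trades a little accuracy for speed. The exact search it approximates is plain brute force: score every vector against the query and keep the k best (toy TypeScript sketch, not OpenSearch's implementation):

```typescript
// Exact (brute-force) k-nearest-neighbor search. ANN indexes such as the
// graph-based structures OpenSearch uses approximate this ranking without
// scanning every vector, which is what makes large indexes fast.
function knn(query: number[], vectors: number[][], k: number): number[] {
  const dist = (a: number[], b: number[]) =>
    Math.sqrt(a.reduce((s, x, i) => s + (x - b[i]) ** 2, 0));
  return vectors
    .map((v, i) => ({ i, d: dist(query, v) }))
    .sort((a, b) => a.d - b.d)
    .slice(0, k)
    .map((x) => x.i);
}

// The two closest vectors to the origin are at indices 1 and 2.
console.log(knn([0, 0], [[5, 5], [1, 0], [0, 1]], 2));
```

Brute force is O(n) per query; ANN structures answer the same question in roughly logarithmic time at the cost of occasionally missing a true neighbor.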
Setup[](#setup "Direct link to Setup")
---------------------------------------
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install -S @langchain/openai @opensearch-project/opensearch
yarn add @langchain/openai @opensearch-project/opensearch
pnpm add @langchain/openai @opensearch-project/opensearch
You'll also need to have an OpenSearch instance running. You can use the [official Docker image](https://opensearch.org/docs/latest/opensearch/install/docker/) to get started. You can also find an example docker-compose file [here](https://github.com/langchain-ai/langchainjs/blob/main/examples/src/indexes/vector_stores/opensearch/docker-compose.yml).
Index docs[](#index-docs "Direct link to Index docs")
------------------------------------------------------
```typescript
import { Client } from "@opensearch-project/opensearch";
import { Document } from "langchain/document";
import { OpenAIEmbeddings } from "@langchain/openai";
import { OpenSearchVectorStore } from "langchain/vectorstores/opensearch";

const client = new Client({
  nodes: [process.env.OPENSEARCH_URL ?? "http://127.0.0.1:9200"],
});

const docs = [
  new Document({
    metadata: { foo: "bar" },
    pageContent: "opensearch is also a vector db",
  }),
  new Document({
    metadata: { foo: "bar" },
    pageContent: "the quick brown fox jumped over the lazy dog",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "lorem ipsum dolor sit amet",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent:
      "OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications",
  }),
];

await OpenSearchVectorStore.fromDocuments(docs, new OpenAIEmbeddings(), {
  client,
  indexName: process.env.OPENSEARCH_INDEX, // Will default to `documents`
});
```
Query docs
----------
```typescript
import { Client } from "@opensearch-project/opensearch";
import { VectorDBQAChain } from "langchain/chains";
import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { OpenSearchVectorStore } from "langchain/vectorstores/opensearch";

const client = new Client({
  nodes: [process.env.OPENSEARCH_URL ?? "http://127.0.0.1:9200"],
});

const vectorStore = new OpenSearchVectorStore(new OpenAIEmbeddings(), {
  client,
});

/* Search the vector DB independently with meta filters */
const results = await vectorStore.similaritySearch("hello world", 1);
console.log(JSON.stringify(results, null, 2));
/*
[
  {
    "pageContent": "Hello world",
    "metadata": {
      "id": 2
    }
  }
]
*/

/* Use as part of a chain (currently no metadata filters) */
const model = new OpenAI();
const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
  k: 1,
  returnSourceDocuments: true,
});
const response = await chain.call({ query: "What is opensearch?" });
console.log(JSON.stringify(response, null, 2));
/*
{
  "text": " Opensearch is a collection of technologies that allow search engines to publish search results in a standard format, making it easier for users to search across multiple sites.",
  "sourceDocuments": [
    {
      "pageContent": "What's this?",
      "metadata": {
        "id": 3
      }
    }
  ]
}
*/
```
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc.
Neon Postgres
=============
Neon is a fully managed serverless PostgreSQL database. It separates storage and compute to offer features such as instant branching and automatic scaling.
With the `pgvector` extension, Neon provides a vector store that can be used with LangChain.js to store and query embeddings.
Setup
-----
### Select a Neon project
If you do not have a Neon account, sign up for one at [Neon](https://neon.tech). After logging into the Neon Console, proceed to the [Projects](https://console.neon.tech/app/projects) section and select an existing project or create a new one.
Your Neon project comes with a ready-to-use Postgres database named `neondb` that you can use to store embeddings. Navigate to the Connection Details section to find your database connection string. It should look similar to this:
```
postgres://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require
```
Keep your connection string handy for later use.
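The connection string is a standard Postgres URL, so (as an aside, not something LangChain requires) you can inspect its parts with the built-in WHATWG `URL` class. The credentials below are the placeholder values from the example above, not real ones:

```typescript
// Illustrative: pick apart a Neon connection string with the standard URL class.
const conn = new URL(
  "postgres://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require"
);

console.log(conn.hostname); // ep-cool-darkness-123456.us-east-2.aws.neon.tech
console.log(conn.username); // alex
console.log(conn.pathname.slice(1)); // dbname
console.log(conn.searchParams.get("sslmode")); // require
```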
### Application code
To work with Neon Postgres, you need to install the `@neondatabase/serverless` package which provides a JavaScript/TypeScript driver to connect to the database.
```bash
npm install @neondatabase/serverless
# or with yarn
yarn add @neondatabase/serverless
# or with pnpm
pnpm add @neondatabase/serverless
```
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community
# or with yarn
yarn add @langchain/community
# or with pnpm
pnpm add @langchain/community
```
To initialize a `NeonPostgres` vectorstore, you need to provide your Neon database connection string. You can use the connection string we fetched above directly, or store it as an environment variable and use it in your code.
```typescript
const vectorStore = await NeonPostgres.initialize(embeddings, {
  connectionString: NEON_POSTGRES_CONNECTION_STRING,
});
```
Usage
-----
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { NeonPostgres } from "@langchain/community/vectorstores/neon";

// Initialize an embeddings instance
const embeddings = new OpenAIEmbeddings({
  apiKey: process.env.OPENAI_API_KEY,
  dimensions: 256,
  model: "text-embedding-3-small",
});

// Initialize a NeonPostgres instance to store embedding vectors
const vectorStore = await NeonPostgres.initialize(embeddings, {
  connectionString: process.env.DATABASE_URL as string,
});

// You can add documents to the store, strings in the `pageContent` field will be embedded
// and stored in the database
const documents = [
  { pageContent: "Hello world", metadata: { topic: "greeting" } },
  { pageContent: "Bye bye", metadata: { topic: "greeting" } },
  {
    pageContent: "Mitochondria is the powerhouse of the cell",
    metadata: { topic: "science" },
  },
];
const idsInserted = await vectorStore.addDocuments(documents);

// You can now query the store for similar documents to the input query
const resultOne = await vectorStore.similaritySearch("hola", 1);
console.log(resultOne);
/*
[ Document { pageContent: 'Hello world', metadata: { topic: 'greeting' } } ]
*/

// You can also filter by metadata
const resultTwo = await vectorStore.similaritySearch("Irrelevant query", 2, {
  topic: "science",
});
console.log(resultTwo);
/*
[
  Document {
    pageContent: 'Mitochondria is the powerhouse of the cell',
    metadata: { topic: 'science' }
  }
]
*/

// Metadata filtering with IN-filters works as well
const resultsThree = await vectorStore.similaritySearch("Irrelevant query", 2, {
  topic: { in: ["greeting"] },
});
console.log(resultsThree);
/*
[
  Document { pageContent: 'Bye bye', metadata: { topic: 'greeting' } },
  Document { pageContent: 'Hello world', metadata: { topic: 'greeting' } }
]
*/

// Upserting is supported as well
await vectorStore.addDocuments(
  [
    {
      pageContent: "ATP is the powerhouse of the cell",
      metadata: { topic: "science" },
    },
  ],
  { ids: [idsInserted[2]] }
);

const resultsFour = await vectorStore.similaritySearch(
  "powerhouse of the cell",
  1
);
console.log(resultsFour);
/*
[
  Document {
    pageContent: 'ATP is the powerhouse of the cell',
    metadata: { topic: 'science' }
  }
]
*/
```
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [NeonPostgres](https://api.js.langchain.com/classes/langchain_community_vectorstores_neon.NeonPostgres.html) from `@langchain/community/vectorstores/neon`
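The exact filter grammar is defined by the store, but the semantics of the plain-equality and `in` filters used above can be sketched with a tiny standalone matcher (a hypothetical helper for illustration, not part of `@langchain/community`):

```typescript
// Hypothetical illustration of the metadata filter semantics used above:
// a plain value means equality, { in: [...] } means set membership.
type Filter = Record<string, unknown>;

function matchesFilter(
  metadata: Record<string, unknown>,
  filter: Filter
): boolean {
  return Object.entries(filter).every(([key, cond]) => {
    if (cond !== null && typeof cond === "object" && "in" in cond) {
      return (cond as { in: unknown[] }).in.includes(metadata[key]);
    }
    return metadata[key] === cond;
  });
}

console.log(matchesFilter({ topic: "science" }, { topic: "science" })); // true
console.log(
  matchesFilter({ topic: "greeting" }, { topic: { in: ["greeting", "farewell"] } })
); // true
console.log(matchesFilter({ topic: "science" }, { topic: { in: ["greeting"] } })); // false
```

In the real store these predicates are translated into SQL `WHERE` clauses against the metadata column rather than evaluated in JavaScript.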
PGVector
========
To enable vector search in a generic PostgreSQL database, LangChain.js supports using the [`pgvector`](https://github.com/pgvector/pgvector) Postgres extension.
Setup
-----
To work with PGVector, you need to install the `pg` package:
```bash
npm install pg
# or with yarn
yarn add pg
# or with pnpm
pnpm add pg
```
### Set up a `pgvector` self-hosted instance with `docker-compose`
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai @langchain/community
# or with yarn
yarn add @langchain/openai @langchain/community
# or with pnpm
pnpm add @langchain/openai @langchain/community
```
`pgvector` provides a prebuilt Docker image that can be used to quickly set up a self-hosted Postgres instance. Create a file named `docker-compose.yml` with the following contents:
```yaml
# Run this command to start the database:
# docker-compose up --build
version: "3"
services:
  db:
    hostname: 127.0.0.1
    image: ankane/pgvector
    ports:
      - 5432:5432
    restart: always
    environment:
      - POSTGRES_DB=api
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=ChangeMe
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
```
And then in the same directory, run `docker compose up` to start the container.
You can find more information on how to set up `pgvector` in the [official repository](https://github.com/pgvector/pgvector).
Usage
-----
Security
User-generated data such as usernames should not be used as input for table and column names.
**This may lead to SQL Injection!**
One complete example of using `PGVectorStore` is the following:
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import {
  DistanceStrategy,
  PGVectorStore,
} from "@langchain/community/vectorstores/pgvector";
import { PoolConfig } from "pg";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/pgvector
const config = {
  postgresConnectionOptions: {
    type: "postgres",
    host: "127.0.0.1",
    port: 5433,
    user: "myuser",
    password: "ChangeMe",
    database: "api",
  } as PoolConfig,
  tableName: "testlangchain",
  columns: {
    idColumnName: "id",
    vectorColumnName: "vector",
    contentColumnName: "content",
    metadataColumnName: "metadata",
  },
  // supported distance strategies: cosine (default), innerProduct, or euclidean
  distanceStrategy: "cosine" as DistanceStrategy,
};

const pgvectorStore = await PGVectorStore.initialize(
  new OpenAIEmbeddings(),
  config
);

await pgvectorStore.addDocuments([
  { pageContent: "what's this", metadata: { a: 2 } },
  { pageContent: "Cat drinks milk", metadata: { a: 1 } },
]);

const results = await pgvectorStore.similaritySearch("water", 1);
console.log(results);
/*
[ Document { pageContent: 'Cat drinks milk', metadata: { a: 1 } } ]
*/

// Filtering is supported
const results2 = await pgvectorStore.similaritySearch("water", 1, {
  a: 2,
});
console.log(results2);
/*
[ Document { pageContent: "what's this", metadata: { a: 2 } } ]
*/

// Filtering on multiple values using "in" is supported too
const results3 = await pgvectorStore.similaritySearch("water", 1, {
  a: {
    in: [2],
  },
});
console.log(results3);
/*
[ Document { pageContent: "what's this", metadata: { a: 2 } } ]
*/

await pgvectorStore.delete({
  filter: {
    a: 1,
  },
});

const results4 = await pgvectorStore.similaritySearch("water", 1);
console.log(results4);
/*
[ Document { pageContent: "what's this", metadata: { a: 2 } } ]
*/

await pgvectorStore.end();
```
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [DistanceStrategy](https://api.js.langchain.com/types/langchain_community_vectorstores_pgvector.DistanceStrategy.html) from `@langchain/community/vectorstores/pgvector`
* [PGVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_pgvector.PGVectorStore.html) from `@langchain/community/vectorstores/pgvector`
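The `distanceStrategy` option in the config above selects how pgvector compares vectors. As a rough sketch (plain TypeScript over raw number arrays, not the extension's SQL operators), the three strategies compute:

```typescript
// Rough illustration of pgvector's three distance strategies.
const dot = (a: number[], b: number[]) =>
  a.reduce((sum, x, i) => sum + x * b[i], 0);

// innerProduct: larger dot product = more similar
const innerProduct = (a: number[], b: number[]) => dot(a, b);

// euclidean: straight-line (L2) distance, smaller = more similar
const euclidean = (a: number[], b: number[]) =>
  Math.sqrt(a.reduce((sum, x, i) => sum + (x - b[i]) ** 2, 0));

// cosine distance: 1 - cosine similarity, smaller = more similar
const cosine = (a: number[], b: number[]) =>
  1 - dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));

console.log(innerProduct([1, 2], [3, 4])); // 11
console.log(euclidean([0, 0], [3, 4])); // 5
console.log(cosine([1, 0], [1, 0])); // 0
```

Cosine is a reasonable default for text embeddings because it ignores vector magnitude; inner product can be faster when embeddings are already normalized.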
You can also specify a `collectionTableName` and a `collectionName` to partition vectors between multiple users or namespaces.
### Advanced: reusing connections
You can reuse connections by creating a pool, then creating new `PGVectorStore` instances directly via the constructor.
Note that you should call `.initialize()` at least once to set up your tables properly before using the constructor.
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { PGVectorStore } from "@langchain/community/vectorstores/pgvector";
import pg from "pg";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/pgvector
const reusablePool = new pg.Pool({
  host: "127.0.0.1",
  port: 5433,
  user: "myuser",
  password: "ChangeMe",
  database: "api",
});

const originalConfig = {
  pool: reusablePool,
  tableName: "testlangchain",
  collectionName: "sample",
  collectionTableName: "collections",
  columns: {
    idColumnName: "id",
    vectorColumnName: "vector",
    contentColumnName: "content",
    metadataColumnName: "metadata",
  },
};

// Set up the DB.
// Can skip this step if you've already initialized the DB.
// await PGVectorStore.initialize(new OpenAIEmbeddings(), originalConfig);
const pgvectorStore = new PGVectorStore(new OpenAIEmbeddings(), originalConfig);

await pgvectorStore.addDocuments([
  { pageContent: "what's this", metadata: { a: 2 } },
  { pageContent: "Cat drinks milk", metadata: { a: 1 } },
]);

const results = await pgvectorStore.similaritySearch("water", 1);
console.log(results);
/*
[ Document { pageContent: 'Cat drinks milk', metadata: { a: 1 } } ]
*/

const pgvectorStore2 = new PGVectorStore(new OpenAIEmbeddings(), {
  pool: reusablePool,
  tableName: "testlangchain",
  collectionTableName: "collections",
  collectionName: "some_other_collection",
  columns: {
    idColumnName: "id",
    vectorColumnName: "vector",
    contentColumnName: "content",
    metadataColumnName: "metadata",
  },
});

const results2 = await pgvectorStore2.similaritySearch("water", 1);
console.log(results2);
/*
[]
*/

await reusablePool.end();
```
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [PGVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_pgvector.PGVectorStore.html) from `@langchain/community/vectorstores/pgvector`