https://js.langchain.com/v0.2/docs/integrations/vectorstores/astradb
Astra DB
========

Compatibility: only available on Node.js.

DataStax [Astra DB](https://astra.datastax.com/register) is a serverless vector-capable database built on [Apache Cassandra](https://cassandra.apache.org/_/index.html) and made conveniently available through an easy-to-use JSON API.

Setup
-----

1. Create an [Astra DB account](https://astra.datastax.com/register).
2. Create a [vector enabled database](https://astra.datastax.com/createDatabase).
3. Grab your `API Endpoint` and `Token` from the Database Details.
4. Set the following environment variables, where `ASTRA_DB_COLLECTION` is the desired name of your collection:

```bash
export ASTRA_DB_APPLICATION_TOKEN=YOUR_ASTRA_DB_APPLICATION_TOKEN_HERE
export ASTRA_DB_ENDPOINT=YOUR_ASTRA_DB_ENDPOINT_HERE
export ASTRA_DB_COLLECTION=YOUR_ASTRA_DB_COLLECTION_HERE
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY_HERE
```

5. Install the Astra TS client and the LangChain community package. See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai @datastax/astra-db-ts @langchain/community
# or: yarn add / pnpm add with the same packages
```

Indexing docs
-------------

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import {
  AstraDBVectorStore,
  AstraLibArgs,
} from "@langchain/community/vectorstores/astradb";

const astraConfig: AstraLibArgs = {
  token: process.env.ASTRA_DB_APPLICATION_TOKEN as string,
  endpoint: process.env.ASTRA_DB_ENDPOINT as string,
  collection: process.env.ASTRA_DB_COLLECTION ?? "langchain_test",
  collectionOptions: {
    vector: {
      dimension: 1536,
      metric: "cosine",
    },
  },
};

const vectorStore = await AstraDBVectorStore.fromTexts(
  [
    "AstraDB is built on Apache Cassandra",
    "AstraDB is a NoSQL DB",
    "AstraDB supports vector search",
  ],
  [{ foo: "foo" }, { foo: "bar" }, { foo: "baz" }],
  new OpenAIEmbeddings(),
  astraConfig
);

// Querying docs:
const results = await vectorStore.similaritySearch("Cassandra", 1);

// or filtered query:
const filteredQueryResults = await vectorStore.similaritySearch("A", 1, {
  foo: "bar",
});
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [AstraDBVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_astradb.AstraDBVectorStore.html) from `@langchain/community/vectorstores/astradb`
* [AstraLibArgs](https://v02.api.js.langchain.com/interfaces/langchain_community_vectorstores_astradb.AstraLibArgs.html) from `@langchain/community/vectorstores/astradb`

Vector Types
------------

Astra DB supports `cosine` (the default), `dot_product`, and `euclidean` similarity search; this is defined when the vector store is first created, as part of the `CreateCollectionOptions`:

```typescript
vector: {
  dimension: number;
  metric?: "cosine" | "euclidean" | "dot_product";
};
```
https://js.langchain.com/v0.2/docs/integrations/vectorstores/azure_aisearch
Azure AI Search
===============

[Azure AI Search](https://azure.microsoft.com/products/ai-services/ai-search) (formerly known as Azure Search and Azure Cognitive Search) is a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads on Azure.
It also supports vector search using the [k-nearest neighbor](https://en.wikipedia.org/wiki/Nearest_neighbor_search) (kNN) algorithm, as well as [semantic search](https://learn.microsoft.com/azure/search/semantic-search-overview). This vector store integration supports full text search, vector search, and [hybrid search for best ranking performance](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/azure-cognitive-search-outperforming-vector-search-with-hybrid/ba-p/3929167).

Learn how to leverage the vector search capabilities of Azure AI Search from [this page](https://learn.microsoft.com/azure/search/vector-search-overview). If you don't have an Azure account, you can [create a free account](https://azure.microsoft.com/free/) to get started.

Setup
-----

You'll first need to install the `@azure/search-documents` SDK and the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package. See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install -S @langchain/community @azure/search-documents
# or: yarn add / pnpm add with the same packages
```

You'll also need to have an Azure AI Search instance running. You can deploy a free version on the Azure Portal without any cost, following [this guide](https://learn.microsoft.com/azure/search/search-create-service-portal).

Once you have your instance running, make sure you have the endpoint and the admin key (query keys can only be used to search documents, not to index, update, or delete). The endpoint is the URL of your instance, which you can find in the Azure Portal under the "Overview" section. The admin key can be found under the "Keys" section of your instance.
Then you need to set the following environment variables:

```bash
# Azure AI Search connection settings
AZURE_AISEARCH_ENDPOINT=
AZURE_AISEARCH_KEY=

# If you're using the Azure OpenAI API, you'll need to set these variables
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_API_INSTANCE_NAME=
AZURE_OPENAI_API_DEPLOYMENT_NAME=
AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=
AZURE_OPENAI_API_VERSION=

# Or you can use the OpenAI API directly
OPENAI_API_KEY=
```

About hybrid search
-------------------

Hybrid search is a feature that combines the strengths of full text search and vector search to provide the best ranking performance. It's enabled by default in Azure AI Search vector stores, but you can select a different search query type by setting the `search.type` property when creating the vector store. You can read more about hybrid search and how it may improve your search results in the [official documentation](https://learn.microsoft.com/azure/search/hybrid-search-overview).

In some scenarios like retrieval-augmented generation (RAG), you may want to enable **semantic ranking** in addition to hybrid search to improve the relevance of the search results. You can enable semantic ranking by setting the `search.type` property to `AzureAISearchQueryType.SemanticHybrid` when creating the vector store. Note that semantic ranking capabilities are only available in the Basic and higher pricing tiers, and subject to [regional availability](https://azure.microsoft.com/en-us/explore/global-infrastructure/products-by-region/?products=search). You can read more about the performance of using semantic ranking with hybrid search in [this blog post](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/azure-cognitive-search-outperforming-vector-search-with-hybrid/ba-p/3929167).
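As a hedged sketch of the `search.type` choice described above (the string literals below are assumptions mirroring the `AzureAISearchQueryType` values; check the API reference for the authoritative enum):

```typescript
// Illustrative sketch: selecting the query type when creating the vector store.
// The literal values are assumed stand-ins for AzureAISearchQueryType members,
// not the library's own exports.
type QueryType = "similarity" | "similarity_hybrid" | "semantic_hybrid";

function makeStoreConfig(type: QueryType) {
  return { search: { type } };
}

// Semantic ranking on top of hybrid search (Basic tier and above):
const searchConfig = makeStoreConfig("semantic_hybrid");
console.log(searchConfig.search.type); // semantic_hybrid
```

In practice you would pass such a config object as the third argument to `AzureAISearchVectorStore.fromDocuments`, as the full example below the setup section shows.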
Example: index docs, vector search and LLM integration
------------------------------------------------------

Below is an example that indexes documents from a file in Azure AI Search, runs a hybrid search query, and finally uses a chain to answer a question in natural language based on the retrieved documents.

```typescript
import {
  AzureAISearchVectorStore,
  AzureAISearchQueryType,
} from "@langchain/community/vectorstores/azure_aisearch";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

// Load documents from file
const loader = new TextLoader("./state_of_the_union.txt");
const rawDocuments = await loader.load();
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 0,
});
const documents = await splitter.splitDocuments(rawDocuments);

// Create Azure AI Search vector store
const store = await AzureAISearchVectorStore.fromDocuments(
  documents,
  new OpenAIEmbeddings(),
  {
    search: {
      type: AzureAISearchQueryType.SimilarityHybrid,
    },
  }
);

// The first time you run this, the index will be created.
// You may need to wait a bit for the index to be created before you can perform
// a search, or you can create the index manually beforehand.

// Performs a similarity search
const resultDocuments = await store.similaritySearch(
  "What did the president say about Ketanji Brown Jackson?"
);

console.log("Similarity search results:");
console.log(resultDocuments[0].pageContent);
/*
  Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act.
  And while you're at it, pass the Disclose Act so Americans can know who is funding our elections.

  Tonight, I'd like to honor someone who has dedicated his life to serve this country: Justice Stephen
  Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme
  Court. Justice Breyer, thank you for your service.

  One of the most serious constitutional responsibilities a President has is nominating someone to
  serve on the United States Supreme Court.

  And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson.
  One of our nation's top legal minds, who will continue Justice Breyer's legacy of excellence.
*/

// Use the store as part of a chain
const model = new ChatOpenAI({ model: "gpt-3.5-turbo-1106" });
const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  ["human", "{input}"],
]);

const combineDocsChain = await createStuffDocumentsChain({
  llm: model,
  prompt: questionAnsweringPrompt,
});

const chain = await createRetrievalChain({
  retriever: store.asRetriever(),
  combineDocsChain,
});

const response = await chain.invoke({
  input: "What is the president's top priority regarding prices?",
});

console.log("Chain response:");
console.log(response.answer);
/*
  The president's top priority is getting prices under control.
*/
```

#### API Reference:

* [AzureAISearchVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_azure_aisearch.AzureAISearchVectorStore.html) from `@langchain/community/vectorstores/azure_aisearch`
* [AzureAISearchQueryType](https://v02.api.js.langchain.com/types/langchain_community_vectorstores_azure_aisearch.AzureAISearchQueryType.html) from `@langchain/community/vectorstores/azure_aisearch`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [createStuffDocumentsChain](https://v02.api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents`
* [createRetrievalChain](https://v02.api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
https://js.langchain.com/v0.2/docs/integrations/vectorstores/cassandra
Cassandra
=========

Compatibility: only available on Node.js.

[Apache Cassandra®](https://cassandra.apache.org/_/index.html) is a NoSQL, row-oriented, highly scalable and highly available database.
The [latest version](https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-30%3A+Approximate+Nearest+Neighbor(ANN)+Vector+Search+via+Storage-Attached+Indexes) of Apache Cassandra natively supports vector similarity search.

Setup
-----

First, install the Cassandra Node.js driver along with the LangChain packages. See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install cassandra-driver @langchain/community @langchain/openai
# or: yarn add / pnpm add with the same packages
```

Depending on your database provider, the specifics of how to connect to the database will vary. We will create a document `configConnection` which will be used as part of the vector store configuration.

### Apache Cassandra®

Vector search is supported in [Apache Cassandra® 5.0](https://cassandra.apache.org/_/Apache-Cassandra-5.0-Moving-Toward-an-AI-Driven-Future.html) and above. You can use a standard connection document, for example:

```typescript
const configConnection = {
  contactPoints: ["h1", "h2"],
  localDataCenter: "datacenter1",
  credentials: {
    username: <...> as string,
    password: <...> as string,
  },
};
```

### Astra DB

Astra DB is a cloud-native Cassandra-as-a-Service platform.

1. Create an [Astra DB account](https://astra.datastax.com/register).
2. Create a [vector enabled database](https://astra.datastax.com/createDatabase).
3. Create a [token](https://docs.datastax.com/en/astra/docs/manage-application-tokens.html) for your database.

```typescript
const configConnection = {
  serviceProviderArgs: {
    astra: {
      token: <...> as string,
      endpoint: <...> as string,
    },
  },
};
```

Instead of `endpoint:`, you may provide the property `datacenterID:` and optionally `regionName:`.
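The two connection documents above are mutually exclusive shapes for the same `configConnection` slot. A hedged sketch of picking one at runtime from environment variables (the `CASSANDRA_*` variable names here are illustrative assumptions, not ones the library reads itself):

```typescript
// Illustrative sketch: build a configConnection document for either a
// self-hosted Cassandra cluster or Astra DB, depending on which env vars exist.
const useAstra = Boolean(process.env.ASTRA_DB_APPLICATION_TOKEN);

const configConnection = useAstra
  ? {
      serviceProviderArgs: {
        astra: {
          token: process.env.ASTRA_DB_APPLICATION_TOKEN as string,
          endpoint: process.env.ASTRA_DB_ENDPOINT as string,
        },
      },
    }
  : {
      // Hypothetical env var names for a self-hosted cluster:
      contactPoints: (process.env.CASSANDRA_CONTACT_POINTS ?? "127.0.0.1").split(","),
      localDataCenter: process.env.CASSANDRA_LOCAL_DC ?? "datacenter1",
      credentials: {
        username: process.env.CASSANDRA_USER ?? "cassandra",
        password: process.env.CASSANDRA_PASSWORD ?? "cassandra",
      },
    };

console.log("serviceProviderArgs" in configConnection ? "astra" : "cassandra");
```

Either shape is then spread into the vector store config, as the indexing example below shows.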
Indexing docs
-------------

```typescript
import { CassandraStore } from "langchain/vectorstores/cassandra";
import { OpenAIEmbeddings } from "@langchain/openai";

// The configConnection document is defined above
const config = {
  ...configConnection,
  keyspace: "test",
  dimensions: 1536,
  table: "test",
  indices: [{ name: "name", value: "(name)" }],
  primaryKey: {
    name: "id",
    type: "int",
  },
  metadataColumns: [
    {
      name: "name",
      type: "text",
    },
  ],
};

const vectorStore = await CassandraStore.fromTexts(
  ["I am blue", "Green yellow purple", "Hello there hello"],
  [
    { id: 2, name: "2" },
    { id: 1, name: "1" },
    { id: 3, name: "3" },
  ],
  new OpenAIEmbeddings(),
  config
);
```

Querying docs
-------------

```typescript
const results = await vectorStore.similaritySearch("Green yellow purple", 1);
```

or a filtered query:

```typescript
const results = await vectorStore.similaritySearch("B", 1, { name: "Bubba" });
```

Vector Types
------------

Cassandra supports `cosine` (the default), `dot_product`, and `euclidean` similarity search; this is defined when the vector store is first created, and specified in the constructor parameter `vectorType`, for example:

```typescript
  ...,
  vectorType: "dot_product",
  ...
```

Indices
-------

With version 5, Cassandra introduced Storage Attached Indexes (SAIs). These allow `WHERE` filtering without specifying the partition key, and allow for additional operator types such as non-equalities. You can define these with the `indices` parameter, which accepts zero or more dictionaries, each containing `name` and `value` entries. Indices are optional, though they are required when using filtered queries on non-partition columns.
* The `name` entry is part of the object name; on a table named `test_table`, an index with `name: "some_column"` would be `idx_test_table_some_column`.
* The `value` entry is the column on which the index is created, surrounded by `(` and `)`. For the column `some_column` above, it would be specified as `value: "(some_column)"`.
* An optional `options` entry is a map passed to the `WITH OPTIONS =` clause of the `CREATE CUSTOM INDEX` statement. The specific entries on this map are index-type specific.

```typescript
  indices: [{ name: "some_column", value: "(some_column)" }],
```

Advanced Filtering
------------------

By default, filters are applied with an equality `=`. For fields that have an `indices` entry, you may provide an `operator` with a string value supported by the index; in this case, you specify one or more filters, either as a singleton or in a list (which will be `AND`-ed together). For example:

```typescript
{ name: "create_datetime", operator: ">", value: some_datetime_variable }
```

or

```typescript
[
  { userid: userid_variable },
  { name: "create_datetime", operator: ">", value: some_date_variable },
];
```

`value` can be a single value or an array. If it is not an array, or there is only one element in `value`, the resulting query will be along the lines of `${name} ${operator} ?`, with `value` bound to the `?`. If there is more than one element in the `value` array, the number of unquoted `?` in `name` is counted and subtracted from the length of `value`; that many `?` are placed on the right side of the operator, and if there is more than one `?` they are wrapped in `(` and `)`, e.g. `(?, ?, ?)`.
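The placeholder-counting rules above can be sketched in plain TypeScript. This is an illustrative re-implementation of the described binding logic, not the library's actual code, and `renderFilterClause` is a hypothetical name:

```typescript
// Illustrative sketch of the binding rules described above — for explanation
// only, NOT the library's actual implementation.
function renderFilterClause(
  name: string,
  operator: string,
  value: unknown
): string {
  const values = Array.isArray(value) ? value : [value];
  // Count the "?" placeholders already present in `name`
  // (the library counts only unquoted ones; this sketch counts all).
  const inName = (name.match(/\?/g) ?? []).length;
  const onRight = values.length - inName;
  if (onRight <= 1) {
    return `${name} ${operator} ?`;
  }
  // More than one right-hand placeholder: wrap them as (?, ?, ?)
  const placeholders = Array(onRight).fill("?").join(", ");
  return `${name} ${operator} (${placeholders})`;
}

console.log(renderFilterClause("name", "IN", ["alex", "blake", "casey"]));
// name IN (?, ?, ?)
console.log(renderFilterClause("userid", "=", "user-123"));
// userid = ?
```

With a `?` already on the left (as in the geo-distance example below), one value binds to it and only the remainder appear on the right of the operator.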
This facilitates bind values on the left of the operator, which is useful for some functions; for example, a geo-distance filter:

```typescript
{
  name: "GEO_DISTANCE(coord, ?)",
  operator: "<",
  value: [new Float32Array([53.3730617, -6.3000515]), 10000],
},
```

Data Partitioning and Composite Keys
------------------------------------

In some systems, you may wish to partition the data for various reasons, perhaps by user or by session. Data in Cassandra is always partitioned; by default this library will partition by the first primary key field. You may specify multiple columns that comprise the primary (unique) key of a record, and optionally indicate which fields should be part of the partition key. For example, the vector store could be partitioned by both `userid` and `collectionid`, with additional fields `docid` and `docpart` making an individual entry unique:

```typescript
  ...,
  primaryKey: [
    { name: "userid", type: "text", partition: true },
    { name: "collectionid", type: "text", partition: true },
    { name: "docid", type: "text" },
    { name: "docpart", type: "int" },
  ],
  ...
```

When searching, you may include partition keys in the filter without defining `indices` for these columns; you do not need to specify all partition keys, but you must specify those earliest in the key first. In the above example, you could specify a filter of `{userid: userid_variable}` or `{userid: userid_variable, collectionid: collectionid_variable}`, but if you wanted to filter on only `{collectionid: collectionid_variable}`, you would have to include `collectionid` in the `indices` list.
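As a concrete sketch of these rules (all values hypothetical), valid filter documents for the `userid`/`collectionid` partitioning above would look like:

```typescript
// Hypothetical filter documents for the userid/collectionid partitioning above.
// Partition keys may be filtered without `indices`, but must be supplied
// leftmost-first: userid alone works; collectionid alone would need an index.
const byUser = { userid: "user-123" };
const byUserAndCollection = {
  userid: "user-123",
  collectionid: "collection-456",
};

// Usage sketch: await vectorStore.similaritySearch("query text", 5, byUser);
console.log(Object.keys(byUserAndCollection));
// [ 'userid', 'collectionid' ]
```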
Additional Configuration Options
--------------------------------

Further optional parameters are available in the configuration document; their defaults are:

```typescript
  ...,
  maxConcurrency: 25,
  batchSize: 1,
  withClause: "",
  ...
```

| Parameter | Usage |
| --- | --- |
| `maxConcurrency` | How many concurrent requests will be sent to Cassandra at a given time. |
| `batchSize` | How many documents will be sent in a single request to Cassandra. When using a value > 1, you should ensure your batch size will not exceed the Cassandra parameter `batch_size_fail_threshold_in_kb`. Batches are unlogged. |
| `withClause` | Cassandra tables may be created with an optional `WITH` clause; this is generally not needed but is provided for completeness. |
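Pulling the options from the sections above into one place, a complete configuration document might look like the sketch below. All values are illustrative, and the connection fields stand in for the `configConnection` document from Setup:

```typescript
// Assembled, illustrative configuration — values are examples only.
const configConnection = {
  contactPoints: ["h1", "h2"],
  localDataCenter: "datacenter1",
};

const config = {
  ...configConnection,
  keyspace: "test",
  table: "test",
  dimensions: 1536,
  vectorType: "cosine", // default; also "dot_product" or "euclidean"
  primaryKey: [
    { name: "userid", type: "text", partition: true },
    { name: "docid", type: "text" },
  ],
  metadataColumns: [{ name: "name", type: "text" }],
  indices: [{ name: "name", value: "(name)" }],
  maxConcurrency: 25, // concurrent requests sent to Cassandra
  batchSize: 1, // documents per (unlogged) batch
  withClause: "", // optional CREATE TABLE ... WITH clause
};

console.log(config.localDataCenter); // datacenter1
```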
* * *

Community

* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)

GitHub

* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)

More

* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)

Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.2/docs/integrations/vectorstores/azure_cosmosdb
Azure Cosmos DB
===============

> [Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/) makes it easy to create a database with full native MongoDB support.
You can apply your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB vCore account’s connection string.

Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications with your data that’s stored in Azure Cosmos DB. Azure Cosmos DB for MongoDB vCore provides developers with a fully managed MongoDB-compatible database service for building modern applications with a familiar architecture.

Learn how to leverage the vector search capabilities of Azure Cosmos DB for MongoDB vCore from [this page](https://learn.microsoft.com/azure/cosmos-db/mongodb/vcore/vector-search). If you don't have an Azure account, you can [create a free account](https://azure.microsoft.com/free/) to get started.

Setup
-----

You'll first need to install the `mongodb` SDK and the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai mongodb @langchain/community
# or
yarn add @langchain/openai mongodb @langchain/community
# or
pnpm add @langchain/openai mongodb @langchain/community
```

You'll also need to have an Azure Cosmos DB for MongoDB vCore instance running. You can deploy a free version on Azure Portal without any cost, following [this guide](https://learn.microsoft.com/azure/cosmos-db/mongodb/vcore/quickstart-portal).

Once you have your instance running, make sure you have the connection string and the admin key. You can find them in the Azure Portal, under the "Connection strings" section of your instance.
Then you need to set the following environment variables:

```bash
# Azure CosmosDB for MongoDB vCore connection string
AZURE_COSMOSDB_CONNECTION_STRING=

# If you're using Azure OpenAI API, you'll need to set these variables
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_API_INSTANCE_NAME=
AZURE_OPENAI_API_DEPLOYMENT_NAME=
AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=
AZURE_OPENAI_API_VERSION=

# Or you can use the OpenAI API directly
OPENAI_API_KEY=
```

Example
-------

Below is an example that indexes documents from a file in Azure Cosmos DB for MongoDB vCore, runs a vector search query, and finally uses a chain to answer a question in natural language based on the retrieved documents.

```typescript
import {
  AzureCosmosDBVectorStore,
  AzureCosmosDBSimilarityType,
} from "@langchain/community/vectorstores/azure_cosmosdb";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

// Load documents from file
const loader = new TextLoader("./state_of_the_union.txt");
const rawDocuments = await loader.load();
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 0,
});
const documents = await splitter.splitDocuments(rawDocuments);

// Create Azure Cosmos DB vector store
const store = await AzureCosmosDBVectorStore.fromDocuments(
  documents,
  new OpenAIEmbeddings(),
  {
    databaseName: "langchain",
    collectionName: "documents",
    indexOptions: {
      numLists: 100,
      dimensions: 1536,
      similarity: AzureCosmosDBSimilarityType.COS,
    },
  }
);

// Performs a similarity search
const resultDocuments = await store.similaritySearch(
  "What did the president say about Ketanji Brown Jackson?"
);
console.log("Similarity search results:");
console.log(resultDocuments[0].pageContent);
/*
  Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John
  Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so
  Americans can know who is funding our elections.

  Tonight, I’d like to honor someone who has dedicated his life to serve this
  country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and
  retiring Justice of the United States Supreme Court. Justice Breyer, thank you
  for your service.

  One of the most serious constitutional responsibilities a President has is
  nominating someone to serve on the United States Supreme Court.

  And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge
  Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue
  Justice Breyer’s legacy of excellence.
*/

// Use the store as part of a chain
const model = new ChatOpenAI({ model: "gpt-3.5-turbo-1106" });
const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  ["human", "{input}"],
]);
const combineDocsChain = await createStuffDocumentsChain({
  llm: model,
  prompt: questionAnsweringPrompt,
});
const chain = await createRetrievalChain({
  retriever: store.asRetriever(),
  combineDocsChain,
});
const res = await chain.invoke({
  input: "What is the president's top priority regarding prices?",
});
console.log("Chain response:");
console.log(res.answer);
/*
  The president's top priority is getting prices under control.
*/

// Clean up
await store.delete();
await store.close();
```

#### API Reference:

* [AzureCosmosDBVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_azure_cosmosdb.AzureCosmosDBVectorStore.html) from `@langchain/community/vectorstores/azure_cosmosdb`
* [AzureCosmosDBSimilarityType](https://v02.api.js.langchain.com/types/langchain_community_vectorstores_azure_cosmosdb.AzureCosmosDBSimilarityType.html) from `@langchain/community/vectorstores/azure_cosmosdb`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [createStuffDocumentsChain](https://v02.api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents`
* [createRetrievalChain](https://v02.api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
https://js.langchain.com/v0.2/docs/integrations/vectorstores/convex
Convex
======

LangChain.js supports [Convex](https://convex.dev/) as a [vector store](https://docs.convex.dev/vector-search), and supports the standard similarity search.
Setup
-----

### Create project

Get a working [Convex](https://docs.convex.dev/) project set up, for example by using:

```bash
npm create convex@latest
```

### Add database accessors

Add query and mutation helpers to `convex/langchain/db.ts`:

```typescript
// convex/langchain/db.ts
export * from "langchain/util/convex";
```

### Configure your schema

Set up your schema (for vector indexing):

```typescript
// convex/schema.ts
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  documents: defineTable({
    embedding: v.array(v.number()),
    text: v.string(),
    metadata: v.any(),
  }).vectorIndex("byEmbedding", {
    vectorField: "embedding",
    dimensions: 1536,
  }),
});
```

Usage
-----

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```

### Ingestion

```typescript
// convex/myActions.ts
"use node";
import { ConvexVectorStore } from "@langchain/community/vectorstores/convex";
import { OpenAIEmbeddings } from "@langchain/openai";
import { action } from "./_generated/server.js";

export const ingest = action({
  args: {},
  handler: async (ctx) => {
    await ConvexVectorStore.fromTexts(
      ["Hello world", "Bye bye", "What's this?"],
      [{ prop: 2 }, { prop: 1 }, { prop: 3 }],
      new OpenAIEmbeddings(),
      { ctx }
    );
  },
});
```

#### API Reference:

* [ConvexVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_convex.ConvexVectorStore.html) from `@langchain/community/vectorstores/convex`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`

### Search

```typescript
// convex/myActions.ts
"use node";
import { ConvexVectorStore } from "@langchain/community/vectorstores/convex";
import { OpenAIEmbeddings } from "@langchain/openai";
import { v } from "convex/values";
import { action } from "./_generated/server.js";

export const search = action({
  args: {
    query: v.string(),
  },
  handler: async (ctx, args) => {
    const vectorStore = new ConvexVectorStore(new OpenAIEmbeddings(), { ctx });
    const resultOne = await vectorStore.similaritySearch(args.query, 1);
    console.log(resultOne);
  },
});
```

#### API Reference:

* [ConvexVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_convex.ConvexVectorStore.html) from `@langchain/community/vectorstores/convex`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
https://js.langchain.com/v0.2/docs/integrations/vectorstores/couchbase
Postgres](/v0.2/docs/integrations/vectorstores/neon) * [OpenSearch](/v0.2/docs/integrations/vectorstores/opensearch) * [PGVector](/v0.2/docs/integrations/vectorstores/pgvector) * [Pinecone](/v0.2/docs/integrations/vectorstores/pinecone) * [Prisma](/v0.2/docs/integrations/vectorstores/prisma) * [Qdrant](/v0.2/docs/integrations/vectorstores/qdrant) * [Redis](/v0.2/docs/integrations/vectorstores/redis) * [Rockset](/v0.2/docs/integrations/vectorstores/rockset) * [SingleStore](/v0.2/docs/integrations/vectorstores/singlestore) * [Supabase](/v0.2/docs/integrations/vectorstores/supabase) * [Tigris](/v0.2/docs/integrations/vectorstores/tigris) * [Turbopuffer](/v0.2/docs/integrations/vectorstores/turbopuffer) * [TypeORM](/v0.2/docs/integrations/vectorstores/typeorm) * [Typesense](/v0.2/docs/integrations/vectorstores/typesense) * [Upstash Vector](/v0.2/docs/integrations/vectorstores/upstash) * [USearch](/v0.2/docs/integrations/vectorstores/usearch) * [Vectara](/v0.2/docs/integrations/vectorstores/vectara) * [Vercel Postgres](/v0.2/docs/integrations/vectorstores/vercel_postgres) * [Voy](/v0.2/docs/integrations/vectorstores/voy) * [Weaviate](/v0.2/docs/integrations/vectorstores/weaviate) * [Xata](/v0.2/docs/integrations/vectorstores/xata) * [Zep](/v0.2/docs/integrations/vectorstores/zep) * [Retrievers](/v0.2/docs/integrations/retrievers) * [Tools](/v0.2/docs/integrations/tools) * [Toolkits](/v0.2/docs/integrations/toolkits) * [Stores](/v0.2/docs/integrations/stores/) * [](/v0.2/) * [Components](/v0.2/docs/integrations/components) * [Vector stores](/v0.2/docs/integrations/vectorstores) * Couchbase Couchbase ========= [Couchbase](http://couchbase.com/) is an award-winning distributed NoSQL cloud database that delivers unmatched versatility, performance, scalability, and financial value for all of your cloud, mobile, AI, and edge computing applications. Couchbase embraces AI with coding assistance for developers and vector search for their applications. 
Vector Search is a part of the [Full Text Search Service](https://docs.couchbase.com/server/current/learn/services-and-indexes/services/search-service.html) (Search Service) in Couchbase.

This tutorial explains how to use Vector Search in Couchbase. You can work with both [Couchbase Capella](https://www.couchbase.com/products/capella/) and your self-managed Couchbase Server.

Installation[​](#installation "Direct link to Installation")
------------------------------------------------------------

You will need the `couchbase` SDK and the `@langchain/community` package to use the Couchbase vector store. For this tutorial, we will use OpenAI embeddings.

```shell
npm install couchbase @langchain/openai @langchain/community
# or
yarn add couchbase @langchain/openai @langchain/community
# or
pnpm add couchbase @langchain/openai @langchain/community
```

Create Couchbase Connection Object[​](#create-couchbase-connection-object "Direct link to Create Couchbase Connection Object")
------------------------------------------------------------------------------------------------------------------------------

We create a connection to the Couchbase cluster initially and then pass the cluster object to the Vector Store. Here, we are connecting using a username and password. You can also connect to your cluster using any other supported way.

For more information on connecting to the Couchbase cluster, please check the [Node SDK documentation](https://docs.couchbase.com/nodejs-sdk/current/hello-world/start-using-sdk.html#connect).
```typescript
import { Cluster } from "couchbase";

const connectionString = "couchbase://localhost"; // or couchbases://localhost if you are using TLS
const dbUsername = "Administrator"; // valid database user with read access to the bucket being queried
const dbPassword = "Password"; // password for the database user

const couchbaseClient = await Cluster.connect(connectionString, {
  username: dbUsername,
  password: dbPassword,
  configProfile: "wanDevelopment",
});
```

Create the Search Index[​](#create-the-search-index "Direct link to Create the Search Index")
---------------------------------------------------------------------------------------------

Currently, the Search index needs to be created from the Couchbase Capella or Server UI or using the REST interface. For this example, let us use the Import Index feature on the Search Service on the UI.

Let us define a Search index with the name `vector-index` on the `testing` bucket. We are defining an index on the `testing` bucket's `_default` scope on the `_default` collection with the vector field set to `embedding` with 1536 dimensions and the text field set to `text`. We are also indexing and storing all the fields under `metadata` in the document as a dynamic mapping to account for varying document structures. The similarity metric is set to `dot_product`.

### How to Import an Index to the Full Text Search service?[​](#how-to-import-an-index-to-the-full-text-search-service "Direct link to How to Import an Index to the Full Text Search service?")

* [Couchbase Server](https://docs.couchbase.com/server/current/search/import-search-index.html)
  * Click on Search -> Add Index -> Import
  * Copy the following index definition in the Import screen
  * Click on Create Index to create the index.
* [Couchbase Capella](https://docs.couchbase.com/cloud/search/import-search-index.html)
  * Copy the following index definition to a new file `index.json`
  * Import the file in Capella using the instructions in the documentation.
  * Click on Create Index to create the index.

### Index Definition[​](#index-definition "Direct link to Index Definition")

```json
{
  "name": "vector-index",
  "type": "fulltext-index",
  "params": {
    "doc_config": {
      "docid_prefix_delim": "",
      "docid_regexp": "",
      "mode": "type_field",
      "type_field": "type"
    },
    "mapping": {
      "default_analyzer": "standard",
      "default_datetime_parser": "dateTimeOptional",
      "default_field": "_all",
      "default_mapping": {
        "dynamic": true,
        "enabled": true,
        "properties": {
          "metadata": {
            "dynamic": true,
            "enabled": true
          },
          "embedding": {
            "enabled": true,
            "dynamic": false,
            "fields": [
              {
                "dims": 1536,
                "index": true,
                "name": "embedding",
                "similarity": "dot_product",
                "type": "vector",
                "vector_index_optimized_for": "recall"
              }
            ]
          },
          "text": {
            "enabled": true,
            "dynamic": false,
            "fields": [
              {
                "index": true,
                "name": "text",
                "store": true,
                "type": "text"
              }
            ]
          }
        }
      },
      "default_type": "_default",
      "docvalues_dynamic": false,
      "index_dynamic": true,
      "store_dynamic": true,
      "type_field": "_type"
    },
    "store": {
      "indexType": "scorch",
      "segmentVersion": 16
    }
  },
  "sourceType": "gocbcore",
  "sourceName": "testing",
  "sourceParams": {},
  "planParams": {
    "maxPartitionsPerPIndex": 103,
    "indexPartitions": 10,
    "numReplicas": 0
  }
}
```

For more details on how to create a search index with support for Vector fields, please refer to the documentation:

* [Couchbase Capella](https://docs.couchbase.com/cloud/search/create-search-indexes.html)
* [Couchbase Server](https://docs.couchbase.com/server/current/search/create-search-indexes.html)

To use this vector store, a `CouchbaseVectorStoreArgs` object needs to be configured.
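Before importing the index, it can be worth sanity-checking the definition against your embedding model, since a `dims` mismatch will break searches. A minimal sketch (the abbreviated object below mirrors the definition above; in practice you would parse it from `index.json`, and 1536 assumes OpenAI's default embedding size):

```typescript
// Sketch: validate the vector field settings of the Search index definition.
// The object is abbreviated to just the fields being checked.
const definition = {
  name: "vector-index",
  params: {
    mapping: {
      default_mapping: {
        properties: {
          embedding: {
            fields: [
              {
                dims: 1536,
                name: "embedding",
                similarity: "dot_product",
                type: "vector",
              },
            ],
          },
        },
      },
    },
  },
};

const vectorField =
  definition.params.mapping.default_mapping.properties.embedding.fields[0];

// OpenAI's default embeddings are 1536-dimensional, so the index must
// declare the same dimensionality.
if (vectorField.type !== "vector" || vectorField.dims !== 1536) {
  throw new Error("index definition does not match the embedding model");
}
console.log(
  `${definition.name}: ${vectorField.dims} dims, ${vectorField.similarity}`
);
```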
`textKey` and `embeddingKey` are optional fields; set them if you want to use specific keys in your documents.

```typescript
const couchbaseConfig: CouchbaseVectorStoreArgs = {
  cluster: couchbaseClient,
  bucketName: "testing",
  scopeName: "_default",
  collectionName: "_default",
  indexName: "vector-index",
  textKey: "text",
  embeddingKey: "embedding",
};
```

Create Vector Store[​](#create-vector-store "Direct link to Create Vector Store")
---------------------------------------------------------------------------------

We create the vector store object with the cluster information and the search index name.

```typescript
const store = await CouchbaseVectorStore.initialize(
  embeddings, // embeddings object to create embeddings from text
  couchbaseConfig
);
```

Basic Vector Search Example[​](#basic-vector-search-example "Direct link to Basic Vector Search Example")
---------------------------------------------------------------------------------------------------------

The following example showcases how to use Couchbase vector search to perform similarity search. For this example, we load the "state\_of\_the\_union.txt" file via the TextLoader, chunk the text into 500-character chunks with no overlap, and index all of the chunks into Couchbase.

After the data is indexed, we perform a simple query to find the top 4 chunks that are similar to the query "What did president say about Ketanji Brown Jackson". At the end, we also show how to retrieve the similarity score.

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import {
  CouchbaseVectorStoreArgs,
  CouchbaseVectorStore,
} from "@langchain/community/vectorstores/couchbase";
import { Cluster } from "couchbase";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { CharacterTextSplitter } from "@langchain/textsplitters";

const connectionString =
  process.env.COUCHBASE_DB_CONN_STR ?? "couchbase://localhost";
const databaseUsername = process.env.COUCHBASE_DB_USERNAME ?? "Administrator";
const databasePassword = process.env.COUCHBASE_DB_PASSWORD ?? "Password";

// Load documents from file
const loader = new TextLoader("./state_of_the_union.txt");
const rawDocuments = await loader.load();
const splitter = new CharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 0,
});
const docs = await splitter.splitDocuments(rawDocuments);

const couchbaseClient = await Cluster.connect(connectionString, {
  username: databaseUsername,
  password: databasePassword,
  configProfile: "wanDevelopment",
});

// An OpenAI API key is required to use OpenAIEmbeddings; some other embeddings may also be used
const embeddings = new OpenAIEmbeddings({
  apiKey: process.env.OPENAI_API_KEY,
});

const couchbaseConfig: CouchbaseVectorStoreArgs = {
  cluster: couchbaseClient,
  bucketName: "testing",
  scopeName: "_default",
  collectionName: "_default",
  indexName: "vector-index",
  textKey: "text",
  embeddingKey: "embedding",
};

const store = await CouchbaseVectorStore.fromDocuments(
  docs,
  embeddings,
  couchbaseConfig
);

const query = "What did president say about Ketanji Brown Jackson";
const resultsSimilaritySearch = await store.similaritySearch(query);
console.log("resulting documents: ", resultsSimilaritySearch[0]);

// Similarity Search With Score
const resultsSimilaritySearchWithScore = await store.similaritySearchWithScore(
  query,
  1
);
console.log("resulting documents: ", resultsSimilaritySearchWithScore[0][0]);
console.log("resulting scores: ", resultsSimilaritySearchWithScore[0][1]);

const result = await store.similaritySearch(query, 1, {
  fields: ["metadata.source"],
});
console.log(result[0]);
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [CouchbaseVectorStoreArgs](https://v02.api.js.langchain.com/interfaces/langchain_community_vectorstores_couchbase.CouchbaseVectorStoreArgs.html) from `@langchain/community/vectorstores/couchbase`
* [CouchbaseVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_couchbase.CouchbaseVectorStore.html) from `@langchain/community/vectorstores/couchbase`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [CharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.CharacterTextSplitter.html) from `@langchain/textsplitters`

Specifying Fields to Return[​](#specifying-fields-to-return "Direct link to Specifying Fields to Return")
---------------------------------------------------------------------------------------------------------

You can specify the fields to return from the document using the `fields` parameter in the filter during searches. These fields are returned as part of the `metadata` object. You can fetch any field that is stored in the index. The `textKey` of the document is returned as part of the document's `pageContent`.

If you do not specify any fields to be fetched, all the fields stored in the index are returned. If you want to fetch one of the fields in the metadata, you need to specify it using dot notation. For example, to fetch the `source` field in the metadata, you need to use `metadata.source`.

```typescript
const result = await store.similaritySearch(query, 1, {
  fields: ["metadata.source"],
});
console.log(result[0]);
```

Hybrid Search[​](#hybrid-search "Direct link to Hybrid Search")
---------------------------------------------------------------

Couchbase allows you to do hybrid searches by combining vector search results with searches on non-vector fields of the document like the `metadata` object. The results will be based on the combination of the results from both vector search and the searches supported by the Full Text Search service. The scores of each of the component searches are added up to get the total score of the result.

To perform hybrid searches, there is an optional key, `searchOptions`, in the `filter` parameter that can be passed to all the similarity searches.
The different search/query possibilities for the `searchOptions` can be found [here](https://docs.couchbase.com/server/current/search/search-request-params.html#query-object).

### Create Diverse Metadata for Hybrid Search[​](#create-diverse-metadata-for-hybrid-search "Direct link to Create Diverse Metadata for Hybrid Search")

In order to simulate hybrid search, let us create some random metadata from the existing documents. We uniformly add three fields to the metadata: `date` between 2010 & 2020, `rating` between 1 & 5, and `author` set to either John Doe or Jane Doe. We will also declare a few sample queries.

```typescript
for (let i = 0; i < docs.length; i += 1) {
  docs[i].metadata.date = `${2010 + (i % 10)}-01-01`;
  docs[i].metadata.rating = 1 + (i % 5);
  docs[i].metadata.author = ["John Doe", "Jane Doe"][i % 2];
}

const store = await CouchbaseVectorStore.fromDocuments(
  docs,
  embeddings,
  couchbaseConfig
);

const query = "What did the president say about Ketanji Brown Jackson";
const independenceQuery = "Any mention about independence?";
```

### Example: Search by Exact Value[​](#example-search-by-exact-value "Direct link to Example: Search by Exact Value")

We can search for exact matches on a textual field like the author in the `metadata` object.

```typescript
const exactValueResult = await store.similaritySearch(query, 4, {
  fields: ["metadata.author"],
  searchOptions: {
    query: { field: "metadata.author", match: "John Doe" },
  },
});
console.log(exactValueResult[0]);
```

### Example: Search by Partial Match[​](#example-search-by-partial-match "Direct link to Example: Search by Partial Match")

We can search for partial matches by specifying a fuzziness for the search. This is useful when you want to search for slight variations or misspellings of a search query. Here, "Johny" is close (fuzziness of 1) to "John".
```typescript
const partialMatchResult = await store.similaritySearch(query, 4, {
  fields: ["metadata.author"],
  searchOptions: {
    query: { field: "metadata.author", match: "Johny", fuzziness: 1 },
  },
});
console.log(partialMatchResult[0]);
```

### Example: Search by Date Range Query[​](#example-search-by-date-range-query "Direct link to Example: Search by Date Range Query")

We can search for documents that are within a date range on a date field like `metadata.date`.

```typescript
const dateRangeResult = await store.similaritySearch(independenceQuery, 4, {
  fields: ["metadata.date", "metadata.author"],
  searchOptions: {
    query: {
      start: "2016-12-31",
      end: "2017-01-02",
      inclusiveStart: true,
      inclusiveEnd: false,
      field: "metadata.date",
    },
  },
});
console.log(dateRangeResult[0]);
```

### Example: Search by Numeric Range Query[​](#example-search-by-numeric-range-query "Direct link to Example: Search by Numeric Range Query")

We can search for documents that are within a range for a numeric field like `metadata.rating`.

```typescript
const ratingRangeResult = await store.similaritySearch(independenceQuery, 4, {
  fields: ["metadata.rating"],
  searchOptions: {
    query: {
      min: 3,
      max: 5,
      inclusiveMin: false,
      inclusiveMax: true,
      field: "metadata.rating",
    },
  },
});
console.log(ratingRangeResult[0]);
```

### Example: Combining Multiple Search Conditions[​](#example-combining-multiple-search-conditions "Direct link to Example: Combining Multiple Search Conditions")

Different queries can be combined using AND (conjuncts) or OR (disjuncts) operators. In this example, we are checking for documents with a rating between 3 & 4 and dated between the end of 2016 and the start of 2017.
```typescript
const multipleConditionsResult = await store.similaritySearch(query, 4, {
  fields: ["metadata.rating", "metadata.date"],
  searchOptions: {
    query: {
      conjuncts: [
        { min: 3, max: 4, inclusive_max: true, field: "metadata.rating" },
        { start: "2016-12-31", end: "2017-01-02", field: "metadata.date" },
      ],
    },
  },
});
console.log(multipleConditionsResult[0]);
```

### Other Queries[​](#other-queries "Direct link to Other Queries")

Similarly, you can use any of the supported query methods like Geo Distance, Polygon Search, Wildcard, Regular Expressions, etc. in the `searchOptions` key of the `filter` parameter. Please refer to the documentation for more details on the available query methods and their syntax.

* [Couchbase Capella](https://docs.couchbase.com/cloud/search/search-request-params.html#query-object)
* [Couchbase Server](https://docs.couchbase.com/server/current/search/search-request-params.html#query-object)

Frequently Asked Questions
==========================

Question: Should I create the Search index before creating the CouchbaseVectorStore object?[​](#question-should-i-create-the-search-index-before-creating-the-couchbasevectorstore-object "Direct link to Question: Should I create the Search index before creating the CouchbaseVectorStore object?")
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Yes, currently you need to create the Search index before creating the `CouchbaseVectorStore` object.
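As an illustration of the query shapes mentioned under "Other Queries" above, a wildcard query might be passed through the same `searchOptions` key of the `filter` parameter. This is only a sketch: the filter object is constructed but not run against a live cluster here, and `metadata.author` mirrors the hybrid search examples.

```typescript
// Sketch: a wildcard query filter for similaritySearch. With the standard
// analyzer, terms are lowercased, so "jo*" would match terms such as "john".
const wildcardFilter = {
  fields: ["metadata.author"],
  searchOptions: {
    query: { field: "metadata.author", wildcard: "jo*" },
  },
};

// Against a live store, it would be passed like any other filter:
// const wildcardResult = await store.similaritySearch(query, 4, wildcardFilter);
console.log(JSON.stringify(wildcardFilter.searchOptions.query));
```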
Question: I am not seeing all the fields that I specified in my search results.[​](#question-i-am-not-seeing-all-the-fields-that-i-specified-in-my-search-results "Direct link to Question: I am not seeing all the fields that I specified in my search results.") ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- In Couchbase, we can only return the fields stored in the Search index. Please ensure that the field that you are trying to access in the search results is part of the Search index. One way to handle this is to index and store a document's fields dynamically in the index. * In Capella, you need to go to "Advanced Mode" then under the chevron "General Settings" you can check "\[X\] Store Dynamic Fields" or "\[X\] Index Dynamic Fields" * In Couchbase Server, in the Index Editor (not Quick Editor) under the chevron "Advanced" you can check "\[X\] Store Dynamic Fields" or "\[X\] Index Dynamic Fields" Note that these options will increase the size of the index. For more details on dynamic mappings, please refer to the [documentation](https://docs.couchbase.com/cloud/search/customize-index.html). Question: I am unable to see the metadata object in my search results.[​](#question-i-am-unable-to-see-the-metadata-object-in-my-search-results "Direct link to Question: I am unable to see the metadata object in my search results.") ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- This is most likely due to the `metadata` field in the document not being indexed and/or stored by the Couchbase Search index. 
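For reference, the index definition shown earlier on this page avoids this problem by declaring `metadata` as a dynamic child mapping under `default_mapping.properties`, which indexes and stores all of its fields:

```json
"properties": {
  "metadata": {
    "dynamic": true,
    "enabled": true
  }
}
```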
In order to index the `metadata` field in the document, you need to add it to the index as a child mapping.

If you select to map all the fields in the mapping, you will be able to search by all metadata fields. Alternatively, to optimize the index, you can select the specific fields inside the `metadata` object to be indexed. You can refer to the [docs](https://docs.couchbase.com/cloud/search/customize-index.html) to learn more about indexing child mappings.

To create child mappings, you can refer to the following docs:

* [Couchbase Capella](https://docs.couchbase.com/cloud/search/create-child-mapping.html)
* [Couchbase Server](https://docs.couchbase.com/server/current/fts/fts-creating-index-from-UI-classic-editor-dynamic.html)
https://js.langchain.com/v0.2/docs/integrations/vectorstores/hanavector
SAP HANA Cloud Vector Engine
============================

[SAP HANA Cloud Vector Engine](https://www.sap.com/events/teched/news-guide/ai.html#article8) is a vector store fully integrated into the `SAP HANA Cloud database`.
Setup[​](#setup "Direct link to Setup") --------------------------------------- You'll first need to install either the [`@sap/hana-client`](https://www.npmjs.com/package/@sap/hana-client) or the [`hdb`](https://www.npmjs.com/package/hdb) package, and the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package: tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * Yarn * pnpm npm install -S @langchain/community @sap/hana-client# ornpm install -S @langchain/community hdb yarn add @langchain/community @sap/hana-client# oryarn add @langchain/community hdb pnpm add @langchain/community @sap/hana-client# orpnpm add @langchain/community hdb You'll also need to have database connection to a HANA Cloud instance. OPENAI_API_KEY = "Your OpenAI API key"HANA_HOST = "HANA_DB_ADDRESS"HANA_PORT = "HANA_DB_PORT"HANA_UID = "HANA_DB_USER"HANA_PWD = "HANA_DB_PASSWORD" #### API Reference: Create a new index from texts[​](#create-a-new-index-from-texts "Direct link to Create a new index from texts") --------------------------------------------------------------------------------------------------------------- import { OpenAIEmbeddings } from "@langchain/openai";import hanaClient from "hdb";import { HanaDB, HanaDBArgs,} from "@langchain/community/vectorstores/hanavector";const connectionParams = { host: process.env.HANA_HOST, port: process.env.HANA_PORT, user: process.env.HANA_UID, password: process.env.HANA_PWD, // useCesu8 : false};const client = hanaClient.createClient(connectionParams);// connet to hanaDBawait new Promise<void>((resolve, reject) => { client.connect((err: Error) => { // Use arrow function here if (err) { reject(err); } else { console.log("Connected to SAP HANA successfully."); resolve(); } });});const embeddings = new OpenAIEmbeddings();const args: HanaDBArgs = { connection: client, tableName: "test_fromTexts",};// This function will 
create a table "test_fromTexts" if not exist, if exists,// then the value will be appended to the table.const vectorStore = await HanaDB.fromTexts( ["Bye bye", "Hello world", "hello nice world"], [ { id: 2, name: "2" }, { id: 1, name: "1" }, { id: 3, name: "3" }, ], embeddings, args);const response = await vectorStore.similaritySearch("hello world", 2);console.log(response);/* This result is based on no table "test_fromTexts" existing in the database. [ { pageContent: 'Hello world', metadata: { id: 1, name: '1' } }, { pageContent: 'hello nice world', metadata: { id: 3, name: '3' } } ]*/client.disconnect(); #### API Reference: * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [HanaDB](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hanavector.HanaDB.html) from `@langchain/community/vectorstores/hanavector` * [HanaDBArgs](https://v02.api.js.langchain.com/interfaces/langchain_community_vectorstores_hanavector.HanaDBArgs.html) from `@langchain/community/vectorstores/hanavector` Create a new index from a loader and perform similarity searches[​](#create-a-new-index-from-a-loader-and-perform-similarity-searches "Direct link to Create a new index from a loader and perform similarity searches") ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ import hanaClient from "hdb";import { HanaDB, HanaDBArgs,} from "@langchain/community/vectorstores/hanavector";import { OpenAIEmbeddings } from "@langchain/openai";import { TextLoader } from "langchain/document_loaders/fs/text";import { CharacterTextSplitter } from "@langchain/textsplitters";const connectionParams = { host: process.env.HANA_HOST, port: process.env.HANA_PORT, user: process.env.HANA_UID, password: process.env.HANA_PWD, // useCesu8 : false};const client 
= hanaClient.createClient(connectionParams);// connect to SAP HANAawait new Promise<void>((resolve, reject) => { client.connect((err: Error) => { if (err) { reject(err); } else { console.log("Connected to SAP HANA successfully."); resolve(); } });});const embeddings = new OpenAIEmbeddings();const args: HanaDBArgs = { connection: client, tableName: "test_fromDocs",};// Load documents from fileconst loader = new TextLoader("./state_of_the_union.txt");const rawDocuments = await loader.load();const splitter = new CharacterTextSplitter({ chunkSize: 500, chunkOverlap: 0,});const documents = await splitter.splitDocuments(rawDocuments);// Create a LangChain VectorStore interface for the HANA database and specify the table (collection) to use in args.const vectorStore = new HanaDB(embeddings, args);await vectorStore.initialize();// Delete already existing documents from the tableawait vectorStore.delete({ filter: {} });// Add the loaded document chunksawait vectorStore.addDocuments(documents);// similarity search (default: "cosine", options: ["euclidean", "cosine"])const query = "What did the president say about Ketanji Brown Jackson";const docs = await vectorStore.similaritySearch(query, 2);docs.forEach((doc) => { console.log("-".repeat(80)); console.log(doc.pageContent);});/* -------------------------------------------------------------------------------- One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. -------------------------------------------------------------------------------- As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice*/// similarity search using the Euclidean distance methodconst argsL2d: HanaDBArgs = { connection: client, tableName: "test_fromDocs", distanceStrategy: "euclidean",};const vectorStoreL2d = new HanaDB(embeddings, argsL2d);const docsL2d = await vectorStoreL2d.similaritySearch(query, 2);docsL2d.forEach((doc) => { console.log("-".repeat(80)); console.log(doc.pageContent);});// Output should be the same as the cosine similarity search method.// Maximal Marginal Relevance Search (MMR)const docsMMR = await vectorStore.maxMarginalRelevanceSearch(query, { k: 2, fetchK: 20,});docsMMR.forEach((doc) => { console.log("-".repeat(80)); console.log(doc.pageContent);});/* -------------------------------------------------------------------------------- One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. -------------------------------------------------------------------------------- Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.
Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world.*/client.disconnect(); #### API Reference: * [HanaDB](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hanavector.HanaDB.html) from `@langchain/community/vectorstores/hanavector` * [HanaDBArgs](https://v02.api.js.langchain.com/interfaces/langchain_community_vectorstores_hanavector.HanaDBArgs.html) from `@langchain/community/vectorstores/hanavector` * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text` * [CharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.CharacterTextSplitter.html) from `@langchain/textsplitters` Basic Vectorstore Operations[​](#basic-vectorstore-operations "Direct link to Basic Vectorstore Operations") ------------------------------------------------------------------------------------------------------------ import { OpenAIEmbeddings } from "@langchain/openai";import hanaClient from "hdb";// or import another Node.js driver// import hanaClient from "@sap/hana-client";import { Document } from "@langchain/core/documents";import { HanaDB, HanaDBArgs,} from "@langchain/community/vectorstores/hanavector";const connectionParams = { host: process.env.HANA_HOST, port: process.env.HANA_PORT, user: process.env.HANA_UID, password: process.env.HANA_PWD, // useCesu8 : false};const client = hanaClient.createClient(connectionParams);// connect to SAP HANAawait new Promise<void>((resolve, reject) => { client.connect((err: Error) => { if (err) { reject(err); } else { console.log("Connected to SAP HANA successfully."); resolve(); } });});const embeddings = new OpenAIEmbeddings();// define instance argsconst args: HanaDBArgs = { connection: client, tableName:
"testBasics",};// Add documents with metadata.const docs: Document[] = [ { pageContent: "foo", metadata: { start: 100, end: 150, docName: "foo.txt", quality: "bad" }, }, { pageContent: "bar", metadata: { start: 200, end: 250, docName: "bar.txt", quality: "good" }, },];// Create a LangChain VectorStore interface for the HANA database and specify the table (collection) to use in args.const vectorStore = new HanaDB(embeddings, args);// initialize() must be called once after the instance is created.await vectorStore.initialize();// Delete already existing documents from the tableawait vectorStore.delete({ filter: {} });await vectorStore.addDocuments(docs);// Query documents with specific metadata.const filterMeta = { quality: "bad" };const query = "foobar";// With filtering on {"quality": "bad"}, only one document should be returnedconst results = await vectorStore.similaritySearch(query, 1, filterMeta);console.log(results);/* [ { pageContent: "foo", metadata: { start: 100, end: 150, docName: "foo.txt", quality: "bad" } } ]*/// Delete documents with specific metadata.await vectorStore.delete({ filter: filterMeta });// Now the similarity search with the same filter will return no resultsconst resultsAfterFilter = await vectorStore.similaritySearch( query, 1, filterMeta);console.log(resultsAfterFilter);/* []*/client.disconnect(); #### API Reference: * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` * [HanaDB](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hanavector.HanaDB.html) from `@langchain/community/vectorstores/hanavector` * [HanaDBArgs](https://v02.api.js.langchain.com/interfaces/langchain_community_vectorstores_hanavector.HanaDBArgs.html) from `@langchain/community/vectorstores/hanavector` Using a VectorStore as a retriever in chains for retrieval
augmented generation (RAG)[​](#using-a-vectorstore-as-a-retriever-in-chains-for-retrieval-augmented-generation-rag "Direct link to Using a VectorStore as a retriever in chains for retrieval augmented generation (RAG)") ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- import { ChatPromptTemplate } from "@langchain/core/prompts";import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";import { createStuffDocumentsChain } from "langchain/chains/combine_documents";import { createRetrievalChain } from "langchain/chains/retrieval";import hanaClient from "hdb";import { HanaDB, HanaDBArgs,} from "@langchain/community/vectorstores/hanavector";// Connection parametersconst connectionParams = { host: process.env.HANA_HOST, port: process.env.HANA_PORT, user: process.env.HANA_UID, password: process.env.HANA_PWD, // useCesu8 : false};const client = hanaClient.createClient(connectionParams);// connect to SAP HANAawait new Promise<void>((resolve, reject) => { client.connect((err: Error) => { if (err) { reject(err); } else { console.log("Connected to SAP HANA successfully."); resolve(); } });});const embeddings = new OpenAIEmbeddings();const args: HanaDBArgs = { connection: client, tableName: "test_fromDocs",};const vectorStore = new HanaDB(embeddings, args);await vectorStore.initialize();// Use the store as part of a chain, under the premise that "test_fromDocs" exists and contains the chunked docs.const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo-1106" });const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([ [ "system", "You are an expert in state of the union topics. You are provided multiple context items that are related to the prompt you have to answer.
Use the following pieces of context to answer the question at the end.\n\n{context}", ], ["human", "{input}"],]);const combineDocsChain = await createStuffDocumentsChain({ llm: model, prompt: questionAnsweringPrompt,});const chain = await createRetrievalChain({ retriever: vectorStore.asRetriever(), combineDocsChain,});// Ask the first question (and verify how many text chunks have been used).const response = await chain.invoke({ input: "What about Mexico and Guatemala?",});console.log("Chain response:");console.log(response.answer);console.log( `Number of used source document chunks: ${response.context.length}`);/* The United States has set up joint patrols with Mexico and Guatemala to catch more human traffickers. Number of used source document chunks: 4*/const responseOther = await chain.invoke({ input: "What about other countries?",});console.log("Chain response:");console.log(responseOther.answer);/* Ask another question on the same conversational chain. The answer should relate to the previous answer given.....including members of NATO, the European Union, and other allies such as Canada....*/client.disconnect(); #### API Reference: * [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [createStuffDocumentsChain](https://v02.api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents` * [createRetrievalChain](https://v02.api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval` * [HanaDB](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hanavector.HanaDB.html) from 
`@langchain/community/vectorstores/hanavector` * [HanaDBArgs](https://v02.api.js.langchain.com/interfaces/langchain_community_vectorstores_hanavector.HanaDBArgs.html) from `@langchain/community/vectorstores/hanavector` Copyright © 2024 LangChain, Inc.
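All of the SAP HANA examples above read connection parameters from environment variables (`HANA_HOST`, `HANA_PORT`, `HANA_UID`, `HANA_PWD`). As a minimal sketch, a hypothetical helper (not part of LangChain or the `hdb` driver) can fail fast with a clear message when one of them is missing, instead of failing later inside `client.connect`:

```typescript
// Hypothetical helper: validate that the connection-related environment
// variables used throughout the examples are set before connecting.
// Returns the resolved values, or throws listing every missing key.
function requireEnv(
  env: Record<string, string | undefined>,
  keys: string[]
): Record<string, string> {
  const missing = keys.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
  return Object.fromEntries(keys.map((key) => [key, env[key] as string]));
}
```

Usage might look like `const params = requireEnv(process.env, ["HANA_HOST", "HANA_PORT", "HANA_UID", "HANA_PWD"]);` before building `connectionParams`.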
https://js.langchain.com/v0.2/docs/integrations/vectorstores/googlevertexai
Google Vertex AI Matching Engine ================================ Compatibility Only available on Node.js. The Google Vertex AI Matching Engine "provides the industry's leading high-scale low latency vector database. These vector databases are commonly referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service."
Setup[​](#setup "Direct link to Setup") --------------------------------------- caution This module expects an endpoint and deployed index already created as the creation time takes close to one hour. To learn more, see the LangChain python documentation [Create Index and deploy it to an Endpoint](https://python.langchain.com/docs/integrations/vectorstores/matchingengine#create-index-and-deploy-it-to-an-endpoint). Before running this code, you should make sure the Vertex AI API is enabled for the relevant project in your Google Cloud dashboard and that you've authenticated to Google Cloud using one of these methods: * You are logged into an account (using `gcloud auth application-default login`) permitted to that project. * You are running on a machine using a service account that is permitted to the project. * You have downloaded the credentials for a service account that is permitted to the project and set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of this file. Install the authentication library with: * npm * Yarn * pnpm npm install google-auth-library yarn add google-auth-library pnpm add google-auth-library The Matching Engine does not store the actual document contents, only embeddings. Therefore, you'll need a docstore. The below example uses Google Cloud Storage, which requires the following: * npm * Yarn * pnpm npm install @google-cloud/storage yarn add @google-cloud/storage pnpm add @google-cloud/storage Usage[​](#usage "Direct link to Usage") --------------------------------------- ### Initializing the engine[​](#initializing-the-engine "Direct link to Initializing the engine") When creating the `MatchingEngine` object, you'll need some information about the matching engine configuration. You can get this information from the Cloud Console for Matching Engine: * The id for the Index * The id for the Index Endpoint You will also need a document store. 
While an `InMemoryDocstore` is ok for initial testing, you will want to use something like a [GoogleCloudStorageDocstore](https://v02.api.js.langchain.com/classes/langchain_stores_doc_gcs.GoogleCloudStorageDocstore.html) to store it more permanently. import { MatchingEngine } from "langchain/vectorstores/googlevertexai";import { Document } from "langchain/document";import { SyntheticEmbeddings } from "langchain/embeddings/fake";import { GoogleCloudStorageDocstore } from "langchain/stores/doc/gcs";const embeddings = new SyntheticEmbeddings({ vectorSize: Number.parseInt( process.env.SYNTHETIC_EMBEDDINGS_VECTOR_SIZE ?? "768", 10 ),});const store = new GoogleCloudStorageDocstore({ bucket: process.env.GOOGLE_CLOUD_STORAGE_BUCKET!,});const config = { index: process.env.GOOGLE_VERTEXAI_MATCHINGENGINE_INDEX!, indexEndpoint: process.env.GOOGLE_VERTEXAI_MATCHINGENGINE_INDEXENDPOINT!, apiVersion: "v1beta1", docstore: store,};const engine = new MatchingEngine(embeddings, config); ### Adding documents[​](#adding-documents "Direct link to Adding documents") const doc = new Document({ pageContent: "this" });await engine.addDocuments([doc]); Any metadata in a document is converted into Matching Engine "allow list" values that can be used to filter during a query. const documents = [ new Document({ pageContent: "this apple", metadata: { color: "red", category: "edible", }, }), new Document({ pageContent: "this blueberry", metadata: { color: "blue", category: "edible", }, }), new Document({ pageContent: "this firetruck", metadata: { color: "red", category: "machine", }, }),];// Add all our documentsawait engine.addDocuments(documents); The documents are assumed to have an "id" parameter available as well. If this is not set, then an ID will be assigned and returned as part of the Document. 
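The id behavior described above can be sketched as follows. This is a hypothetical illustration, not the library's actual implementation: documents that already carry an `id` keep it, and documents without one get a generated id, which is what "an ID will be assigned and returned" amounts to.

```typescript
import { randomUUID } from "node:crypto";

// Minimal stand-in for a LangChain Document (hypothetical shape).
interface SimpleDocument {
  pageContent: string;
  metadata: Record<string, unknown>;
}

// Preserve existing ids; assign a generated id where none is present.
// Returns the final id of every document, in order.
function ensureIds(docs: SimpleDocument[]): string[] {
  return docs.map((doc) => {
    if (typeof doc.metadata.id !== "string") {
      doc.metadata.id = randomUUID();
    }
    return doc.metadata.id as string;
  });
}
```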
### Querying documents[​](#querying-documents "Direct link to Querying documents") A straightforward k-nearest-neighbor search that returns all results is done using any of the standard methods: const results = await engine.similaritySearch("this"); ### Querying documents with a filter / restriction[​](#querying-documents-with-a-filter--restriction "Direct link to Querying documents with a filter / restriction") We can limit which documents are returned based on the metadata that was set for the document. So if we just wanted to limit the results to those with a red color, we can do: import { Restriction } from "langchain/vectorstores/googlevertexai";const redFilter: Restriction[] = [ { namespace: "color", allowList: ["red"], },];const redResults = await engine.similaritySearch("this", 4, redFilter); If we wanted to do something more complicated, like things that are red, but not edible: const filter: Restriction[] = [ { namespace: "color", allowList: ["red"], }, { namespace: "category", denyList: ["edible"], },];const results = await engine.similaritySearch("this", 4, filter); ### Deleting documents[​](#deleting-documents "Direct link to Deleting documents") Deleting documents is done by ID. import { IdDocument } from "langchain/vectorstores/googlevertexai";const oldResults: IdDocument[] = await engine.similaritySearch("this", 10);const oldIds = oldResults.map( doc => doc.id! );await engine.delete({ids: oldIds});
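Building `Restriction` arrays by hand, as in the filtering examples above, gets repetitive. As a sketch, a hypothetical convenience helper (not part of LangChain) could turn simple include/exclude maps into the same `{ namespace, allowList, denyList }` shape:

```typescript
// Same shape as the Restriction objects used in the filtering examples above.
interface Restriction {
  namespace: string;
  allowList?: string[];
  denyList?: string[];
}

// Hypothetical helper: build restrictions from an "allow" map and a "deny" map.
function buildRestrictions(
  allow: Record<string, string[]> = {},
  deny: Record<string, string[]> = {}
): Restriction[] {
  return [
    ...Object.entries(allow).map(([namespace, allowList]) => ({ namespace, allowList })),
    ...Object.entries(deny).map(([namespace, denyList]) => ({ namespace, denyList })),
  ];
}
```

For example, `buildRestrictions({ color: ["red"] }, { category: ["edible"] })` produces the "red, but not edible" filter shown earlier.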
https://js.langchain.com/v0.2/docs/integrations/vectorstores/mongodb_atlas
MongoDB Atlas ============= Compatibility Only available on Node.js.
You can still create API routes that use MongoDB with Next.js by setting the `runtime` variable to `nodejs` like so: export const runtime = "nodejs"; You can read more about Edge runtimes in the Next.js documentation [here](https://nextjs.org/docs/app/building-your-application/rendering/edge-and-nodejs-runtimes). LangChain.js supports MongoDB Atlas as a vector store, and supports both standard similarity search and maximal marginal relevance search, which selects the documents most similar to the input and then reranks them to optimize for diversity. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Installation[​](#installation "Direct link to Installation") First, add the Node MongoDB SDK to your project: * npm * Yarn * pnpm npm install -S mongodb yarn add mongodb pnpm add mongodb ### Initial Cluster Configuration[​](#initial-cluster-configuration "Direct link to Initial Cluster Configuration") Next, you'll need to create a MongoDB Atlas cluster. Navigate to the [MongoDB Atlas website](https://www.mongodb.com/atlas/database) and create an account if you don't already have one. Create and name a cluster when prompted, then find it under `Database`. Select `Collections` and create either a blank collection or one from the provided sample data. **Note** The cluster created must be MongoDB 7.0 or higher. If you are using a pre-7.0 version of MongoDB, you must use a version of langchainjs<=0.0.163. ### Creating an Index[​](#creating-an-index "Direct link to Creating an Index") After configuring your cluster, you'll need to create an index on the collection field you want to search over. Switch to the `Atlas Search` tab and click `Create Search Index`.
From there, make sure you select `Atlas Vector Search - JSON Editor`, then select the appropriate database and collection and paste the following into the textbox:

```json
{
  "fields": [
    {
      "numDimensions": 1024,
      "path": "embedding",
      "similarity": "euclidean",
      "type": "vector"
    }
  ]
}
```

Note that the `numDimensions` property should match the dimensionality of the embeddings you are using. For example, Cohere embeddings have 1024 dimensions, and by default OpenAI embeddings have 1536.

**Note:** By default the vector store expects an index name of `default`, an indexed collection field name of `embedding`, and a raw text field name of `text`. You should initialize the vector store with field names matching your index and collection schema, as shown below.

Finally, proceed to build the index.

Usage[​](#usage "Direct link to Usage")
---------------------------------------

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```

### Ingestion[​](#ingestion "Direct link to Ingestion")

```typescript
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { CohereEmbeddings } from "@langchain/cohere";
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");
const namespace = "langchain.test";
const [dbName, collectionName] = namespace.split(".");
const collection = client.db(dbName).collection(collectionName);

const vectorstore = await MongoDBAtlasVectorSearch.fromTexts(
  ["Hello world", "Bye bye", "What's this?"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new CohereEmbeddings(),
  {
    collection,
    indexName: "default", // The name of the Atlas search index. Defaults to "default"
    textKey: "text", // The name of the collection field containing the raw content. Defaults to "text"
    embeddingKey: "embedding", // The name of the collection field containing the embedded text. Defaults to "embedding"
  }
);

const assignedIds = await vectorstore.addDocuments([
  { pageContent: "upsertable", metadata: {} },
]);

const upsertedDocs = [{ pageContent: "overwritten", metadata: {} }];

await vectorstore.addDocuments(upsertedDocs, { ids: assignedIds });

await client.close();
```

#### API Reference:

* [MongoDBAtlasVectorSearch](https://v02.api.js.langchain.com/classes/langchain_mongodb.MongoDBAtlasVectorSearch.html) from `@langchain/mongodb`
* [CohereEmbeddings](https://v02.api.js.langchain.com/classes/langchain_cohere.CohereEmbeddings.html) from `@langchain/cohere`

### Search[​](#search "Direct link to Search")

```typescript
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { CohereEmbeddings } from "@langchain/cohere";
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");
const namespace = "langchain.test";
const [dbName, collectionName] = namespace.split(".");
const collection = client.db(dbName).collection(collectionName);

const vectorStore = new MongoDBAtlasVectorSearch(new CohereEmbeddings(), {
  collection,
  indexName: "default", // The name of the Atlas search index. Defaults to "default"
  textKey: "text", // The name of the collection field containing the raw content. Defaults to "text"
  embeddingKey: "embedding", // The name of the collection field containing the embedded text. Defaults to "embedding"
});

const resultOne = await vectorStore.similaritySearch("Hello world", 1);
console.log(resultOne);

await client.close();
```

#### API Reference:

* [MongoDBAtlasVectorSearch](https://v02.api.js.langchain.com/classes/langchain_mongodb.MongoDBAtlasVectorSearch.html) from `@langchain/mongodb`
* [CohereEmbeddings](https://v02.api.js.langchain.com/classes/langchain_cohere.CohereEmbeddings.html) from `@langchain/cohere`

### Maximal marginal relevance[​](#maximal-marginal-relevance "Direct link to Maximal marginal relevance")

```typescript
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { CohereEmbeddings } from "@langchain/cohere";
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");
const namespace = "langchain.test";
const [dbName, collectionName] = namespace.split(".");
const collection = client.db(dbName).collection(collectionName);

const vectorStore = new MongoDBAtlasVectorSearch(new CohereEmbeddings(), {
  collection,
  indexName: "default", // The name of the Atlas search index. Defaults to "default"
  textKey: "text", // The name of the collection field containing the raw content. Defaults to "text"
  embeddingKey: "embedding", // The name of the collection field containing the embedded text. Defaults to "embedding"
});

const resultOne = await vectorStore.maxMarginalRelevanceSearch("Hello world", {
  k: 4,
  fetchK: 20, // The number of documents to return on initial fetch
});
console.log(resultOne);

// Using MMR in a vector store retriever
const retriever = await vectorStore.asRetriever({
  searchType: "mmr",
  searchKwargs: {
    fetchK: 20,
    lambda: 0.1,
  },
});

const retrieverOutput = await retriever.invoke("Hello world");
console.log(retrieverOutput);

await client.close();
```

#### API Reference:

* [MongoDBAtlasVectorSearch](https://v02.api.js.langchain.com/classes/langchain_mongodb.MongoDBAtlasVectorSearch.html) from `@langchain/mongodb`
* [CohereEmbeddings](https://v02.api.js.langchain.com/classes/langchain_cohere.CohereEmbeddings.html) from `@langchain/cohere`

### Metadata filtering[​](#metadata-filtering "Direct link to Metadata filtering")

MongoDB Atlas supports pre-filtering of results on other fields. It requires you to define which metadata fields you plan to filter on by updating the index. Here's an example:

```json
{
  "fields": [
    {
      "numDimensions": 1024,
      "path": "embedding",
      "similarity": "euclidean",
      "type": "vector"
    },
    {
      "path": "docstore_document_id",
      "type": "filter"
    }
  ]
}
```

Above, the first item in `fields` is the vector index, and the second item is the metadata property you want to filter on. The `path` property names the field, so the above index would allow us to filter on a metadata field named `docstore_document_id`.

Then, in your code you can use [MQL Query Operators](https://www.mongodb.com/docs/manual/reference/operator/query/) for filtering. Here's an example:

```typescript
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { CohereEmbeddings } from "@langchain/cohere";
import { MongoClient } from "mongodb";
import { sleep } from "langchain/util/time";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");
const namespace = "langchain.test";
const [dbName, collectionName] = namespace.split(".");
const collection = client.db(dbName).collection(collectionName);

const vectorStore = new MongoDBAtlasVectorSearch(new CohereEmbeddings(), {
  collection,
  indexName: "default", // The name of the Atlas search index. Defaults to "default"
  textKey: "text", // The name of the collection field containing the raw content. Defaults to "text"
  embeddingKey: "embedding", // The name of the collection field containing the embedded text. Defaults to "embedding"
});

await vectorStore.addDocuments([
  {
    pageContent: "Hey hey hey",
    metadata: { docstore_document_id: "somevalue" },
  },
]);

const retriever = vectorStore.asRetriever({
  filter: {
    preFilter: {
      docstore_document_id: {
        $eq: "somevalue",
      },
    },
  },
});

// Mongo has a slight processing delay between ingest and availability
await sleep(2000);

const results = await retriever.invoke("goodbye");
console.log(results);

await client.close();
```

#### API Reference:

* [MongoDBAtlasVectorSearch](https://v02.api.js.langchain.com/classes/langchain_mongodb.MongoDBAtlasVectorSearch.html) from `@langchain/mongodb`
* [CohereEmbeddings](https://v02.api.js.langchain.com/classes/langchain_cohere.CohereEmbeddings.html) from `@langchain/cohere`
* [sleep](https://v02.api.js.langchain.com/functions/langchain_util_time.sleep.html) from `langchain/util/time`
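To build intuition for what the `preFilter` above does, here is a small, illustrative evaluator for the `$eq` and `$in` operators over plain objects. Atlas applies the real filter server-side inside the search index before vectors are scored; this sketch only models the matching semantics.

```typescript
// Illustrative evaluator for a tiny subset of MQL query operators
// ($eq and $in), mirroring how a preFilter narrows candidate documents.
type MiniFilter = Record<string, { $eq?: unknown; $in?: unknown[] }>;

function matches(doc: Record<string, unknown>, filter: MiniFilter): boolean {
  return Object.entries(filter).every(([field, cond]) => {
    const value = doc[field];
    // $eq: the field must equal the given value exactly.
    if ("$eq" in cond && value !== cond.$eq) return false;
    // $in: the field must be one of the listed values.
    if (cond.$in !== undefined && !cond.$in.includes(value)) return false;
    return true;
  });
}

const docs = [
  { text: "Hey hey hey", docstore_document_id: "somevalue" },
  { text: "Bye bye", docstore_document_id: "othervalue" },
];

// Matches only the document whose id equals "somevalue".
const eqFiltered = docs.filter((d) =>
  matches(d, { docstore_document_id: { $eq: "somevalue" } })
);

// Matches both documents, since both ids appear in the list.
const inFiltered = docs.filter((d) =>
  matches(d, { docstore_document_id: { $in: ["somevalue", "othervalue"] } })
);
```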
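Returning to the maximal marginal relevance search described above: the reranking step can be sketched in plain TypeScript. This is an illustrative implementation of the classic MMR objective, not the code the vector store runs internally; `lambda` trades off relevance against diversity in the same spirit as the retriever's `searchKwargs.lambda` option.

```typescript
// Illustrative maximal marginal relevance (MMR) reranking.
// Greedily picks the candidate maximizing
//   lambda * sim(query, doc) - (1 - lambda) * max sim(doc, alreadyPicked)
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function cosine(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

function mmr(
  query: number[],
  candidates: number[][],
  k: number,
  lambda = 0.5
): number[] {
  const selected: number[] = []; // indices into candidates, in pick order
  const remaining = candidates.map((_, i) => i);
  while (selected.length < k && remaining.length > 0) {
    let bestIdx = remaining[0];
    let bestScore = -Infinity;
    for (const i of remaining) {
      const relevance = cosine(query, candidates[i]);
      const redundancy = selected.length
        ? Math.max(...selected.map((j) => cosine(candidates[i], candidates[j])))
        : 0;
      const score = lambda * relevance - (1 - lambda) * redundancy;
      if (score > bestScore) {
        bestScore = score;
        bestIdx = i;
      }
    }
    selected.push(bestIdx);
    remaining.splice(remaining.indexOf(bestIdx), 1);
  }
  return selected;
}

// Two near-duplicate relevant vectors plus one orthogonal vector:
// with a low lambda, MMR picks the top match first, then the diverse one.
const picks = mmr([1, 0], [[1, 0], [0.99, 0.01], [0, 1]], 2, 0.3);
```

A plain top-k similarity search over the same candidates would return the two near-duplicates; lowering `lambda` penalizes the second copy and promotes the orthogonal vector instead.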
https://js.langchain.com/v0.2/docs/integrations/vectorstores/neo4jvector
Neo4j Vector Index
==================

Neo4j is an open-source graph database with integrated support for vector similarity search.
It supports:

* approximate nearest neighbor search
* Euclidean similarity and cosine similarity
* hybrid search combining vector and keyword searches

Setup[​](#setup "Direct link to Setup")
---------------------------------------

To work with Neo4j Vector Index, you need to install the `neo4j-driver` package:

```bash
npm install neo4j-driver
# or
yarn add neo4j-driver
# or
pnpm add neo4j-driver
```

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```

### Setup a `Neo4j` self hosted instance with `docker-compose`[​](#setup-a-neo4j-self-hosted-instance-with-docker-compose "Direct link to setup-a-neo4j-self-hosted-instance-with-docker-compose")

`Neo4j` provides a prebuilt Docker image that can be used to quickly set up a self-hosted Neo4j database instance. Create a file named `docker-compose.yml` with the following contents:

```yaml
services:
  database:
    image: neo4j
    ports:
      - 7687:7687
      - 7474:7474
    environment:
      - NEO4J_AUTH=neo4j/pleaseletmein
```

Then, in the same directory, run `docker compose up` to start the container.

You can find more information on how to set up `Neo4j` on their [website](https://neo4j.com/docs/operations-manual/current/installation/).
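The two similarity metrics listed above can be written down concretely. A minimal sketch (not Neo4j's internal implementation) of how cosine similarity and Euclidean distance compare a pair of vectors:

```typescript
// Cosine similarity and Euclidean distance: the two metrics a
// Neo4j vector index can be configured with.
function cosineSimilarity(a: number[], b: number[]): number {
  let dotProduct = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dotProduct += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dotProduct / (Math.sqrt(normA) * Math.sqrt(normB));
}

function euclideanDistance(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, x, i) => sum + (x - b[i]) ** 2, 0));
}

// Parallel vectors: identical direction, different magnitude.
// Cosine similarity is 1 (same direction), while Euclidean distance
// is nonzero, so the two metrics can rank the same pair differently.
const cos = cosineSimilarity([1, 2, 3], [2, 4, 6]);
const dist = euclideanDistance([1, 2, 3], [2, 4, 6]);
```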
Usage[​](#usage "Direct link to Usage")
---------------------------------------

One complete example of using `Neo4jVectorStore` is the following:

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { Neo4jVectorStore } from "@langchain/community/vectorstores/neo4j_vector";

// Configuration object for Neo4j connection and other related settings
const config = {
  url: "bolt://localhost:7687", // URL for the Neo4j instance
  username: "neo4j", // Username for Neo4j authentication
  password: "pleaseletmein", // Password for Neo4j authentication
  indexName: "vector", // Name of the vector index
  keywordIndexName: "keyword", // Name of the keyword index if using hybrid search
  searchType: "vector" as const, // Type of search (e.g., vector, hybrid)
  nodeLabel: "Chunk", // Label for the nodes in the graph
  textNodeProperty: "text", // Property of the node containing text
  embeddingNodeProperty: "embedding", // Property of the node containing embedding
};

const documents = [
  { pageContent: "what's this", metadata: { a: 2 } },
  { pageContent: "Cat drinks milk", metadata: { a: 1 } },
];

const neo4jVectorIndex = await Neo4jVectorStore.fromDocuments(
  documents,
  new OpenAIEmbeddings(),
  config
);

const results = await neo4jVectorIndex.similaritySearch("water", 1);

console.log(results);
/*
  [ Document { pageContent: 'Cat drinks milk', metadata: { a: 1 } } ]
*/

await neo4jVectorIndex.close();
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Neo4jVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_neo4j_vector.Neo4jVectorStore.html) from `@langchain/community/vectorstores/neo4j_vector`

### Use retrievalQuery parameter to customize responses[​](#use-retrievalquery-parameter-to-customize-responses "Direct link to Use retrievalQuery parameter to customize responses")

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { Neo4jVectorStore } from "@langchain/community/vectorstores/neo4j_vector";

/*
 * The retrievalQuery is a customizable Cypher query fragment used in the Neo4jVectorStore class to define how
 * search results should be retrieved and presented from the Neo4j database. It allows developers to specify
 * the format and structure of the data returned after a similarity search.
 *
 * Mandatory columns for `retrievalQuery`:
 *
 * 1. text:
 *    - Description: Represents the textual content of the node.
 *    - Type: String
 *
 * 2. score:
 *    - Description: Represents the similarity score of the node in relation to the search query. A
 *      higher score indicates a closer match.
 *    - Type: Float (ranging between 0 and 1, where 1 is a perfect match)
 *
 * 3. metadata:
 *    - Description: Contains additional properties and information about the node. This can include
 *      any other attributes of the node that might be relevant to the application.
 *    - Type: Object (key-value pairs)
 *    - Example: { "id": "12345", "category": "Books", "author": "John Doe" }
 *
 * Note: While you can customize the `retrievalQuery` to fetch additional columns or perform
 * transformations, never omit the mandatory columns. The names of these columns (`text`, `score`,
 * and `metadata`) should remain consistent. Renaming them might lead to errors or unexpected behavior.
 */

// Configuration object for Neo4j connection and other related settings
const config = {
  url: "bolt://localhost:7687", // URL for the Neo4j instance
  username: "neo4j", // Username for Neo4j authentication
  password: "pleaseletmein", // Password for Neo4j authentication
  retrievalQuery: `
    RETURN node.text AS text, score, {a: node.a * 2} AS metadata
  `,
};

const documents = [
  { pageContent: "what's this", metadata: { a: 2 } },
  { pageContent: "Cat drinks milk", metadata: { a: 1 } },
];

const neo4jVectorIndex = await Neo4jVectorStore.fromDocuments(
  documents,
  new OpenAIEmbeddings(),
  config
);

const results = await neo4jVectorIndex.similaritySearch("water", 1);

console.log(results);
/*
  [ Document { pageContent: 'Cat drinks milk', metadata: { a: 2 } } ]
*/

await neo4jVectorIndex.close();
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Neo4jVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_neo4j_vector.Neo4jVectorStore.html) from `@langchain/community/vectorstores/neo4j_vector`

### Instantiate Neo4jVectorStore from existing graph[​](#instantiate-neo4jvectorstore-from-existing-graph "Direct link to Instantiate Neo4jVectorStore from existing graph")

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { Neo4jVectorStore } from "@langchain/community/vectorstores/neo4j_vector";

/**
 * `fromExistingGraph` Method:
 *
 * Description:
 * This method initializes a `Neo4jVectorStore` instance using an existing graph in the Neo4j database.
 * It's designed to work with nodes that already have textual properties but might not have embeddings.
 * The method will compute and store embeddings for nodes that lack them.
 *
 * Note:
 * This method is particularly useful when you have a pre-existing graph with textual data and you want
 * to enhance it with vector embeddings for similarity searches without altering the original data structure.
 */

// Configuration object for Neo4j connection and other related settings
const config = {
  url: "bolt://localhost:7687", // URL for the Neo4j instance
  username: "neo4j", // Username for Neo4j authentication
  password: "pleaseletmein", // Password for Neo4j authentication
  indexName: "wikipedia",
  nodeLabel: "Wikipedia",
  textNodeProperties: ["title", "description"],
  embeddingNodeProperty: "embedding",
  searchType: "hybrid" as const,
};

// You should have a populated Neo4j database to use this method
const neo4jVectorIndex = await Neo4jVectorStore.fromExistingGraph(
  new OpenAIEmbeddings(),
  config
);

await neo4jVectorIndex.close();
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Neo4jVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_neo4j_vector.Neo4jVectorStore.html) from `@langchain/community/vectorstores/neo4j_vector`

Disclaimer ⚠️
=============

_Security note_: Make sure that the database connection uses credentials that are narrowly scoped to include only the necessary permissions. Failure to do so may result in data corruption or loss, since the calling code may attempt commands that delete or mutate data if prompted to do so, or read sensitive data if such data is present in the database.

The best way to guard against such negative outcomes is to limit, as appropriate, the permissions granted to the credentials used with this tool. For example, creating read-only users for the database is a good way to ensure that the calling code cannot mutate or delete data.

See the [security page](/v0.2/docs/security) for more information.
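Returning to the `retrievalQuery` contract described above: since renaming the mandatory `text`, `score`, and `metadata` columns leads to errors, a small runtime guard can catch a malformed result row early. This helper is a hypothetical convenience for your own code, not part of the LangChain API.

```typescript
// Hypothetical guard for rows produced by a custom retrievalQuery.
// A valid row must expose `text` (string), `score` (number between
// 0 and 1), and `metadata` (an object).
interface RetrievalRow {
  text: string;
  score: number;
  metadata: Record<string, unknown>;
}

function isRetrievalRow(row: unknown): row is RetrievalRow {
  if (typeof row !== "object" || row === null) return false;
  const r = row as Record<string, unknown>;
  return (
    typeof r.text === "string" &&
    typeof r.score === "number" &&
    r.score >= 0 &&
    r.score <= 1 &&
    typeof r.metadata === "object" &&
    r.metadata !== null
  );
}

// A well-formed row passes; a row with a renamed column does not.
const good = isRetrievalRow({
  text: "Cat drinks milk",
  score: 0.9,
  metadata: { a: 2 },
});
const bad = isRetrievalRow({
  content: "renamed column", // should have been `text`
  score: 0.9,
  metadata: {},
});
```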
https://js.langchain.com/v0.2/docs/integrations/vectorstores/neon
Neon Postgres
=============

Neon is a fully managed serverless PostgreSQL database. It separates storage and compute to offer features such as instant branching and automatic scaling.

With the `pgvector` extension, Neon provides a vector store that can be used with LangChain.js to store and query embeddings.
Setup[​](#setup "Direct link to Setup")
---------------------------------------

### Select a Neon project[​](#select-a-neon-project "Direct link to Select a Neon project")

If you do not have a Neon account, sign up for one at [Neon](https://neon.tech). After logging into the Neon Console, proceed to the [Projects](https://console.neon.tech/app/projects) section and select an existing project or create a new one.

Your Neon project comes with a ready-to-use Postgres database named `neondb` that you can use to store embeddings. Navigate to the Connection Details section to find your database connection string. It should look similar to this:

```
postgres://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require
```

Keep your connection string handy for later use.

### Application code[​](#application-code "Direct link to Application code")

To work with Neon Postgres, you need to install the `@neondatabase/serverless` package, which provides a JavaScript/TypeScript driver to connect to the database:

```bash
npm install @neondatabase/serverless
# or
yarn add @neondatabase/serverless
# or
pnpm add @neondatabase/serverless
```

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```

To initialize a `NeonPostgres` vectorstore, you need to provide your Neon database connection string. You can use the connection string fetched above directly, or store it as an environment variable and use it in your code.
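The connection string shown earlier follows the standard Postgres URL format, so you can sanity-check it with the built-in WHATWG `URL` class before handing it to the driver. A small illustrative check, using the placeholder credentials and hostname from this page:

```typescript
// Parse a Neon connection string with the built-in URL class.
// The credentials and hostname below are the placeholder values
// from the example above, not a real database.
const connectionString =
  "postgres://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require";

const url = new URL(connectionString);

const parsed = {
  user: url.username,
  host: url.hostname,
  database: url.pathname.slice(1), // strip the leading "/"
  sslmode: url.searchParams.get("sslmode"),
};
```

This kind of check can catch a truncated or mis-pasted connection string before the first query fails at runtime.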
```typescript
const vectorStore = await NeonPostgres.initialize(embeddings, {
  connectionString: NEON_POSTGRES_CONNECTION_STRING,
});
```

Usage
-----

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { NeonPostgres } from "@langchain/community/vectorstores/neon";

// Initialize an embeddings instance
const embeddings = new OpenAIEmbeddings({
  apiKey: process.env.OPENAI_API_KEY,
  dimensions: 256,
  model: "text-embedding-3-small",
});

// Initialize a NeonPostgres instance to store embedding vectors
const vectorStore = await NeonPostgres.initialize(embeddings, {
  connectionString: process.env.DATABASE_URL as string,
});

// You can add documents to the store. Strings in the `pageContent` field
// will be embedded and stored in the database.
const documents = [
  { pageContent: "Hello world", metadata: { topic: "greeting" } },
  { pageContent: "Bye bye", metadata: { topic: "greeting" } },
  {
    pageContent: "Mitochondria is the powerhouse of the cell",
    metadata: { topic: "science" },
  },
];
const idsInserted = await vectorStore.addDocuments(documents);

// You can now query the store for documents similar to the input query
const resultOne = await vectorStore.similaritySearch("hola", 1);
console.log(resultOne);
/*
[
  Document { pageContent: 'Hello world', metadata: { topic: 'greeting' } }
]
*/

// You can also filter by metadata
const resultTwo = await vectorStore.similaritySearch("Irrelevant query", 2, {
  topic: "science",
});
console.log(resultTwo);
/*
[
  Document {
    pageContent: 'Mitochondria is the powerhouse of the cell',
    metadata: { topic: 'science' }
  }
]
*/

// Metadata filtering with IN-filters works as well
const resultsThree = await vectorStore.similaritySearch("Irrelevant query", 2, {
  topic: { in: ["greeting"] },
});
console.log(resultsThree);
/*
[
  Document { pageContent: 'Bye bye', metadata: { topic: 'greeting' } },
  Document { pageContent: 'Hello world', metadata: { topic: 'greeting' } }
]
*/

// Upserting is supported as well
await vectorStore.addDocuments(
  [
    {
      pageContent: "ATP is the powerhouse of the cell",
      metadata: { topic: "science" },
    },
  ],
  { ids: [idsInserted[2]] }
);
const resultsFour = await vectorStore.similaritySearch(
  "powerhouse of the cell",
  1
);
console.log(resultsFour);
/*
[
  Document {
    pageContent: 'ATP is the powerhouse of the cell',
    metadata: { topic: 'science' }
  }
]
*/
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [NeonPostgres](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_neon.NeonPostgres.html) from `@langchain/community/vectorstores/neon`
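Conceptually, `similaritySearch` embeds the query string and asks pgvector to rank the stored rows by vector distance. The self-contained sketch below mirrors that ranking step with cosine similarity over tiny hypothetical 3-dimensional embeddings (real embeddings here have 256 dimensions). It illustrates the idea only; it is not the library's implementation, which delegates the ranking to pgvector inside the database.

```typescript
// Illustrative only: how vector similarity ranking works conceptually.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical pre-computed embeddings for two of the stored documents.
const stored: Array<{ pageContent: string; vector: number[] }> = [
  { pageContent: "Hello world", vector: [0.9, 0.1, 0.0] },
  {
    pageContent: "Mitochondria is the powerhouse of the cell",
    vector: [0.0, 0.2, 0.9],
  },
];
const queryVector = [0.8, 0.2, 0.1]; // hypothetical embedding of "hola"

// Rank stored documents by descending cosine similarity to the query.
const ranked = [...stored].sort(
  (x, y) =>
    cosineSimilarity(queryVector, y.vector) -
    cosineSimilarity(queryVector, x.vector)
);
console.log(ranked[0].pageContent);
```

This is also why the Spanish query `"hola"` above retrieves the English `"Hello world"` document: in a multilingual embedding space the two greetings land close together.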
https://js.langchain.com/v0.2/docs/integrations/vectorstores/opensearch
OpenSearch
==========

Compatibility: Only available on Node.js.

[OpenSearch](https://opensearch.org/) is a fork of [Elasticsearch](https://www.elastic.co/elasticsearch/) that is fully compatible with the Elasticsearch API. Read more about their support for Approximate Nearest Neighbors [here](https://opensearch.org/docs/latest/search-plugins/knn/approximate-knn/).
LangChain.js accepts [@opensearch-project/opensearch](https://opensearch.org/docs/latest/clients/javascript/index/) as the client for the OpenSearch vector store.

Setup
-----

Tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install -S @langchain/openai @opensearch-project/opensearch`
* Yarn: `yarn add @langchain/openai @opensearch-project/opensearch`
* pnpm: `pnpm add @langchain/openai @opensearch-project/opensearch`

You'll also need to have an OpenSearch instance running. You can use the [official Docker image](https://opensearch.org/docs/latest/opensearch/install/docker/) to get started. You can also find an example docker-compose file [here](https://github.com/langchain-ai/langchainjs/blob/main/examples/src/indexes/vector_stores/opensearch/docker-compose.yml).

Index docs
----------

```typescript
import { Client } from "@opensearch-project/opensearch";
import { Document } from "langchain/document";
import { OpenAIEmbeddings } from "@langchain/openai";
import { OpenSearchVectorStore } from "langchain/vectorstores/opensearch";

const client = new Client({
  nodes: [process.env.OPENSEARCH_URL ?? "http://127.0.0.1:9200"],
});

const docs = [
  new Document({
    metadata: { foo: "bar" },
    pageContent: "opensearch is also a vector db",
  }),
  new Document({
    metadata: { foo: "bar" },
    pageContent: "the quick brown fox jumped over the lazy dog",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "lorem ipsum dolor sit amet",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent:
      "OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications",
  }),
];

await OpenSearchVectorStore.fromDocuments(docs, new OpenAIEmbeddings(), {
  client,
  indexName: process.env.OPENSEARCH_INDEX, // Will default to `documents`
});
```

Query docs
----------

```typescript
import { Client } from "@opensearch-project/opensearch";
import { VectorDBQAChain } from "langchain/chains";
import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { OpenSearchVectorStore } from "langchain/vectorstores/opensearch";

const client = new Client({
  nodes: [process.env.OPENSEARCH_URL ?? "http://127.0.0.1:9200"],
});

const vectorStore = new OpenSearchVectorStore(new OpenAIEmbeddings(), {
  client,
});

/* Search the vector DB independently with meta filters */
const results = await vectorStore.similaritySearch("hello world", 1);
console.log(JSON.stringify(results, null, 2));
/*
  [
    {
      "pageContent": "Hello world",
      "metadata": { "id": 2 }
    }
  ]
*/

/* Use as part of a chain (currently no metadata filters) */
const model = new OpenAI();
const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
  k: 1,
  returnSourceDocuments: true,
});
const response = await chain.call({ query: "What is opensearch?" });
console.log(JSON.stringify(response, null, 2));
/*
  {
    "text": " Opensearch is a collection of technologies that allow search engines to publish search results in a standard format, making it easier for users to search across multiple sites.",
    "sourceDocuments": [
      {
        "pageContent": "What's this?",
        "metadata": { "id": 3 }
      }
    ]
  }
*/
```
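Under the hood, a similarity search against OpenSearch is an approximate-kNN query. The request body sketched below follows the shape documented by the OpenSearch k-NN plugin; the index name `documents` and the vector field name `embedding` are assumptions for illustration, since the store's actual mapping is an internal detail.

```typescript
// Sketch of an OpenSearch approximate-kNN request body (illustrative only).
// The index name and the "embedding" vector field name are assumed here;
// the store's real mapping may differ.
const queryVector = [0.12, -0.04, 0.88]; // hypothetical query embedding
const k = 1;

const knnSearchRequest = {
  index: "documents",
  body: {
    size: k,
    query: {
      knn: {
        embedding: { vector: queryVector, k },
      },
    },
  },
};

// A raw client call would then look roughly like:
//   const { body } = await client.search(knnSearchRequest);
console.log(JSON.stringify(knnSearchRequest.body.query));
```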
https://js.langchain.com/v0.2/docs/integrations/vectorstores/qdrant
Qdrant
======

[Qdrant](https://qdrant.tech/) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload.

Setup
-----

1. Run a Qdrant instance with Docker on your computer by following the [Qdrant setup instructions](https://qdrant.tech/documentation/quick-start/).

2. Install the Qdrant Node.js SDK:

   * npm: `npm install -S @langchain/qdrant`
   * Yarn: `yarn add @langchain/qdrant`
   * pnpm: `pnpm add @langchain/qdrant`

3. Set up environment variables for Qdrant before running the code:

   ```bash
   export OPENAI_API_KEY=YOUR_OPENAI_API_KEY_HERE
   export QDRANT_URL=YOUR_QDRANT_URL_HERE # for example http://localhost:6333
   ```

Usage
-----

### Create a new index from texts

Tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install @langchain/openai @langchain/community`
* Yarn: `yarn add @langchain/openai @langchain/community`
* pnpm: `pnpm add @langchain/openai @langchain/community`

```typescript
import { QdrantVectorStore } from "@langchain/qdrant";
import { OpenAIEmbeddings } from "@langchain/openai";

// text sample from Godel, Escher, Bach
const vectorStore = await QdrantVectorStore.fromTexts(
  [
    `Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious LittleHarmonic Labyrinth of the dreaded Majotaur?`,
    `Achilles: Yiikes! What is that?`,
    `Tortoise: They say-although I person never believed it myself-that an I Majotaur has created a tiny labyrinth sits in a pit in the middle of it, waiting innocent victims to get lost in its fears complexity. Then, when they wander and dazed into the center, he laughs and laughs at them-so hard, that he laughs them to death!`,
    `Achilles: Oh, no!`,
    `Tortoise: But it's only a myth. Courage, Achilles.`,
  ],
  [{ id: 2 }, { id: 1 }, { id: 3 }, { id: 4 }, { id: 5 }],
  new OpenAIEmbeddings(),
  {
    url: process.env.QDRANT_URL,
    collectionName: "goldel_escher_bach",
  }
);

const response = await vectorStore.similaritySearch("scared", 2);
console.log(response);
/*
[
  Document { pageContent: 'Achilles: Oh, no!', metadata: {} },
  Document { pageContent: 'Achilles: Yiikes! What is that?', metadata: { id: 1 } }
]
*/
```

#### API Reference:

* [QdrantVectorStore](https://v02.api.js.langchain.com/classes/langchain_qdrant.QdrantVectorStore.html) from `@langchain/qdrant`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`

### Create a new index from docs

```typescript
import { QdrantVectorStore } from "@langchain/qdrant";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

const vectorStore = await QdrantVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings(),
  {
    url: process.env.QDRANT_URL,
    collectionName: "a_test_collection",
  }
);

// Search for the most similar document
const response = await vectorStore.similaritySearch("hello", 1);
console.log(response);
/*
[
  Document {
    pageContent: 'Foo\nBar\nBaz\n\n',
    metadata: { source: 'src/document_loaders/example_data/example.txt' }
  }
]
*/
```

#### API Reference:

* [QdrantVectorStore](https://v02.api.js.langchain.com/classes/langchain_qdrant.QdrantVectorStore.html) from `@langchain/qdrant`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`

### Query docs from existing collection

```typescript
import { QdrantVectorStore } from "@langchain/qdrant";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await QdrantVectorStore.fromExistingCollection(
  new OpenAIEmbeddings(),
  {
    url: process.env.QDRANT_URL,
    collectionName: "goldel_escher_bach",
  }
);

const response = await vectorStore.similaritySearch("scared", 2);
console.log(response);
/*
[
  Document { pageContent: 'Achilles: Oh, no!', metadata: {} },
  Document { pageContent: 'Achilles: Yiikes! What is that?', metadata: { id: 1 } }
]
*/
```

#### API Reference:

* [QdrantVectorStore](https://v02.api.js.langchain.com/classes/langchain_qdrant.QdrantVectorStore.html) from `@langchain/qdrant`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
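Qdrant can also restrict a search by payload (metadata) using a `filter` object with `must`/`match` conditions. The structure below follows Qdrant's REST API; whether it can be passed directly as the third argument of `similaritySearch` depends on your `@langchain/qdrant` release, so treat the commented usage line as an assumption to verify.

```typescript
// Illustrative Qdrant payload filter: keep only points whose
// metadata.id payload value equals 1. The key names (must, key, match,
// value) follow Qdrant's REST API filtering syntax.
const qdrantFilter = {
  must: [{ key: "metadata.id", match: { value: 1 } }],
};

// Hypothetical usage against the store created above:
//   const filtered = await vectorStore.similaritySearch("scared", 2, qdrantFilter);
console.log(JSON.stringify(qdrantFilter));
```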
https://js.langchain.com/v0.2/docs/integrations/vectorstores/prisma
Prisma
======

For augmenting existing models in a PostgreSQL database with vector search, LangChain supports using [Prisma](https://www.prisma.io/) together with PostgreSQL and the [`pgvector`](https://github.com/pgvector/pgvector) Postgres extension.
Setup
-----

### Set up a database instance with Supabase

Refer to the [Prisma and Supabase integration guide](https://supabase.com/docs/guides/integrations/prisma) to set up a new database instance with Supabase and Prisma.

### Install Prisma

* npm: `npm install prisma`
* Yarn: `yarn add prisma`
* pnpm: `pnpm add prisma`

### Set up a `pgvector` self-hosted instance with `docker-compose`

`pgvector` provides a prebuilt Docker image that can be used to quickly set up a self-hosted Postgres instance.

```yaml
services:
  db:
    image: ankane/pgvector
    ports:
      - 5432:5432
    volumes:
      - db:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=
      - POSTGRES_USER=
      - POSTGRES_DB=
volumes:
  db:
```

### Create a new schema

Assuming you haven't created a schema yet, create a new model with a `vector` field of type `Unsupported("vector")`:

```prisma
model Document {
  id      String                 @id @default(cuid())
  content String
  vector  Unsupported("vector")?
}
```

Afterwards, create a new migration with `--create-only` to avoid running the migration directly.
```bash
npx prisma migrate dev --create-only
```

Add the following line to the newly created migration to enable the `pgvector` extension if it hasn't been enabled yet:

```sql
CREATE EXTENSION IF NOT EXISTS vector;
```

Run the migration afterwards:

```bash
npx prisma migrate dev
```

Usage
-----

Tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install @langchain/openai @langchain/community`
* Yarn: `yarn add @langchain/openai @langchain/community`
* pnpm: `pnpm add @langchain/openai @langchain/community`

Danger: Table names and column names (in fields such as `tableName`, `vectorColumnName`, `columns` and `filter`) are passed into SQL queries directly without parametrisation. These fields must be sanitized beforehand to avoid SQL injection.
```typescript
import { PrismaVectorStore } from "@langchain/community/vectorstores/prisma";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PrismaClient, Prisma, Document } from "@prisma/client";

export const run = async () => {
  const db = new PrismaClient();

  // Use the `withModel` method to get proper type hints for `metadata` field:
  const vectorStore = PrismaVectorStore.withModel<Document>(db).create(
    new OpenAIEmbeddings(),
    {
      prisma: Prisma,
      tableName: "Document",
      vectorColumnName: "vector",
      columns: {
        id: PrismaVectorStore.IdColumn,
        content: PrismaVectorStore.ContentColumn,
      },
    }
  );

  const texts = ["Hello world", "Bye bye", "What's this?"];
  await vectorStore.addModels(
    await db.$transaction(
      texts.map((content) => db.document.create({ data: { content } }))
    )
  );

  const resultOne = await vectorStore.similaritySearch("Hello world", 1);
  console.log(resultOne);

  // Create an instance with a default filter
  const vectorStore2 = PrismaVectorStore.withModel<Document>(db).create(
    new OpenAIEmbeddings(),
    {
      prisma: Prisma,
      tableName: "Document",
      vectorColumnName: "vector",
      columns: {
        id: PrismaVectorStore.IdColumn,
        content: PrismaVectorStore.ContentColumn,
      },
      filter: {
        content: {
          equals: "default",
        },
      },
    }
  );

  await vectorStore2.addModels(
    await db.$transaction(
      texts.map((content) => db.document.create({ data: { content } }))
    )
  );

  // Uses the default filter, i.e. { content: { equals: "default" } }
  const resultTwo = await vectorStore2.similaritySearch("Hello world", 1);
  console.log(resultTwo);
};
```

#### API Reference:

* [PrismaVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_prisma.PrismaVectorStore.html) from `@langchain/community/vectorstores/prisma`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`

The following SQL operators are available as filters: `equals`, `in`, `isNull`, `isNotNull`, `like`, `lt`, `lte`, `gt`, `gte`, `not`.
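To make those operator semantics concrete, here is a small self-contained evaluator that mirrors what each filter means when applied to a row. This is purely illustrative: the real store compiles the filter into a SQL `WHERE` clause rather than evaluating it in JavaScript, and the `matches` helper below is hypothetical.

```typescript
// Hypothetical evaluator mirroring the filter operators' semantics.
// The actual PrismaVectorStore translates filters into SQL instead.
type Filter = Record<string, Record<string, unknown>>;

function matches(row: Record<string, unknown>, filter: Filter): boolean {
  return Object.entries(filter).every(([column, ops]) =>
    Object.entries(ops).every(([op, value]) => {
      const v = row[column];
      switch (op) {
        case "equals":    return v === value;
        case "not":       return v !== value;
        case "in":        return (value as unknown[]).includes(v);
        case "isNull":    return v === null;
        case "isNotNull": return v !== null;
        case "lt":        return (v as number) < (value as number);
        case "lte":       return (v as number) <= (value as number);
        case "gt":        return (v as number) > (value as number);
        case "gte":       return (v as number) >= (value as number);
        // SQL LIKE: treat "%" as a wildcard, match the whole string
        case "like":
          return (
            typeof v === "string" &&
            new RegExp("^" + String(value).replace(/%/g, ".*") + "$").test(v)
          );
        default:          return false;
      }
    })
  );
}

const row = { content: "Hello world", namespace: "default" };
console.log(matches(row, { content: { like: "Hello%" } }));
console.log(matches(row, { namespace: { in: ["default", "test"] } }));
```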
The samples above use the following schema:

```prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema

generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Document {
  id        String                 @id @default(cuid())
  content   String
  namespace String?                @default("default")
  vector    Unsupported("vector")?
}
```

You can remove `namespace` if you don't need it.
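For intuition, here is a rough illustration of what a subset of those filter operators means in SQL terms. This is an illustrative sketch only — it is not how `PrismaVectorStore` builds queries internally, and real query values must be parametrised, never interpolated as below:

```typescript
// Illustrative translation of a few filter operators into SQL text.
// NOT the library's implementation; it only shows the intended meaning
// of `equals`, `in`, and `isNull`.
type Filter = Record<
  string,
  { equals?: string; in?: string[]; isNull?: boolean }
>;

function describeFilter(filter: Filter): string {
  const clauses: string[] = [];
  for (const [column, ops] of Object.entries(filter)) {
    if (ops.equals !== undefined) clauses.push(`${column} = '${ops.equals}'`);
    if (ops.in !== undefined)
      clauses.push(`${column} IN (${ops.in.map((v) => `'${v}'`).join(", ")})`);
    if (ops.isNull) clauses.push(`${column} IS NULL`);
  }
  return clauses.join(" AND ");
}
```

For example, the default filter from the sample above, `{ content: { equals: "default" } }`, corresponds to the predicate `content = 'default'`.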
https://js.langchain.com/v0.2/docs/integrations/vectorstores/redis
Redis
=====

[Redis](https://redis.io/) is a fast open source, in-memory data store. As part of the [Redis Stack](https://redis.io/docs/stack/get-started/), [RediSearch](https://redis.io/docs/stack/search/) is the module that enables vector similarity semantic search, as well as many other types of searching.

Compatibility: Only available on Node.js.
LangChain.js accepts [node-redis](https://github.com/redis/node-redis) as the client for the Redis vectorstore.

Setup[​](#setup "Direct link to Setup")
---------------------------------------

1. Run Redis with Docker on your computer following [the docs](https://redis.io/docs/stack/get-started/install/docker/#redisredis-stack)
2. Install the node-redis JS client:

```bash
npm install -S redis
```

tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
```

Index docs[​](#index-docs "Direct link to Index docs")
------------------------------------------------------

```typescript
import { createClient } from "redis";
import { OpenAIEmbeddings } from "@langchain/openai";
import { RedisVectorStore } from "@langchain/redis";
import { Document } from "@langchain/core/documents";

const client = createClient({
  url: process.env.REDIS_URL ?? "redis://localhost:6379",
});
await client.connect();

const docs = [
  new Document({
    metadata: { foo: "bar" },
    pageContent: "redis is fast",
  }),
  new Document({
    metadata: { foo: "bar" },
    pageContent: "the quick brown fox jumped over the lazy dog",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "lorem ipsum dolor sit amet",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "consectetur adipiscing elit",
  }),
];

const vectorStore = await RedisVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings(),
  {
    redisClient: client,
    indexName: "docs",
  }
);

await client.disconnect();
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [RedisVectorStore](https://v02.api.js.langchain.com/classes/langchain_redis.RedisVectorStore.html) from `@langchain/redis`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`

Query docs[​](#query-docs "Direct link to Query docs")
------------------------------------------------------

```typescript
import { createClient } from "redis";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { RedisVectorStore } from "@langchain/redis";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

const client = createClient({
  url: process.env.REDIS_URL ?? "redis://localhost:6379",
});
await client.connect();

const vectorStore = new RedisVectorStore(new OpenAIEmbeddings(), {
  redisClient: client,
  indexName: "docs",
});

/* Simple standalone search in the vector DB */
const simpleRes = await vectorStore.similaritySearch("redis", 1);
console.log(simpleRes);
/*
[ Document { pageContent: "redis is fast", metadata: { foo: "bar" } } ]
*/

/* Search in the vector DB using filters */
const filterRes = await vectorStore.similaritySearch("redis", 3, ["qux"]);
console.log(filterRes);
/*
[
  Document {
    pageContent: "consectetur adipiscing elit",
    metadata: { baz: "qux" },
  },
  Document {
    pageContent: "lorem ipsum dolor sit amet",
    metadata: { baz: "qux" },
  }
]
*/

/* Usage as part of a chain */
const model = new ChatOpenAI({ model: "gpt-3.5-turbo-1106" });
const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  ["human", "{input}"],
]);

const combineDocsChain = await createStuffDocumentsChain({
  llm: model,
  prompt: questionAnsweringPrompt,
});

const chain = await createRetrievalChain({
  retriever: vectorStore.asRetriever(),
  combineDocsChain,
});

const chainRes = await chain.invoke({ input: "What did the fox do?" });
console.log(chainRes);
/*
{
  input: 'What did the fox do?',
  chat_history: [],
  context: [
    Document { pageContent: 'the quick brown fox jumped over the lazy dog', metadata: [Object] },
    Document { pageContent: 'lorem ipsum dolor sit amet', metadata: [Object] },
    Document { pageContent: 'consectetur adipiscing elit', metadata: [Object] },
    Document { pageContent: 'redis is fast', metadata: [Object] }
  ],
  answer: 'The fox jumped over the lazy dog.'
}
*/

await client.disconnect();
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [RedisVectorStore](https://v02.api.js.langchain.com/classes/langchain_redis.RedisVectorStore.html) from `@langchain/redis`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [createStuffDocumentsChain](https://v02.api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents`
* [createRetrievalChain](https://v02.api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval`

Create index with options[​](#create-index-with-options "Direct link to Create index with options")
---------------------------------------------------------------------------------------------------

To pass arguments for [index creation](https://redis.io/commands/ft.create/), you can use the [available options](https://github.com/redis/node-redis/blob/294cbf8367295ac81cbe51ce2932493ab80493f1/packages/search/lib/commands/CREATE.ts#L4) offered by [node-redis](https://github.com/redis/node-redis) through the `createIndexOptions` parameter.

```typescript
import { createClient } from "redis";
import { OpenAIEmbeddings } from "@langchain/openai";
import { RedisVectorStore } from "@langchain/redis";
import { Document } from "@langchain/core/documents";

const client = createClient({
  url: process.env.REDIS_URL ?? "redis://localhost:6379",
});
await client.connect();

const docs = [
  new Document({
    metadata: { foo: "bar" },
    pageContent: "redis is fast",
  }),
  new Document({
    metadata: { foo: "bar" },
    pageContent: "the quick brown fox jumped over the lazy dog",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "lorem ipsum dolor sit amet",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "consectetur adipiscing elit",
  }),
];

const vectorStore = await RedisVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings(),
  {
    redisClient: client,
    indexName: "docs",
    createIndexOptions: {
      TEMPORARY: 1000,
    },
  }
);

await client.disconnect();
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [RedisVectorStore](https://v02.api.js.langchain.com/classes/langchain_redis.RedisVectorStore.html) from `@langchain/redis`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`

Delete an index[​](#delete-an-index "Direct link to Delete an index")
---------------------------------------------------------------------

```typescript
import { createClient } from "redis";
import { OpenAIEmbeddings } from "@langchain/openai";
import { RedisVectorStore } from "@langchain/redis";
import { Document } from "@langchain/core/documents";

const client = createClient({
  url: process.env.REDIS_URL ?? "redis://localhost:6379",
});
await client.connect();

const docs = [
  new Document({
    metadata: { foo: "bar" },
    pageContent: "redis is fast",
  }),
  new Document({
    metadata: { foo: "bar" },
    pageContent: "the quick brown fox jumped over the lazy dog",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "lorem ipsum dolor sit amet",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "consectetur adipiscing elit",
  }),
];

const vectorStore = await RedisVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings(),
  {
    redisClient: client,
    indexName: "docs",
  }
);

await vectorStore.delete({ deleteAll: true });
await client.disconnect();
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [RedisVectorStore](https://v02.api.js.langchain.com/classes/langchain_redis.RedisVectorStore.html) from `@langchain/redis`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
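The string filters shown above (e.g. `["qux"]`) are matched against metadata as RediSearch TAG values, and RediSearch requires punctuation inside tag values to be backslash-escaped in queries. If your tag values may contain punctuation, a conservative escaper could look like this (a sketch under that assumption — the helper is not part of the LangChain API):

```typescript
// Escape every character outside [A-Za-z0-9_] with a backslash, so the value
// can be embedded safely in a RediSearch TAG query like @field:{value}.
function escapeRedisTagValue(value: string): string {
  return value.replace(/[^A-Za-z0-9_]/g, (ch) => `\\${ch}`);
}
```

For example, `escapeRedisTagValue("foo-bar")` yields `foo\-bar`, which RediSearch reads as a single tag rather than two tokens separated by a dash.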
https://js.langchain.com/v0.2/docs/integrations/vectorstores/rockset
Rockset
=======

[Rockset](https://rockset.com) is a real-time analytics SQL database that runs in the cloud. Rockset provides vector search capabilities, in the form of [SQL functions](https://rockset.com/docs/vector-functions/#vector-distance-functions), to support AI applications that rely on text similarity.
Setup[​](#setup "Direct link to Setup")
---------------------------------------

Install the Rockset client:

```bash
yarn add @rockset/client
```

### Usage[​](#usage "Direct link to Usage")

tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
```

Below is an example showcasing how to use OpenAI and Rockset to answer questions about a text file:

```typescript
import * as rockset from "@rockset/client";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { RocksetStore } from "@langchain/community/vectorstores/rockset";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { readFileSync } from "fs";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

const store = await RocksetStore.withNewCollection(new OpenAIEmbeddings(), {
  client: rockset.default.default(
    process.env.ROCKSET_API_KEY ?? "",
    `https://api.${process.env.ROCKSET_API_REGION ?? "usw2a1"}.rockset.com`
  ),
  collectionName: "langchain_demo",
});

const model = new ChatOpenAI({ model: "gpt-3.5-turbo-1106" });
const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  ["human", "{input}"],
]);

const combineDocsChain = await createStuffDocumentsChain({
  llm: model,
  prompt: questionAnsweringPrompt,
});

const chain = await createRetrievalChain({
  retriever: store.asRetriever(),
  combineDocsChain,
});

const text = readFileSync("state_of_the_union.txt", "utf8");
const docs = await new RecursiveCharacterTextSplitter().createDocuments([text]);
await store.addDocuments(docs);

const response = await chain.invoke({
  input: "When was America founded?",
});
console.log(response.answer);

await store.destroy();
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [RocksetStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_rockset.RocksetStore.html) from `@langchain/community/vectorstores/rockset`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [createStuffDocumentsChain](https://v02.api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents`
* [createRetrievalChain](https://v02.api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval`
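Under the hood, Rockset's [vector distance functions](https://rockset.com/docs/vector-functions/#vector-distance-functions) rank documents by the similarity of their embedding vectors to the query embedding; cosine similarity is the usual measure for text embeddings. For intuition only, the quantity being computed can be sketched as:

```typescript
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
// Rockset evaluates this server-side in SQL; this sketch just shows the math.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Identical directions score 1, orthogonal vectors score 0, which is why higher similarity means a more relevant document.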
https://js.langchain.com/v0.2/docs/integrations/vectorstores/tigris
[OpenAI](/v0.2/docs/integrations/platforms/openai) * [Components](/v0.2/docs/integrations/components) * [Chat models](/v0.2/docs/integrations/chat/) * [LLMs](/v0.2/docs/integrations/llms/) * [Embedding models](/v0.2/docs/integrations/text_embedding) * [Document loaders](/v0.2/docs/integrations/document_loaders) * [Document transformers](/v0.2/docs/integrations/document_transformers) * [Vector stores](/v0.2/docs/integrations/vectorstores) * [Memory](/v0.2/docs/integrations/vectorstores/memory) * [AnalyticDB](/v0.2/docs/integrations/vectorstores/analyticdb) * [Astra DB](/v0.2/docs/integrations/vectorstores/astradb) * [Azure AI Search](/v0.2/docs/integrations/vectorstores/azure_aisearch) * [Azure Cosmos DB](/v0.2/docs/integrations/vectorstores/azure_cosmosdb) * [Cassandra](/v0.2/docs/integrations/vectorstores/cassandra) * [Chroma](/v0.2/docs/integrations/vectorstores/chroma) * [ClickHouse](/v0.2/docs/integrations/vectorstores/clickhouse) * [CloseVector](/v0.2/docs/integrations/vectorstores/closevector) * [Cloudflare Vectorize](/v0.2/docs/integrations/vectorstores/cloudflare_vectorize) * [Convex](/v0.2/docs/integrations/vectorstores/convex) * [Couchbase](/v0.2/docs/integrations/vectorstores/couchbase) * [Elasticsearch](/v0.2/docs/integrations/vectorstores/elasticsearch) * [Faiss](/v0.2/docs/integrations/vectorstores/faiss) * [Google Vertex AI Matching Engine](/v0.2/docs/integrations/vectorstores/googlevertexai) * [SAP HANA Cloud Vector Engine](/v0.2/docs/integrations/vectorstores/hanavector) * [HNSWLib](/v0.2/docs/integrations/vectorstores/hnswlib) * [LanceDB](/v0.2/docs/integrations/vectorstores/lancedb) * [Milvus](/v0.2/docs/integrations/vectorstores/milvus) * [Momento Vector Index (MVI)](/v0.2/docs/integrations/vectorstores/momento_vector_index) * [MongoDB Atlas](/v0.2/docs/integrations/vectorstores/mongodb_atlas) * [MyScale](/v0.2/docs/integrations/vectorstores/myscale) * [Neo4j Vector Index](/v0.2/docs/integrations/vectorstores/neo4jvector) * [Neon 
Postgres](/v0.2/docs/integrations/vectorstores/neon) * [OpenSearch](/v0.2/docs/integrations/vectorstores/opensearch) * [PGVector](/v0.2/docs/integrations/vectorstores/pgvector) * [Pinecone](/v0.2/docs/integrations/vectorstores/pinecone) * [Prisma](/v0.2/docs/integrations/vectorstores/prisma) * [Qdrant](/v0.2/docs/integrations/vectorstores/qdrant) * [Redis](/v0.2/docs/integrations/vectorstores/redis) * [Rockset](/v0.2/docs/integrations/vectorstores/rockset) * [SingleStore](/v0.2/docs/integrations/vectorstores/singlestore) * [Supabase](/v0.2/docs/integrations/vectorstores/supabase) * [Tigris](/v0.2/docs/integrations/vectorstores/tigris) * [Turbopuffer](/v0.2/docs/integrations/vectorstores/turbopuffer) * [TypeORM](/v0.2/docs/integrations/vectorstores/typeorm) * [Typesense](/v0.2/docs/integrations/vectorstores/typesense) * [Upstash Vector](/v0.2/docs/integrations/vectorstores/upstash) * [USearch](/v0.2/docs/integrations/vectorstores/usearch) * [Vectara](/v0.2/docs/integrations/vectorstores/vectara) * [Vercel Postgres](/v0.2/docs/integrations/vectorstores/vercel_postgres) * [Voy](/v0.2/docs/integrations/vectorstores/voy) * [Weaviate](/v0.2/docs/integrations/vectorstores/weaviate) * [Xata](/v0.2/docs/integrations/vectorstores/xata) * [Zep](/v0.2/docs/integrations/vectorstores/zep) * [Retrievers](/v0.2/docs/integrations/retrievers) * [Tools](/v0.2/docs/integrations/tools) * [Toolkits](/v0.2/docs/integrations/toolkits) * [Stores](/v0.2/docs/integrations/stores/)

Tigris
======

Tigris makes it easy to build AI applications with vector embeddings. It is a fully managed, cloud-native database that lets you store and index documents and vector embeddings for fast, scalable vector search.

Compatibility: Only available on Node.js.

Setup[​](#setup "Direct link to Setup")
---------------------------------------

### 1. Install the Tigris SDK

Install the SDK as follows:

```shell
npm install -S @tigrisdata/vector
# or
yarn add @tigrisdata/vector
# or
pnpm add @tigrisdata/vector
```

### 2. Fetch Tigris API credentials

You can sign up for a free Tigris account [here](https://www.tigrisdata.com/). Once you have signed up, create a new project called `vectordemo`. Next, make a note of the `clientId` and `clientSecret`, which you can find in the Application Keys section of the project.

Index docs[​](#index-docs "Direct link to Index docs")
------------------------------------------------------

tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```shell
npm install -S @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

```typescript
import { VectorDocumentStore } from "@tigrisdata/vector";
import { Document } from "langchain/document";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TigrisVectorStore } from "langchain/vectorstores/tigris";

const index = new VectorDocumentStore({
  connection: {
    serverUrl: "api.preview.tigrisdata.cloud",
    projectName: process.env.TIGRIS_PROJECT,
    clientId: process.env.TIGRIS_CLIENT_ID,
    clientSecret: process.env.TIGRIS_CLIENT_SECRET,
  },
  indexName: "examples_index",
  numDimensions: 1536, // match the OpenAI embedding size
});

const docs = [
  new Document({
    metadata: { foo: "bar" },
    pageContent: "tigris is a cloud-native vector db",
  }),
  new Document({
    metadata: { foo: "bar" },
    pageContent: "the quick brown fox jumped over the lazy dog",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "lorem ipsum dolor sit amet",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "tigris is a river",
  }),
];

await TigrisVectorStore.fromDocuments(docs, new OpenAIEmbeddings(), { index });
```

Query docs[​](#query-docs "Direct link to Query docs")
------------------------------------------------------

```typescript
import { VectorDocumentStore } from "@tigrisdata/vector";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TigrisVectorStore } from "langchain/vectorstores/tigris";

const index = new VectorDocumentStore({
  connection: {
    serverUrl: "api.preview.tigrisdata.cloud",
    projectName: process.env.TIGRIS_PROJECT,
    clientId: process.env.TIGRIS_CLIENT_ID,
    clientSecret: process.env.TIGRIS_CLIENT_SECRET,
  },
  indexName: "examples_index",
  numDimensions: 1536, // match the OpenAI embedding size
});

const vectorStore = await TigrisVectorStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { index }
);

/* Search the vector DB independently with metadata filters */
const results = await vectorStore.similaritySearch("tigris", 1, {
  "metadata.foo": "bar",
});

console.log(JSON.stringify(results, null, 2));
/*
[
  Document {
    pageContent: 'tigris is a cloud-native vector db',
    metadata: { foo: 'bar' }
  }
]
*/
```

* * * #### Was this page helpful? #### You can leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous Supabase ](/v0.2/docs/integrations/vectorstores/supabase)[ Next Turbopuffer ](/v0.2/docs/integrations/vectorstores/turbopuffer) * [Setup](#setup) * [1. Install the Tigris SDK](#1-install-the-tigris-sdk) * [2. Fetch Tigris API credentials](#2-fetch-tigris-api-credentials) * [Index docs](#index-docs) * [Query docs](#query-docs) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
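The Tigris snippets above read the project name and credentials from the environment. Assuming the variable names used in that code (and the `vectordemo` project from the setup step), you would set them before running:

```shell
export TIGRIS_PROJECT=vectordemo
export TIGRIS_CLIENT_ID=<YOUR_CLIENT_ID>
export TIGRIS_CLIENT_SECRET=<YOUR_CLIENT_SECRET>
```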
https://js.langchain.com/v0.2/docs/integrations/vectorstores/turbopuffer
Turbopuffer
===========

Setup[​](#setup "Direct link to Setup")
---------------------------------------

First, sign up for a Turbopuffer account [here](https://turbopuffer.com/join). Once you have an account, you can create an API key.
Set your API key as an environment variable:

```shell
export TURBOPUFFER_API_KEY=<YOUR_API_KEY>
```

Usage[​](#usage "Direct link to Usage")
---------------------------------------

Here are some examples of how to use the class. You can filter your queries by previously specified metadata, but keep in mind that currently only string values are supported. See [here for more information](https://turbopuffer.com/docs/reference/query#filter-parameters) on acceptable filter formats.

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { TurbopufferVectorStore } from "@langchain/community/vectorstores/turbopuffer";

const embeddings = new OpenAIEmbeddings();

const store = new TurbopufferVectorStore(embeddings, {
  apiKey: process.env.TURBOPUFFER_API_KEY,
  namespace: "my-namespace",
});

const createdAt = new Date().getTime();

// Add some documents to your store.
// Currently, only string metadata values are supported.
const ids = await store.addDocuments([
  {
    pageContent: "some content",
    metadata: { created_at: createdAt.toString() },
  },
  { pageContent: "hi", metadata: { created_at: (createdAt + 1).toString() } },
  { pageContent: "bye", metadata: { created_at: (createdAt + 2).toString() } },
  {
    pageContent: "what's this",
    metadata: { created_at: (createdAt + 3).toString() },
  },
]);

// Retrieve documents from the store
const results = await store.similaritySearch("hello", 1);

console.log(results);
/*
  [ Document { pageContent: 'hi', metadata: { created_at: '1705519164987' } } ]
*/

// Filter by metadata
// See https://turbopuffer.com/docs/reference/query#filter-parameters for more on
// allowed filters
const results2 = await store.similaritySearch("hello", 1, {
  created_at: [["Eq", (createdAt + 3).toString()]],
});

console.log(results2);
/*
  [
    Document {
      pageContent: "what's this",
      metadata: { created_at: '1705519164989' }
    }
  ]
*/

// Upsert by passing ids
await store.addDocuments(
  [
    { pageContent: "changed", metadata: { created_at: createdAt.toString() } },
    {
      pageContent: "hi changed",
      metadata: { created_at: (createdAt + 1).toString() },
    },
    {
      pageContent: "bye changed",
      metadata: { created_at: (createdAt + 2).toString() },
    },
    {
      pageContent: "what's this changed",
      metadata: { created_at: (createdAt + 3).toString() },
    },
  ],
  { ids }
);

// Filter by metadata
const results3 = await store.similaritySearch("hello", 10, {
  created_at: [["Eq", (createdAt + 3).toString()]],
});

console.log(results3);
/*
  [
    Document {
      pageContent: "what's this changed",
      metadata: { created_at: '1705519164989' }
    }
  ]
*/

// Remove all vectors from the namespace.
await store.delete({
  deleteIndex: true,
});
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TurbopufferVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_turbopuffer.TurbopufferVectorStore.html) from `@langchain/community/vectorstores/turbopuffer`
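Because Turbopuffer currently accepts only string metadata values, numeric fields such as the `created_at` timestamps above must be stringified before documents are added. A small helper makes that explicit (the function name is ours, not part of `@langchain/community`):

```typescript
// Coerce every metadata value to a string so it is accepted by Turbopuffer.
function stringifyMetadata(
  metadata: Record<string, unknown>
): Record<string, string> {
  return Object.fromEntries(
    Object.entries(metadata).map(([key, value]) => [key, String(value)])
  );
}
```

For example, `stringifyMetadata({ created_at: Date.now() })` yields an object whose `created_at` is the timestamp as a string, ready to pass to `addDocuments`.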
https://js.langchain.com/v0.2/docs/integrations/vectorstores/typeorm
TypeORM
=======

To enable vector search in a generic PostgreSQL database, LangChain.js supports using [TypeORM](https://typeorm.io/) with the [`pgvector`](https://github.com/pgvector/pgvector) Postgres extension.
Setup[​](#setup "Direct link to Setup")
---------------------------------------

To work with TypeORM, you need to install the `typeorm` and `pg` packages:

```shell
npm install typeorm pg
# or
yarn add typeorm pg
# or
pnpm add typeorm pg
```

tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```shell
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```

### Setup a `pgvector` self hosted instance with `docker-compose`[​](#setup-a-pgvector-self-hosted-instance-with-docker-compose "Direct link to setup-a-pgvector-self-hosted-instance-with-docker-compose")

`pgvector` provides a prebuilt Docker image that can be used to quickly set up a self-hosted Postgres instance. Create a file named `docker-compose.yml` with the following contents:

```yaml
services:
  db:
    image: ankane/pgvector
    ports:
      - 5432:5432
    volumes:
      - ./data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=ChangeMe
      - POSTGRES_USER=myuser
      - POSTGRES_DB=api
```

Then, in the same directory, run `docker compose up` to start the container. You can find more information on how to set up `pgvector` in the [official repository](https://github.com/pgvector/pgvector).
Usage[​](#usage "Direct link to Usage")
---------------------------------------

One complete example of using `TypeORMVectorStore` is the following:

```typescript
import { DataSourceOptions } from "typeorm";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TypeORMVectorStore } from "@langchain/community/vectorstores/typeorm";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/typeorm
export const run = async () => {
  const args = {
    postgresConnectionOptions: {
      type: "postgres",
      host: "localhost",
      port: 5432,
      username: "myuser",
      password: "ChangeMe",
      database: "api",
    } as DataSourceOptions,
  };

  const typeormVectorStore = await TypeORMVectorStore.fromDataSource(
    new OpenAIEmbeddings(),
    args
  );

  await typeormVectorStore.ensureTableInDatabase();

  await typeormVectorStore.addDocuments([
    { pageContent: "what's this", metadata: { a: 2 } },
    { pageContent: "Cat drinks milk", metadata: { a: 1 } },
  ]);

  const results = await typeormVectorStore.similaritySearch("hello", 2);

  console.log(results);
};
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TypeORMVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_typeorm.TypeORMVectorStore.html) from `@langchain/community/vectorstores/typeorm`
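Hard-coding credentials as in the example is fine for a demo; in practice you would assemble the connection options from environment variables. A minimal sketch (the `PG_*` variable names are our own convention, not something LangChain or TypeORM requires):

```typescript
// Build Postgres connection options from environment variables,
// falling back to the demo values used above.
function connectionOptionsFromEnv(env: Record<string, string | undefined>) {
  return {
    type: "postgres" as const,
    host: env.PG_HOST ?? "localhost",
    port: Number(env.PG_PORT ?? 5432),
    username: env.PG_USER ?? "myuser",
    password: env.PG_PASSWORD ?? "ChangeMe",
    database: env.PG_DB ?? "api",
  };
}

// e.g. pass the result as `postgresConnectionOptions`:
// const args = { postgresConnectionOptions: connectionOptionsFromEnv(process.env) as DataSourceOptions };
```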
https://js.langchain.com/v0.2/docs/integrations/vectorstores/typesense
Typesense
=========

Vector store that utilizes the Typesense search engine.

### Basic Usage[​](#basic-usage "Direct link to Basic Usage")

tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```shell
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

```typescript
import { Typesense, TypesenseConfig } from "langchain/vectorstores/typesense";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Client } from "typesense";
import { Document } from "langchain/document";

const vectorTypesenseClient = new Client({
  nodes: [
    {
      // Ideally should come from your .env file
      host: "...",
      port: 123,
      protocol: "https",
    },
  ],
  // Ideally should come from your .env file
  apiKey: "...",
  numRetries: 3,
  connectionTimeoutSeconds: 60,
});

const typesenseVectorStoreConfig = {
  // Typesense client
  typesenseClient: vectorTypesenseClient,
  // Name of the collection to store the vectors in
  schemaName: "your_schema_name",
  // Optional column names to be used in Typesense
  columnNames: {
    // "vec" is the default name for the vector column in Typesense but you can change it to whatever you want
    vector: "vec",
    // "text" is the default name for the text column in Typesense but you can change it to whatever you want
    pageContent: "text",
    // Names of the columns that you will save in your typesense schema and need to be retrieved as metadata when searching
    metadataColumnNames: ["foo", "bar", "baz"],
  },
  // Optional search parameters to be passed to Typesense when searching
  searchParams: {
    q: "*",
    filter_by: "foo:[fooo]",
    query_by: "",
  },
  // You can override the default Typesense import function if you want to do something more complex
  // Default import function:
  // async importToTypesense<
  //   T extends Record<string, unknown> = Record<string, unknown>
  // >(data: T[], collectionName: string) {
  //   const chunkSize = 2000;
  //   for (let i = 0; i < data.length; i += chunkSize) {
  //     const chunk = data.slice(i, i + chunkSize);
  //     await this.caller.call(async () => {
  //       await this.client
  //         .collections<T>(collectionName)
  //         .documents()
  //         .import(chunk, { action: "emplace", dirty_values: "drop" });
  //     });
  //   }
  // }
  import: async (data, collectionName) => {
    await vectorTypesenseClient
      .collections(collectionName)
      .documents()
      .import(data, { action: "emplace", dirty_values: "drop" });
  },
} satisfies TypesenseConfig;

/**
 * Creates a Typesense vector store from a list of documents.
 * Will update documents if there is a document with the same id, at least with the default import function.
 * @param documents list of documents to create the vector store from
 * @returns Typesense vector store
 */
const createVectorStoreWithTypesense = async (documents: Document[] = []) =>
  Typesense.fromDocuments(
    documents,
    new OpenAIEmbeddings(),
    typesenseVectorStoreConfig
  );

/**
 * Returns a Typesense vector store from an existing index.
 * @returns Typesense vector store
 */
const getVectorStoreWithTypesense = async () =>
  new Typesense(new OpenAIEmbeddings(), typesenseVectorStoreConfig);

// Do a similarity search
const vectorStore = await getVectorStoreWithTypesense();
const documents = await vectorStore.similaritySearch("hello world");

// Add filters based on metadata with the search parameters of Typesense.
// This will exclude documents with author:JK Rowling, so if Joe Rowling & JK Rowling
// exist, only Joe Rowling will be returned.
vectorStore.similaritySearch("Rowling", undefined, {
  filter_by: "author:!=JK Rowling",
});

// Delete documents
vectorStore.deleteDocuments(["document_id_1", "document_id_2"]);
```

### Constructor[​](#constructor "Direct link to Constructor")

Before starting, create a schema in Typesense with an id, a field for the vector, and a field for the text. Add as many other fields as needed for the metadata.

* `constructor(embeddings: Embeddings, config: TypesenseConfig)`: Constructs a new instance of the `Typesense` class.
  * `embeddings`: An instance of the `Embeddings` class used for embedding documents.
  * `config`: Configuration object for the Typesense vector store.
    * `typesenseClient`: Typesense client instance.
    * `schemaName`: Name of the Typesense schema in which documents will be stored and searched.
    * `searchParams` (optional): Typesense search parameters. Default is `{ q: '*', per_page: 5, query_by: '' }`.
    * `columnNames` (optional): Column names configuration.
      * `vector` (optional): Vector column name. Default is `'vec'`.
      * `pageContent` (optional): Page content column name. Default is `'text'`.
      * `metadataColumnNames` (optional): Metadata column names. Default is an empty array `[]`.
    * `import` (optional): Overrides the default function used to import data into Typesense. Note that a custom import function can change how documents with existing IDs are updated.

### Methods[​](#methods "Direct link to Methods")

* `async addDocuments(documents: Document[]): Promise<void>`: Adds documents to the vector store. A document will be updated if one with the same ID already exists.
* `static async fromDocuments(docs: Document[], embeddings: Embeddings, config: TypesenseConfig): Promise<Typesense>`: Creates a Typesense vector store from a list of documents. Documents are added to the vector store during construction.
* `static async fromTexts(texts: string[], metadatas: object[], embeddings: Embeddings, config: TypesenseConfig): Promise<Typesense>`: Creates a Typesense vector store from a list of texts and associated metadata. Texts are converted to documents and added to the vector store during construction.
* `async similaritySearch(query: string, k?: number, filter?: Record<string, unknown>): Promise<Document[]>`: Searches for similar documents based on a query. Returns an array of similar documents.
* `async deleteDocuments(documentIds: string[]): Promise<void>`: Deletes documents from the vector store based on their IDs.
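The constructor notes above require a pre-existing Typesense collection with a vector field and a text field. As a minimal sketch, the collection schema could look like the following. The field names match the `columnNames` defaults (`vec`, `text`); the embedding dimension of 1536 is an assumption (it matches OpenAI's `text-embedding-ada-002`) and should be set to whatever your embedding model produces.

```typescript
// Hypothetical schema for the collection referenced as "your_schema_name" above.
// Typesense assigns document ids automatically; extra fields like "foo" become
// searchable metadata columns.
const vectorSchema = {
  name: "your_schema_name",
  fields: [
    // Vector field: "float[]" with num_dim set to the embedding size (assumed 1536).
    { name: "vec", type: "float[]", num_dim: 1536 },
    // Text field holding the document's pageContent.
    { name: "text", type: "string" },
    // Example optional metadata field.
    { name: "foo", type: "string", optional: true },
  ] as Array<{ name: string; type: string; num_dim?: number; optional?: boolean }>,
};

console.log(vectorSchema.name); // your_schema_name

// With a configured client, the collection could then be created once:
// await vectorTypesenseClient.collections().create(vectorSchema);
```

This only needs to run once per collection; afterwards the vector store can read from and write to it.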
Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.2/docs/integrations/vectorstores/upstash
Upstash Vector
==============

Upstash Vector is a REST-based serverless vector database, designed for working with vector embeddings.

Setup[​](#setup "Direct link to Setup")
---------------------------------------

1. Create an Upstash Vector index.

   You can create an index from the [Upstash Console](https://console.upstash.com/vector).
   For further reference, see the [docs](https://upstash.com/docs/vector/overall/getstarted).

2. Install the Upstash Vector SDK:

```bash
npm install -S @upstash/vector
# or: yarn add @upstash/vector
# or: pnpm add @upstash/vector
```

The examples below use OpenAI for embeddings. However, you can also create the embeddings using any model of your choice that is available in LangChain.

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or: yarn add @langchain/openai @langchain/community
# or: pnpm add @langchain/openai @langchain/community
```

Create Upstash Vector Client[​](#create-upstash-vector-client "Direct link to Create Upstash Vector Client")
------------------------------------------------------------------------------------------------------------

There are two ways to create the client: you can either pass the credentials manually from the `.env` file (or as string variables), or you can retrieve the credentials from the environment automatically.
```typescript
import { Index } from "@upstash/vector";
import { OpenAIEmbeddings } from "@langchain/openai";
import { UpstashVectorStore } from "@langchain/community/vectorstores/upstash";

const embeddings = new OpenAIEmbeddings({});

// Creating the index with the provided credentials.
const indexWithCredentials = new Index({
  url: process.env.UPSTASH_VECTOR_REST_URL as string,
  token: process.env.UPSTASH_VECTOR_REST_TOKEN as string,
});

const storeWithCredentials = new UpstashVectorStore(embeddings, {
  index: indexWithCredentials,
});

// Creating the index from the environment variables automatically.
const indexFromEnv = new Index();

const storeFromEnv = new UpstashVectorStore(embeddings, {
  index: indexFromEnv,
});
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [UpstashVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_upstash.UpstashVectorStore.html) from `@langchain/community/vectorstores/upstash`

Index and Query Documents[​](#index-and-query-documents "Direct link to Index and Query Documents")
---------------------------------------------------------------------------------------------------

You can index LangChain documents with any model of your choice and perform a search over them. It's possible to apply metadata filtering to the search results; see [the related docs here](https://upstash.com/docs/vector/features/filtering).
```typescript
import { Index } from "@upstash/vector";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { UpstashVectorStore } from "@langchain/community/vectorstores/upstash";

const index = new Index({
  url: process.env.UPSTASH_VECTOR_REST_URL as string,
  token: process.env.UPSTASH_VECTOR_REST_TOKEN as string,
});

const embeddings = new OpenAIEmbeddings({});

const UpstashVector = new UpstashVectorStore(embeddings, { index });

// Creating the docs to be indexed.
const id = new Date().getTime();
const documents = [
  new Document({
    metadata: { name: id },
    pageContent: "Hello there!",
  }),
  new Document({
    metadata: { name: id },
    pageContent: "What are you building?",
  }),
  new Document({
    metadata: { time: id },
    pageContent: "Upstash Vector is great for building AI applications.",
  }),
  new Document({
    metadata: { time: id },
    pageContent: "To be, or not to be, that is the question.",
  }),
];

// Creating embeddings from the provided documents, and adding them to the Upstash database.
await UpstashVector.addDocuments(documents);

// Waiting for the vectors to be indexed in the vector store.
// eslint-disable-next-line no-promise-executor-return
await new Promise((resolve) => setTimeout(resolve, 1000));

const queryResult = await UpstashVector.similaritySearchWithScore(
  "Vector database",
  2
);

console.log(queryResult);
/**
[
  [
    Document {
      pageContent: 'Upstash Vector is great for building AI applications.',
      metadata: [Object]
    },
    0.9016147
  ],
  [
    Document {
      pageContent: 'What are you building?',
      metadata: [Object]
    },
    0.8613077
  ]
]
 */
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [UpstashVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_upstash.UpstashVectorStore.html) from `@langchain/community/vectorstores/upstash`

Delete Documents[​](#delete-documents "Direct link to Delete Documents")
------------------------------------------------------------------------

You can also delete the documents you've indexed previously.

```typescript
import { Index } from "@upstash/vector";
import { OpenAIEmbeddings } from "@langchain/openai";
import { UpstashVectorStore } from "@langchain/community/vectorstores/upstash";

const index = new Index({
  url: process.env.UPSTASH_VECTOR_REST_URL as string,
  token: process.env.UPSTASH_VECTOR_REST_TOKEN as string,
});

const embeddings = new OpenAIEmbeddings({});

const UpstashVector = new UpstashVectorStore(embeddings, { index });

// Creating the docs to be indexed.
const createdAt = new Date().getTime();
const IDs = await UpstashVector.addDocuments([
  { pageContent: "hello", metadata: { a: createdAt + 1 } },
  { pageContent: "car", metadata: { a: createdAt } },
  { pageContent: "adjective", metadata: { a: createdAt } },
  { pageContent: "hi", metadata: { a: createdAt } },
]);

// Waiting for the vectors to be indexed in the vector store.
// eslint-disable-next-line no-promise-executor-return
await new Promise((resolve) => setTimeout(resolve, 1000));

await UpstashVector.delete({ ids: [IDs[0], IDs[2], IDs[3]] });
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [UpstashVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_upstash.UpstashVectorStore.html) from `@langchain/community/vectorstores/upstash`
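The Upstash docs linked earlier describe metadata filtering with SQL-like string expressions (e.g. `"a = 1 AND b > 2"`). As a small sketch, a filter over the `name`/`time` metadata fields used in the examples above could be composed like this; the `and` helper and the trailing store call are illustrative, assuming the store forwards the filter string to Upstash:

```typescript
// Compose SQL-like Upstash filter clauses (illustrative helper, not part of the SDK).
const and = (...clauses: string[]): string => clauses.join(" AND ");

// Field names match the metadata used in the indexing example above.
const filter = and("time > 0", "name = 42");
console.log(filter); // time > 0 AND name = 42

// Hypothetical filtered search, assuming the filter string is passed through:
// const results = await UpstashVector.similaritySearch("Hello", 2, filter);
```

See the Upstash filtering docs for the full expression syntax (comparison operators, `AND`/`OR`, string literals in single quotes).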
https://js.langchain.com/v0.2/docs/integrations/vectorstores/usearch
USearch
=======

Compatibility: Only available on Node.js.

[USearch](https://github.com/unum-cloud/usearch) is a library for efficient similarity search and clustering of dense vectors.
Setup[​](#setup "Direct link to Setup")
---------------------------------------

Install the [usearch](https://github.com/unum-cloud/usearch/tree/main/javascript) package, which is a Node.js binding for [USearch](https://github.com/unum-cloud/usearch):

```bash
npm install -S usearch
# or: yarn add usearch
# or: pnpm add usearch
```

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or: yarn add @langchain/openai @langchain/community
# or: pnpm add @langchain/openai @langchain/community
```

Usage[​](#usage "Direct link to Usage")
---------------------------------------

### Create a new index from texts[​](#create-a-new-index-from-texts "Direct link to Create a new index from texts")

```typescript
import { USearch } from "@langchain/community/vectorstores/usearch";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await USearch.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
```

#### API Reference:

* [USearch](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_usearch.USearch.html) from `@langchain/community/vectorstores/usearch`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`

### Create a new index from a loader[​](#create-a-new-index-from-a-loader "Direct link to Create a new index from a loader")

```typescript
import { USearch } from "@langchain/community/vectorstores/usearch";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await USearch.fromDocuments(docs, new OpenAIEmbeddings());

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
```

#### API Reference:

* [USearch](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_usearch.USearch.html) from `@langchain/community/vectorstores/usearch`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
https://js.langchain.com/v0.2/docs/integrations/vectorstores/vectara
Vectara
=======

Vectara is a platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. You can use Vectara as a vector store with LangChain.js.
👉 Embeddings Included[​](#-embeddings-included "Direct link to 👉 Embeddings Included")
----------------------------------------------------------------------------------------

Vectara uses its own embeddings under the hood, so you don't have to provide any yourself or call another service to obtain embeddings. This also means that if you provide your own embeddings, they'll be a no-op.

```typescript
const store = await VectaraStore.fromTexts(
  ["hello world", "hi there"],
  [{ foo: "bar" }, { foo: "baz" }],
  // This won't have an effect. Provide a FakeEmbeddings instance instead for clarity.
  new OpenAIEmbeddings(),
  args
);
```

Setup[​](#setup "Direct link to Setup")
---------------------------------------

You'll need to:

* Create a [free Vectara account](https://vectara.com/integrations/langchain).
* Create a [corpus](https://docs.vectara.com/docs/console-ui/creating-a-corpus) to store your data.
* Create an [API key](https://docs.vectara.com/docs/common-use-cases/app-authn-authz/api-keys) with QueryService and IndexService access so you can access this corpus.

Configure your `.env` file or provide args to connect LangChain to your Vectara corpus:

```bash
VECTARA_CUSTOMER_ID=your_customer_id
VECTARA_CORPUS_ID=your_corpus_id
VECTARA_API_KEY=your-vectara-api-key
```

Note that you can provide multiple corpus IDs separated by commas for querying multiple corpora at once. For example: `VECTARA_CORPUS_ID=3,8,9,43`. For indexing multiple corpora, you'll need to create a separate VectaraStore instance for each corpus.
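The comma-separated corpus IDs described above have to be turned into numbers before being handed to the store. A minimal sketch, assuming the `customerId`/`corpusId`/`apiKey` constructor options shown in the usage example below (the `parseCorpusIds` helper is illustrative, not part of the library):

```typescript
// Turn the comma-separated env value (e.g. "3,8,9,43") into numeric corpus IDs.
const parseCorpusIds = (raw: string): number[] =>
  raw.split(",").map((id) => Number(id.trim()));

console.log(parseCorpusIds("3,8,9,43")); // [ 3, 8, 9, 43 ]

// For indexing multiple corpora, one store per ID would be created
// (hypothetical wiring, assuming the constructor options shown below):
// const stores = parseCorpusIds(process.env.VECTARA_CORPUS_ID!).map(
//   (corpusId) =>
//     new VectaraStore({
//       customerId: Number(process.env.VECTARA_CUSTOMER_ID),
//       corpusId,
//       apiKey: String(process.env.VECTARA_API_KEY),
//     })
// );
```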
Usage[​](#usage "Direct link to Usage")
---------------------------------------

```typescript
import { VectaraStore } from "@langchain/community/vectorstores/vectara";
import { VectaraSummaryRetriever } from "@langchain/community/retrievers/vectara_summary";
import { Document } from "@langchain/core/documents";

// Create the Vectara store.
const store = new VectaraStore({
  customerId: Number(process.env.VECTARA_CUSTOMER_ID),
  corpusId: Number(process.env.VECTARA_CORPUS_ID),
  apiKey: String(process.env.VECTARA_API_KEY),
  verbose: true,
});

// Add two documents with some metadata.
const doc_ids = await store.addDocuments([
  new Document({
    pageContent: "Do I dare to eat a peach?",
    metadata: {
      foo: "baz",
    },
  }),
  new Document({
    pageContent: "In the room the women come and go talking of Michelangelo",
    metadata: {
      foo: "bar",
    },
  }),
]);

// Perform a similarity search.
const resultsWithScore = await store.similaritySearchWithScore(
  "What were the women talking about?",
  1,
  {
    lambda: 0.025,
  }
);

// Print the results.
console.log(JSON.stringify(resultsWithScore, null, 2));
/*
[
  [
    {
      "pageContent": "In the room the women come and go talking of Michelangelo",
      "metadata": {
        "lang": "eng",
        "offset": "0",
        "len": "57",
        "foo": "bar"
      }
    },
    0.4678752
  ]
]
*/

const retriever = new VectaraSummaryRetriever({ vectara: store, topK: 3 });
const documents = await retriever.invoke("What were the women talking about?");

console.log(JSON.stringify(documents, null, 2));
/*
[
  {
    "pageContent": "<b>In the room the women come and go talking of Michelangelo</b>",
    "metadata": {
      "lang": "eng",
      "offset": "0",
      "len": "57",
      "foo": "bar"
    }
  },
  {
    "pageContent": "<b>In the room the women come and go talking of Michelangelo</b>",
    "metadata": {
      "lang": "eng",
      "offset": "0",
      "len": "57",
      "foo": "bar"
    }
  },
  {
    "pageContent": "<b>In the room the women come and go talking of Michelangelo</b>",
    "metadata": {
      "lang": "eng",
      "offset": "0",
      "len": "57",
      "foo": "bar"
    }
  }
]
*/

// Delete the documents.
await store.deleteDocuments(doc_ids);
```

#### API Reference:
* [VectaraStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_vectara.VectaraStore.html) from `@langchain/community/vectorstores/vectara`
* [VectaraSummaryRetriever](https://v02.api.js.langchain.com/classes/langchain_community_retrievers_vectara_summary.VectaraSummaryRetriever.html) from `@langchain/community/retrievers/vectara_summary`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`

Note that `lambda` is a parameter related to Vectara's hybrid search capability, providing a tradeoff between neural search and boolean/exact match as described [here](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching). We recommend a default value of 0.025, while providing a way for advanced users to customize this value if needed.

APIs[​](#apis "Direct link to APIs")
------------------------------------

Vectara's LangChain vector store consumes Vectara's core APIs:

* [Indexing API](https://docs.vectara.com/docs/indexing-apis/indexing) for storing documents in a Vectara corpus.
* [Search API](https://docs.vectara.com/docs/search-apis/search) for querying this data. This API supports hybrid search.

* * *

#### Was this page helpful?

You can leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
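As a purely illustrative mental model of the `lambda` tradeoff — this is *not* Vectara's actual scoring formula, which is computed server-side — you can picture it as a linear blend between a neural relevance score and a lexical/exact-match score, where `lambda: 0` is fully neural and higher values weight lexical matching more:

```typescript
// Illustrative only: a linear blend showing the role lambda plays in
// hybrid search. Vectara computes its real scores server-side.
function hybridScore(neural: number, lexical: number, lambda: number): number {
  return (1 - lambda) * neural + lambda * lexical;
}

// With the recommended lambda of 0.025, the neural score dominates.
const blended = hybridScore(0.9, 0.2, 0.025);
```

This is why the recommended small default still lets exact keyword matches nudge rankings without overwhelming semantic similarity.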
Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.2/docs/integrations/document_transformers/openai_metadata_tagger
OpenAI functions metadata tagger
================================

It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious.

The `MetadataTagger` document transformer automates this process by extracting metadata from each provided document according to a provided schema. It uses a configurable OpenAI Functions-powered chain under the hood, so if you pass a custom LLM instance, it must be an OpenAI model with functions support.

**Note:** This document transformer works best with complete documents, so it's best to run it on whole documents first, before doing any other splitting or processing!
### Usage[​](#usage "Direct link to Usage")

For example, let's say you wanted to index a set of movie reviews. You could initialize the document transformer as follows:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`

```typescript
import { z } from "zod";
import { createMetadataTaggerFromZod } from "langchain/document_transformers/openai_functions";
import { ChatOpenAI } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

const zodSchema = z.object({
  movie_title: z.string(),
  critic: z.string(),
  tone: z.enum(["positive", "negative"]),
  rating: z
    .optional(z.number())
    .describe("The number of stars the critic rated the movie"),
});

const metadataTagger = createMetadataTaggerFromZod(zodSchema, {
  llm: new ChatOpenAI({ model: "gpt-3.5-turbo" }),
});

const documents = [
  new Document({
    pageContent:
      "Review of The Bee Movie\nBy Roger Ebert\nThis is the greatest movie ever made. 4 out of 5 stars.",
  }),
  new Document({
    pageContent:
      "Review of The Godfather\nBy Anonymous\n\nThis movie was super boring. 1 out of 5 stars.",
    metadata: { reliable: false },
  }),
];

const taggedDocuments = await metadataTagger.transformDocuments(documents);

console.log(taggedDocuments);
/*
  [
    Document {
      pageContent: 'Review of The Bee Movie\n' +
        'By Roger Ebert\n' +
        'This is the greatest movie ever made. 4 out of 5 stars.',
      metadata: {
        movie_title: 'The Bee Movie',
        critic: 'Roger Ebert',
        tone: 'positive',
        rating: 4
      }
    },
    Document {
      pageContent: 'Review of The Godfather\n' +
        'By Anonymous\n' +
        '\n' +
        'This movie was super boring. 1 out of 5 stars.',
      metadata: {
        movie_title: 'The Godfather',
        critic: 'Anonymous',
        tone: 'negative',
        rating: 1,
        reliable: false
      }
    }
  ]
*/
```

#### API Reference:

* [createMetadataTaggerFromZod](https://v02.api.js.langchain.com/functions/langchain_document_transformers_openai_functions.createMetadataTaggerFromZod.html) from `langchain/document_transformers/openai_functions`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`

There is an additional `createMetadataTagger` method that accepts a valid JSON Schema object as well.

### Customization[​](#customization "Direct link to Customization")

You can pass the underlying tagging chain the standard LLMChain arguments in the second options parameter. For example, if you wanted to ask the LLM to focus on specific details in the input documents, or extract metadata in a certain style, you could pass in a custom prompt:

```typescript
import { z } from "zod";
import { createMetadataTaggerFromZod } from "langchain/document_transformers/openai_functions";
import { ChatOpenAI } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { PromptTemplate } from "@langchain/core/prompts";

const taggingChainTemplate = `Extract the desired information from the following passage.

Anonymous critics are actually Roger Ebert.

Passage:
{input}`;

const zodSchema = z.object({
  movie_title: z.string(),
  critic: z.string(),
  tone: z.enum(["positive", "negative"]),
  rating: z
    .optional(z.number())
    .describe("The number of stars the critic rated the movie"),
});

const metadataTagger = createMetadataTaggerFromZod(zodSchema, {
  llm: new ChatOpenAI({ model: "gpt-3.5-turbo" }),
  prompt: PromptTemplate.fromTemplate(taggingChainTemplate),
});

const documents = [
  new Document({
    pageContent:
      "Review of The Bee Movie\nBy Roger Ebert\nThis is the greatest movie ever made. 4 out of 5 stars.",
  }),
  new Document({
    pageContent:
      "Review of The Godfather\nBy Anonymous\n\nThis movie was super boring. 1 out of 5 stars.",
    metadata: { reliable: false },
  }),
];

const taggedDocuments = await metadataTagger.transformDocuments(documents);

console.log(taggedDocuments);
/*
  [
    Document {
      pageContent: 'Review of The Bee Movie\n' +
        'By Roger Ebert\n' +
        'This is the greatest movie ever made. 4 out of 5 stars.',
      metadata: {
        movie_title: 'The Bee Movie',
        critic: 'Roger Ebert',
        tone: 'positive',
        rating: 4
      }
    },
    Document {
      pageContent: 'Review of The Godfather\n' +
        'By Anonymous\n' +
        '\n' +
        'This movie was super boring. 1 out of 5 stars.',
      metadata: {
        movie_title: 'The Godfather',
        critic: 'Roger Ebert',
        tone: 'negative',
        rating: 1,
        reliable: false
      }
    }
  ]
*/
```

#### API Reference:

* [createMetadataTaggerFromZod](https://v02.api.js.langchain.com/functions/langchain_document_transformers_openai_functions.createMetadataTaggerFromZod.html) from `langchain/document_transformers/openai_functions`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
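For the JSON Schema variant, here is a hand-written schema roughly equivalent to the Zod schema used in the examples — the exact shape `createMetadataTagger` expects is an assumption here, so check its API reference before relying on it:

```typescript
// Hand-written JSON Schema roughly equivalent to the Zod schema above.
// The exact shape createMetadataTagger expects is an assumption for
// illustration; consult its API reference for specifics.
const jsonSchema = {
  type: "object",
  properties: {
    movie_title: { type: "string" },
    critic: { type: "string" },
    tone: { type: "string", enum: ["positive", "negative"] },
    rating: {
      type: "number",
      description: "The number of stars the critic rated the movie",
    },
  },
  required: ["movie_title", "critic", "tone"],
};
```

Using plain JSON Schema can be convenient when the schema is loaded from configuration rather than defined in code.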
https://js.langchain.com/v0.2/docs/integrations/vectorstores/voy
Voy
===

[Voy](https://github.com/tantaraio/voy) is a WASM vector similarity search engine written in Rust. It's supported in non-Node environments like browsers. You can use Voy as a vector store with LangChain.js.
### Install Voy[​](#install-voy "Direct link to Install Voy")

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install @langchain/openai voy-search @langchain/community`
* Yarn: `yarn add @langchain/openai voy-search @langchain/community`
* pnpm: `pnpm add @langchain/openai voy-search @langchain/community`

Usage[​](#usage "Direct link to Usage")
---------------------------------------

```typescript
import { VoyVectorStore } from "@langchain/community/vectorstores/voy";
import { Voy as VoyClient } from "voy-search";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

// Create the Voy client using the library.
const voyClient = new VoyClient();
// Create embeddings.
const embeddings = new OpenAIEmbeddings();
// Create the Voy store.
const store = new VoyVectorStore(voyClient, embeddings);

// Add two documents with some metadata.
await store.addDocuments([
  new Document({
    pageContent: "How has life been treating you?",
    metadata: {
      foo: "Mike",
    },
  }),
  new Document({
    pageContent: "And I took it personally...",
    metadata: {
      foo: "Testing",
    },
  }),
]);

const model = new OpenAIEmbeddings();
const query = await model.embedQuery("And I took it personally");

// Perform a similarity search.
const resultsWithScore = await store.similaritySearchVectorWithScore(query, 1);

// Print the results.
console.log(JSON.stringify(resultsWithScore, null, 2));
/*
  [
    [
      {
        "pageContent": "And I took it personally...",
        "metadata": {
          "foo": "Testing"
        }
      },
      0
    ]
  ]
*/
```

#### API Reference:

* [VoyVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_voy.VoyVectorStore.html) from `@langchain/community/vectorstores/voy`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
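The score of `0` in the output above indicates an exact match: the query text is essentially identical to a stored document, so their embeddings coincide. As a mental model only — Voy's internal metric is an implementation detail we're not asserting here — a distance-style score reaches zero when the query vector equals a stored vector, as with cosine distance:

```typescript
// Illustrative cosine distance: 0 for identical directions, up to 2 for
// opposite ones. Not a claim about Voy's internal scoring.
function cosineDistance(a: number[], b: number[]): number {
  const dot = a.reduce((sum, ai, i) => sum + ai * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return 1 - dot / (norm(a) * norm(b));
}

const sameDirection = cosineDistance([0.1, 0.2, 0.3], [0.1, 0.2, 0.3]); // ~0
```

Lower scores therefore mean closer matches when a store reports a distance rather than a similarity.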
https://js.langchain.com/v0.2/docs/integrations/llms/openai
OpenAI
======

Here's how you can initialize an `OpenAI` LLM instance:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`

tip

We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.

```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
  model: "gpt-3.5-turbo-instruct", // Defaults to "gpt-3.5-turbo-instruct" if no model provided.
  temperature: 0.9,
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```

If you're part of an organization, you can set `process.env.OPENAI_ORGANIZATION` to your OpenAI organization id, or pass it in as `organization` when initializing the model.

Custom URLs[​](#custom-urls "Direct link to Custom URLs")
---------------------------------------------------------

You can customize the base URL the SDK sends requests to by passing a `configuration` parameter like this:

```typescript
const model = new OpenAI({
  temperature: 0.9,
  configuration: {
    baseURL: "https://your_custom_url.com",
  },
});
```

You can also pass other `ClientOptions` parameters accepted by the official SDK.

If you are hosting on Azure OpenAI, see the [dedicated page instead](/v0.2/docs/integrations/llms/azure).
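For instance, you might derive the base URL from environment variables so the same code can point at a proxy in one deployment and the public API in another. `resolveBaseURL` here is a hypothetical helper written for this sketch, not part of `@langchain/openai`:

```typescript
// Hypothetical helper: pick a base URL from an env-style record,
// falling back to the official OpenAI endpoint when none is set.
function resolveBaseURL(env: Record<string, string | undefined>): string {
  return env.OPENAI_BASE_URL ?? "https://api.openai.com/v1";
}

const baseURL = resolveBaseURL({ OPENAI_BASE_URL: "https://your_custom_url.com" });
// Then: new OpenAI({ configuration: { baseURL } })
```

Keeping the fallback explicit makes it obvious which endpoint requests go to when the override is unset.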
https://js.langchain.com/v0.2/docs/integrations/chat/openai
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) You are viewing the **preview** v0.2 docs. View the **stable** v0.1 docs [here](/v0.1/docs/get_started/introduction/). Leave feedback on the v0.2 docs [here](https://github.com/langchain-ai/langchainjs/discussions/5386). [ ![🦜️🔗 Langchain](/v0.2/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.2/img/brand/wordmark-dark.png) ](/v0.2/)[Integrations](/v0.2/docs/integrations/platforms/)[API Reference](https://v02.api.js.langchain.com) [More](#) * [People](/v0.2/docs/people/) * [Community](/v0.2/docs/community) * [Tutorials](/v0.2/docs/additional_resources/tutorials) * [Contributing](/v0.2/docs/contributing) [v0.2](#) * [v0.2](/v0.2/docs/introduction) * [v0.1](https://js.langchain.com/v0.1/docs/get_started/introduction) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.2/docs/integrations/platforms/) * [Providers](/v0.2/docs/integrations/platforms/) * [Anthropic](/v0.2/docs/integrations/platforms/anthropic) * [AWS](/v0.2/docs/integrations/platforms/aws) * [Google](/v0.2/docs/integrations/platforms/google) * [Microsoft](/v0.2/docs/integrations/platforms/microsoft) * 
[OpenAI](/v0.2/docs/integrations/platforms/openai) * [Components](/v0.2/docs/integrations/components) * [Chat models](/v0.2/docs/integrations/chat/) * [Chat models](/v0.2/docs/integrations/chat/) * [Alibaba Tongyi](/v0.2/docs/integrations/chat/alibaba_tongyi) * [Anthropic](/v0.2/docs/integrations/chat/anthropic) * [Anthropic Tools](/v0.2/docs/integrations/chat/anthropic_tools) * [Azure OpenAI](/v0.2/docs/integrations/chat/azure) * [Baidu Wenxin](/v0.2/docs/integrations/chat/baidu_wenxin) * [Bedrock](/v0.2/docs/integrations/chat/bedrock) * [Cloudflare Workers AI](/v0.2/docs/integrations/chat/cloudflare_workersai) * [Cohere](/v0.2/docs/integrations/chat/cohere) * [Fake LLM](/v0.2/docs/integrations/chat/fake) * [Fireworks](/v0.2/docs/integrations/chat/fireworks) * [Friendli](/v0.2/docs/integrations/chat/friendli) * [Google GenAI](/v0.2/docs/integrations/chat/google_generativeai) * [(Legacy) Google PaLM/VertexAI](/v0.2/docs/integrations/chat/google_palm) * [Google Vertex AI](/v0.2/docs/integrations/chat/google_vertex_ai) * [Groq](/v0.2/docs/integrations/chat/groq) * [Llama CPP](/v0.2/docs/integrations/chat/llama_cpp) * [Minimax](/v0.2/docs/integrations/chat/minimax) * [Mistral AI](/v0.2/docs/integrations/chat/mistral) * [NIBittensorChatModel](/v0.2/docs/integrations/chat/ni_bittensor) * [Ollama](/v0.2/docs/integrations/chat/ollama) * [Ollama Functions](/v0.2/docs/integrations/chat/ollama_functions) * [OpenAI](/v0.2/docs/integrations/chat/openai) * [PremAI](/v0.2/docs/integrations/chat/premai) * [PromptLayer OpenAI](/v0.2/docs/integrations/chat/prompt_layer_openai) * [TogetherAI](/v0.2/docs/integrations/chat/togetherai) * [WebLLM](/v0.2/docs/integrations/chat/web_llm) * [YandexGPT](/v0.2/docs/integrations/chat/yandex) * [ZhipuAI](/v0.2/docs/integrations/chat/zhipuai) * [LLMs](/v0.2/docs/integrations/llms/) * [Embedding models](/v0.2/docs/integrations/text_embedding) * [Document loaders](/v0.2/docs/integrations/document_loaders) * [Document 
ChatOpenAI
==========

You can use OpenAI's chat models as follows:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`

tip

We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  temperature: 0.9,
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
});

// You can also pass tools or functions to the model, learn more here
// https://platform.openai.com/docs/guides/gpt/function-calling
const modelForFunctionCalling = new ChatOpenAI({
  model: "gpt-4",
  temperature: 0,
});

await modelForFunctionCalling.invoke(
  [new HumanMessage("What is the weather in New York?")],
  {
    functions: [
      {
        name: "get_current_weather",
        description: "Get the current weather in a given location",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The city and state, e.g. San Francisco, CA",
            },
            unit: { type: "string", enum: ["celsius", "fahrenheit"] },
          },
          required: ["location"],
        },
      },
    ],
    // You can set the `function_call` arg to force the model to use a function
    function_call: {
      name: "get_current_weather",
    },
  }
);
/*
AIMessage {
  text: '',
  name: undefined,
  additional_kwargs: {
    function_call: {
      name: 'get_current_weather',
      arguments: '{\n  "location": "New York"\n}'
    }
  }
}
*/

// Coerce response type with JSON mode.
// Requires "gpt-4-1106-preview" or later
const jsonModeModel = new ChatOpenAI({
  model: "gpt-4-1106-preview",
  maxTokens: 128,
}).bind({
  response_format: {
    type: "json_object",
  },
});

// Must be invoked with a system message containing the string "JSON":
// https://platform.openai.com/docs/guides/text-generation/json-mode
const res = await jsonModeModel.invoke([
  ["system", "Only return JSON"],
  ["human", "Hi there!"],
]);
console.log(res);
/*
AIMessage {
  content: '{\n  "response": "How can I assist you today?"\n}',
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
*/
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

If you're part of an organization, you can set `process.env.OPENAI_ORGANIZATION` with your OpenAI organization id, or pass it in as `organization` when initializing the model.

Multimodal messages
-------------------

info

This feature is currently in preview. The message schema may change in future releases.

OpenAI supports interleaving images with text in input messages with its `gpt-4-vision-preview` model.
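Local images are typically passed as base64-encoded data URLs, as the full example below does inline. A minimal helper for that encoding might look like the following sketch — `toDataUrl` is illustrative, not a LangChain export:

```typescript
// Hypothetical helper (not part of LangChain): encode raw image bytes as the
// base64 data URL format OpenAI expects for locally-loaded images.
function toDataUrl(imageData: Uint8Array, mimeType = "image/jpeg"): string {
  return `data:${mimeType};base64,${Buffer.from(imageData).toString("base64")}`;
}

// JPEG files begin with the bytes ff d8 ff, which base64-encode to "/9j/".
const url = toDataUrl(Buffer.from([0xff, 0xd8, 0xff]));
console.log(url); // data:image/jpeg;base64,/9j/
```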
Here's an example of how this looks:

```typescript
import * as fs from "node:fs/promises";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const imageData = await fs.readFile("./hotdog.jpg");
const chat = new ChatOpenAI({
  model: "gpt-4-vision-preview",
  maxTokens: 1024,
});
const message = new HumanMessage({
  content: [
    {
      type: "text",
      text: "What's in this image?",
    },
    {
      type: "image_url",
      image_url: {
        url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
      },
    },
  ],
});

const res = await chat.invoke([message]);
console.log({ res });
/*
{
  res: AIMessage {
    content: 'The image shows a hot dog, which consists of a grilled or steamed sausage served in the slit of a partially sliced bun. This particular hot dog appears to be plain, without any visible toppings or condiments.',
    additional_kwargs: { function_call: undefined }
  }
}
*/

const hostedImageMessage = new HumanMessage({
  content: [
    {
      type: "text",
      text: "What does this image say?",
    },
    {
      type: "image_url",
      image_url:
        "https://www.freecodecamp.org/news/content/images/2023/05/Screenshot-2023-05-29-at-5.40.38-PM.png",
    },
  ],
});
const res2 = await chat.invoke([hostedImageMessage]);
console.log({ res2 });
/*
{
  res2: AIMessage {
    content: 'The image contains the text "LangChain" with a graphical depiction of a parrot on the left and two interlocked rings on the left side of the text.',
    additional_kwargs: { function_call: undefined }
  }
}
*/

const lowDetailImage = new HumanMessage({
  content: [
    {
      type: "text",
      text: "Summarize the contents of this image.",
    },
    {
      type: "image_url",
      image_url: {
        url: "https://blog.langchain.dev/content/images/size/w1248/format/webp/2023/10/Screenshot-2023-10-03-at-4.55.29-PM.png",
        detail: "low",
      },
    },
  ],
});
const res3 = await chat.invoke([lowDetailImage]);
console.log({ res3 });
/*
{
  res3: AIMessage {
    content: 'The image shows a user interface for a service named "WebLangChain," which appears to be powered by "Twalv." It includes a text box with the prompt "Ask me anything about anything!" suggesting that users can enter questions on various topics. Below the text box, there are example questions that users might ask, such as "what is langchain?", "history of mesopotamia," "how to build a discord bot," "leonardo dicaprio girlfriend," "fun gift ideas for software engineers," "how does a prism separate light," and "what beer is best." The interface also includes a round blue button with a paper plane icon, presumably to submit the question. The overall theme of the image is dark with blue accents.',
    additional_kwargs: { function_call: undefined }
  }
}
*/
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

Tool calling
------------

info

This feature is currently only available for `gpt-3.5-turbo-1106` and `gpt-4-1106-preview` models.

More recent OpenAI chat models support calling multiple functions to get all required data to answer a question.
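Each tool call the model emits carries its arguments as a JSON string, which your code parses and routes to a local handler before replying with `ToolMessage`s. That plumbing can be sketched as a generic dispatcher — the `ToolCall` shape and `dispatchToolCalls` below are illustrative names, not LangChain APIs:

```typescript
// Hypothetical, trimmed-down shape of one OpenAI tool call.
interface ToolCall {
  id: string;
  function: { name: string; arguments: string }; // `arguments` is a JSON string
}

// Map tool names to local handlers, parse the JSON arguments, and return
// (tool_call_id, content) pairs ready to wrap in ToolMessages.
function dispatchToolCalls(
  toolCalls: ToolCall[],
  handlers: Record<string, (args: Record<string, unknown>) => string>
): { tool_call_id: string; content: string }[] {
  return toolCalls.map((call) => {
    const handler = handlers[call.function.name];
    if (!handler) throw new Error(`Unknown tool: ${call.function.name}`);
    const args = JSON.parse(call.function.arguments);
    return { tool_call_id: call.id, content: handler(args) };
  });
}

const results = dispatchToolCalls(
  [
    {
      id: "call_1",
      function: {
        name: "get_current_weather",
        arguments: '{"location": "Tokyo"}',
      },
    },
  ],
  {
    get_current_weather: (args) =>
      JSON.stringify({ location: args.location, temperature: "10" }),
  }
);
console.log(results[0].content); // {"location":"Tokyo","temperature":"10"}
```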
Here's an example of how a conversation turn with this functionality might look:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ToolMessage } from "@langchain/core/messages";

// Mocked out function, could be a database/API call in production
function getCurrentWeather(location: string, _unit?: string) {
  if (location.toLowerCase().includes("tokyo")) {
    return JSON.stringify({ location, temperature: "10", unit: "celsius" });
  } else if (location.toLowerCase().includes("san francisco")) {
    return JSON.stringify({
      location,
      temperature: "72",
      unit: "fahrenheit",
    });
  } else {
    return JSON.stringify({ location, temperature: "22", unit: "celsius" });
  }
}

// Bind function to the model as a tool
const chat = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  maxTokens: 128,
}).bind({
  tools: [
    {
      type: "function",
      function: {
        name: "get_current_weather",
        description: "Get the current weather in a given location",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The city and state, e.g. San Francisco, CA",
            },
            unit: { type: "string", enum: ["celsius", "fahrenheit"] },
          },
          required: ["location"],
        },
      },
    },
  ],
  tool_choice: "auto",
});

// Ask initial question that requires multiple tool calls
const res = await chat.invoke([
  ["human", "What's the weather like in San Francisco, Tokyo, and Paris?"],
]);
console.log(res.additional_kwargs.tool_calls);
/*
[
  {
    id: 'call_IiOsjIZLWvnzSh8iI63GieUB',
    type: 'function',
    function: {
      name: 'get_current_weather',
      arguments: '{"location": "San Francisco", "unit": "celsius"}'
    }
  },
  {
    id: 'call_blQ3Oz28zSfvS6Bj6FPEUGA1',
    type: 'function',
    function: {
      name: 'get_current_weather',
      arguments: '{"location": "Tokyo", "unit": "celsius"}'
    }
  },
  {
    id: 'call_Kpa7FaGr3F1xziG8C6cDffsg',
    type: 'function',
    function: {
      name: 'get_current_weather',
      arguments: '{"location": "Paris", "unit": "celsius"}'
    }
  }
]
*/

// Format the results from calling the tool calls back to OpenAI as ToolMessages
const toolMessages = res.additional_kwargs.tool_calls?.map((toolCall) => {
  const toolCallResult = getCurrentWeather(
    JSON.parse(toolCall.function.arguments).location
  );
  return new ToolMessage({
    tool_call_id: toolCall.id,
    name: toolCall.function.name,
    content: toolCallResult,
  });
});

// Send the results back as the next step in the conversation
const finalResponse = await chat.invoke([
  ["human", "What's the weather like in San Francisco, Tokyo, and Paris?"],
  res,
  ...(toolMessages ?? []),
]);
console.log(finalResponse);
/*
AIMessage {
  content: 'The current weather in:\n' +
    '- San Francisco is 72°F\n' +
    '- Tokyo is 10°C\n' +
    '- Paris is 22°C',
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
*/
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [ToolMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages_tool.ToolMessage.html) from `@langchain/core/messages`

### `.withStructuredOutput({ ... })`

info

The `.withStructuredOutput` method is in beta. It is actively being worked on, so the API may change.

You can also use the `.withStructuredOutput({ ... })` method to coerce `ChatOpenAI` into returning a structured output. The method allows for passing in either a Zod object or a valid JSON schema (like what is returned from [`zodToJsonSchema`](https://www.npmjs.com/package/zod-to-json-schema)).

Using the method is simple. Just define your LLM and call `.withStructuredOutput({ ... })` on it, passing the desired schema.

Here is an example using a Zod schema and the `functionCalling` mode (the default mode):

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const model = new ChatOpenAI({
  temperature: 0,
  model: "gpt-4-turbo-preview",
});

const calculatorSchema = z.object({
  operation: z.enum(["add", "subtract", "multiply", "divide"]),
  number1: z.number(),
  number2: z.number(),
});

const modelWithStructuredOutput = model.withStructuredOutput(calculatorSchema);

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are VERY bad at math and must always use a calculator."],
  ["human", "Please help me!! What is 2 + 2?"],
]);
const chain = prompt.pipe(modelWithStructuredOutput);
const result = await chain.invoke({});
console.log(result);
/*
{ operation: 'add', number1: 2, number2: 2 }
*/

/**
 * You can also specify 'includeRaw' to return the parsed
 * and raw output in the result.
 */
const includeRawModel = model.withStructuredOutput(calculatorSchema, {
  name: "calculator",
  includeRaw: true,
});
const includeRawChain = prompt.pipe(includeRawModel);
const includeRawResult = await includeRawChain.invoke({});
console.log(JSON.stringify(includeRawResult, null, 2));
/*
{
  "raw": {
    "kwargs": {
      "content": "",
      "additional_kwargs": {
        "tool_calls": [
          {
            "id": "call_A8yzNBDMiRrCB8dFYqJLhYW7",
            "type": "function",
            "function": {
              "name": "calculator",
              "arguments": "{\"operation\":\"add\",\"number1\":2,\"number2\":2}"
            }
          }
        ]
      }
    }
  },
  "parsed": {
    "operation": "add",
    "number1": 2,
    "number2": 2
  }
}
*/
```

#### API Reference:

* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`

Additionally, you can pass in an OpenAI function definition or JSON schema directly:

info

If using `jsonMode` as the `method` you must include context in your prompt about the structured output you want. This _must_ include the keyword: `JSON`.
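Whichever method you use, the parsed result is plain data for your application logic to consume. For instance, a small hypothetical evaluator for the calculator schema used in these examples (`evaluateCalculator` is illustrative, not part of LangChain):

```typescript
// Mirrors the calculator schema from the structured-output examples.
type CalculatorResult = {
  operation: "add" | "subtract" | "multiply" | "divide";
  number1: number;
  number2: number;
};

// Apply the operation the model chose to the two operands it extracted.
function evaluateCalculator({
  operation,
  number1,
  number2,
}: CalculatorResult): number {
  switch (operation) {
    case "add":
      return number1 + number2;
    case "subtract":
      return number1 - number2;
    case "multiply":
      return number1 * number2;
    case "divide":
      return number1 / number2;
    default:
      throw new Error(`Unknown operation: ${operation}`);
  }
}

// e.g. the parsed output `{ operation: 'add', number1: 2, number2: 2 }`:
console.log(evaluateCalculator({ operation: "add", number1: 2, number2: 2 })); // 4
```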
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  temperature: 0,
  model: "gpt-4-turbo-preview",
});

const calculatorSchema = {
  type: "object",
  properties: {
    operation: {
      type: "string",
      enum: ["add", "subtract", "multiply", "divide"],
    },
    number1: { type: "number" },
    number2: { type: "number" },
  },
  required: ["operation", "number1", "number2"],
};

// Default mode is "functionCalling"
const modelWithStructuredOutput = model.withStructuredOutput(calculatorSchema);

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are VERY bad at math and must always use a calculator.
Respond with a JSON object containing three keys:
'operation': the type of operation to execute, either 'add', 'subtract', 'multiply' or 'divide',
'number1': the first number to operate on,
'number2': the second number to operate on.`,
  ],
  ["human", "Please help me!! What is 2 + 2?"],
]);
const chain = prompt.pipe(modelWithStructuredOutput);
const result = await chain.invoke({});
console.log(result);
/*
{ operation: 'add', number1: 2, number2: 2 }
*/

/**
 * You can also specify 'includeRaw' to return the parsed
 * and raw output in the result, as well as a "name" field
 * to give the LLM additional context as to what you are generating.
 */
const includeRawModel = model.withStructuredOutput(calculatorSchema, {
  name: "calculator",
  includeRaw: true,
  method: "jsonMode",
});
const includeRawChain = prompt.pipe(includeRawModel);
const includeRawResult = await includeRawChain.invoke({});
console.log(JSON.stringify(includeRawResult, null, 2));
/*
{
  "raw": {
    "kwargs": {
      "content": "{\n  \"operation\": \"add\",\n  \"number1\": 2,\n  \"number2\": 2\n}",
      "additional_kwargs": {}
    }
  },
  "parsed": {
    "operation": "add",
    "number1": 2,
    "number2": 2
  }
}
*/
```

#### API Reference:

* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`

Custom URLs
-----------

You can customize the base URL the SDK sends requests to by passing a `configuration` parameter like this:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  temperature: 0.9,
  configuration: {
    baseURL: "https://your_custom_url.com",
  },
});

const message = await model.invoke("Hi there!");
console.log(message);
/*
AIMessage {
  content: 'Hello! How can I assist you today?',
  additional_kwargs: { function_call: undefined }
}
*/
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`

You can also pass other `ClientOptions` parameters accepted by the official SDK.

If you are hosting on Azure OpenAI, see the [dedicated page instead](/v0.2/docs/integrations/chat/azure).

Calling fine-tuned models
-------------------------

You can call fine-tuned OpenAI models by passing in the corresponding `model` parameter.
This generally takes the form of `ft:{OPENAI_MODEL_NAME}:{ORG_NAME}::{MODEL_ID}`. For example:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  temperature: 0.9,
  model: "ft:gpt-3.5-turbo-0613:{ORG_NAME}::{MODEL_ID}",
});

const message = await model.invoke("Hi there!");
console.log(message);
/*
AIMessage {
  content: 'Hello! How can I assist you today?',
  additional_kwargs: { function_call: undefined }
}
*/
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`

Generation metadata
-------------------

If you need additional information like logprobs or token usage, these will be returned directly in the `.invoke` response.

tip

Requires `@langchain/core` version >=0.1.48.

```typescript
import { ChatOpenAI } from "@langchain/openai";

// See https://cookbook.openai.com/examples/using_logprobs for details
const model = new ChatOpenAI({
  logprobs: true,
  // topLogprobs: 5,
});

const responseMessage = await model.invoke("Hi there!");
console.log(JSON.stringify(responseMessage, null, 2));
/*
{
  "lc": 1,
  "type": "constructor",
  "id": ["langchain_core", "messages", "AIMessage"],
  "kwargs": {
    "content": "Hello! How can I assist you today?",
    "additional_kwargs": {},
    "response_metadata": {
      "tokenUsage": {
        "completionTokens": 9,
        "promptTokens": 10,
        "totalTokens": 19
      },
      "finish_reason": "stop",
      "logprobs": {
        "content": [
          { "token": "Hello", "logprob": -0.0006793116, "bytes": [72, 101, 108, 108, 111], "top_logprobs": [] },
          { "token": "!", "logprob": -0.00011725161, "bytes": [33], "top_logprobs": [] },
          { "token": " How", "logprob": -0.000038457987, "bytes": [32, 72, 111, 119], "top_logprobs": [] },
          { "token": " can", "logprob": -0.00094290765, "bytes": [32, 99, 97, 110], "top_logprobs": [] },
          { "token": " I", "logprob": -0.0000013856493, "bytes": [32, 73], "top_logprobs": [] },
          { "token": " assist", "logprob": -0.14702488, "bytes": [32, 97, 115, 115, 105, 115, 116], "top_logprobs": [] },
          { "token": " you", "logprob": -0.000001147242, "bytes": [32, 121, 111, 117], "top_logprobs": [] },
          { "token": " today", "logprob": -0.000067901296, "bytes": [32, 116, 111, 100, 97, 121], "top_logprobs": [] },
          { "token": "?", "logprob": -0.000014974867, "bytes": [63], "top_logprobs": [] }
        ]
      }
    }
  }
}
*/
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`

### With callbacks

You can also use the callbacks system:

```typescript
import { ChatOpenAI } from "@langchain/openai";

// See https://cookbook.openai.com/examples/using_logprobs for details
const model = new ChatOpenAI({
  logprobs: true,
  // topLogprobs: 5,
});

const result = await model.invoke("Hi there!", {
  callbacks: [
    {
      handleLLMEnd(output) {
        console.log("GENERATION OUTPUT:", JSON.stringify(output, null, 2));
      },
    },
  ],
});
console.log("FINAL OUTPUT", result);
/*
GENERATION OUTPUT: {
  "generations": [
    [
      {
        "text": "Hello! How can I assist you today?",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": ["langchain_core", "messages", "AIMessage"],
          "kwargs": {
            "content": "Hello! How can I assist you today?",
            "additional_kwargs": {}
          }
        },
        "generationInfo": {
          "finish_reason": "stop",
          "logprobs": {
            "content": [
              { "token": "Hello", "logprob": -0.0010195904, "bytes": [72, 101, 108, 108, 111], "top_logprobs": [] },
              { "token": "!", "logprob": -0.0004447316, "bytes": [33], "top_logprobs": [] },
              { "token": " How", "logprob": -0.00006682846, "bytes": [32, 72, 111, 119], "top_logprobs": [] },
              ...
            ]
          }
        }
      }
    ]
  ],
  "llmOutput": {
    "tokenUsage": {
      "completionTokens": 9,
      "promptTokens": 10,
      "totalTokens": 19
    }
  }
}
FINAL OUTPUT AIMessage {
  content: 'Hello! How can I assist you today?',
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
*/
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`

### With `.generate()`

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

// See https://cookbook.openai.com/examples/using_logprobs for details
const model = new ChatOpenAI({
  logprobs: true,
  // topLogprobs: 5,
});

const generations = await model.invoke([new HumanMessage("Hi there!")]);
console.log(JSON.stringify(generations, null, 2));
/*
{
  "generations": [
    [
      {
        "text": "Hello! How can I assist you today?",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": ["langchain_core", "messages", "AIMessage"],
          "kwargs": {
            "content": "Hello! How can I assist you today?",
            "additional_kwargs": {}
          }
        },
        "generationInfo": {
          "finish_reason": "stop",
          "logprobs": {
            "content": [
              { "token": "Hello", "logprob": -0.0011337858, "bytes": [72, 101, 108, 108, 111], "top_logprobs": [] },
              { "token": "!", "logprob": -0.00044127836, "bytes": [33], "top_logprobs": [] },
              { "token": " How", "logprob": -0.000065994034, "bytes": [32, 72, 111, 119], "top_logprobs": [] },
              ...
            ]
          }
        }
      }
    ]
  ],
  "llmOutput": {
    "tokenUsage": {
      "completionTokens": 9,
      "promptTokens": 10,
      "totalTokens": 19
    }
  }
}
*/
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

* * *

Community

* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)

GitHub

* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)

More

* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)

Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.2/docs/integrations/tools/aiplugin-tool
ChatGPT Plugins
===============

This example shows how to use ChatGPT Plugins within LangChain abstractions.

Note 1: This currently only works for plugins with no auth.

Note 2: There are almost certainly other ways to do this; this is just a first pass. If you have better ideas, please open a PR!

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { RequestsGetTool, RequestsPostTool } from "langchain/tools";
import { AIPluginTool } from "@langchain/community/tools/aiplugin";

export const run = async () => {
  const tools = [
    new RequestsGetTool(),
    new RequestsPostTool(),
    await AIPluginTool.fromPluginUrl(
      "https://www.klarna.com/.well-known/ai-plugin.json"
    ),
  ];
  const executor = await initializeAgentExecutorWithOptions(
    tools,
    new ChatOpenAI({ temperature: 0 }),
    { agentType: "chat-zero-shot-react-description", verbose: true }
  );

  const result = await executor.invoke({
    input: "what t shirts are available in klarna?",
  });

  console.log({ result });
};
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [initializeAgentExecutorWithOptions](https://v02.api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents`
* [RequestsGetTool](https://v02.api.js.langchain.com/classes/langchain_tools.RequestsGetTool.html) from `langchain/tools`
* [RequestsPostTool](https://v02.api.js.langchain.com/classes/langchain_tools.RequestsPostTool.html) from `langchain/tools`
* [AIPluginTool](https://v02.api.js.langchain.com/classes/langchain_community_tools_aiplugin.AIPluginTool.html) from `@langchain/community/tools/aiplugin`

Example verbose output of a run:

```text
Entering new agent_executor chain...

Thought: Klarna is a payment provider, not a store. I need to check if there is a Klarna Shopping API that I can use to search for t-shirts.
Action:
{"action": "KlarnaProducts", "action_input": ""}

Usage Guide: Use the Klarna plugin to get relevant product suggestions for any shopping or researching purpose. The query to be sent should not include stopwords like articles, prepositions and determinants. The api works best when searching for words that are related to products, like their name, brand, model or category. Links will always be returned and should be shown to the user.

OpenAPI Spec: {"openapi":"3.0.1","info":{"version":"v0","title":"Open AI Klarna product Api"},"servers":[{"url":"https://www.klarna.com/us/shopping"}],"tags":[{"name":"open-ai-product-endpoint","description":"Open AI Product Endpoint. Query for products."}],"paths":{"/public/openai/v0/products":{"get":{"tags":["open-ai-product-endpoint"],"summary":"API for fetching Klarna product information","operationId":"productsUsingGET","parameters":[{"name":"q","in":"query","description":"query, must be between 2 and 100 characters","required":true,"schema":{"type":"string"}},{"name":"size","in":"query","description":"number of products returned","required":false,"schema":{"type":"integer"}},{"name":"budget","in":"query","description":"maximum price of the matching product in local currency, filters results","required":false,"schema":{"type":"integer"}}],"responses":{"200":{"description":"Products found","content":{"application/json":{"schema":{"$ref":"#/components/schemas/ProductResponse"}}}},"503":{"description":"one or more services are unavailable"}},"deprecated":false}}},"components":{"schemas":{"Product":{"type":"object","properties":{"attributes":{"type":"array","items":{"type":"string"}},"name":{"type":"string"},"price":{"type":"string"},"url":{"type":"string"}},"title":"Product"},"ProductResponse":{"type":"object","properties":{"products":{"type":"array","items":{"$ref":"#/components/schemas/Product"}}},"title":"ProductResponse"}}}}

Now that I know there is a Klarna Shopping API, I can use it to search for t-shirts. I will make a GET request to the API with the query parameter "t-shirt".
Action:
{"action": "requests_get", "action_input": "https://www.klarna.com/us/shopping/public/openai/v0/products?q=t-shirt"}

{"products":[{"name":"Psycho Bunny Mens Copa Gradient Logo Graphic Tee","url":"https://www.klarna.com/us/shopping/pl/cl10001/3203663222/Clothing/Psycho-Bunny-Mens-Copa-Gradient-Logo-Graphic-Tee/?source=openai","price":"$35.00","attributes":["Material:Cotton","Target Group:Man","Color:White,Blue,Black,Orange"]},{"name":"T-shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3203506327/Clothing/T-shirt/?source=openai","price":"$20.45","attributes":["Material:Cotton","Target Group:Man","Color:Gray,White,Blue,Black,Orange"]},{"name":"Palm Angels Bear T-shirt - Black","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201090513/Clothing/Palm-Angels-Bear-T-shirt-Black/?source=openai","price":"$168.36","attributes":["Material:Cotton","Target Group:Man","Color:Black"]},{"name":"Tommy Hilfiger Essential Flag Logo T-shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201840629/Clothing/Tommy-Hilfiger-Essential-Flag-Logo-T-shirt/?source=openai","price":"$22.52","attributes":["Material:Cotton","Target Group:Man","Color:Red,Gray,White,Blue,Black","Pattern:Solid Color","Environmental Attributes :Organic"]},{"name":"Coach Outlet Signature T Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3203005573/Clothing/Coach-Outlet-Signature-T-Shirt/?source=openai","price":"$75.00","attributes":["Material:Cotton","Target Group:Man","Color:Gray"]}]}

Finished chain.

{
  result: {
    output: 'The available t-shirts in Klarna are Psycho Bunny Mens Copa Gradient Logo Graphic Tee, T-shirt, Palm Angels Bear T-shirt - Black, Tommy Hilfiger Essential Flag Logo T-shirt, and Coach Outlet Signature T Shirt.',
    intermediateSteps: [ [Object], [Object] ]
  }
}
```
https://js.langchain.com/v0.2/docs/integrations/tools/dalle
Dall-E Tool
===========

The
Dall-E tool allows your agent to create images using OpenAI's Dall-E image generation tool.

Setup
-----

You will need an OpenAI API key, which you can get from the [OpenAI web site](https://openai.com); then set the `OPENAI_API_KEY` environment variable to the key you just created.

To use the Dall-E Tool, you need to install the LangChain OpenAI integration package:

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`

```typescript
/* eslint-disable no-process-env */
import { DallEAPIWrapper } from "@langchain/openai";

const tool = new DallEAPIWrapper({
  n: 1, // Default
  model: "dall-e-3", // Default
  apiKey: process.env.OPENAI_API_KEY, // Default
});

const imageURL = await tool.invoke("a painting of a cat");

console.log(imageURL);
```

#### API Reference:

* [DallEAPIWrapper](https://v02.api.js.langchain.com/classes/langchain_openai.DallEAPIWrapper.html) from `@langchain/openai`
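The setup section above asks you to set the `OPENAI_API_KEY` environment variable before running the example. A minimal shell sketch; the key value is a placeholder you must replace with your own key:

```shell
# Placeholder value - replace with the API key you created on the OpenAI site.
export OPENAI_API_KEY="your-openai-api-key"
```

With the variable exported, the `apiKey: process.env.OPENAI_API_KEY` default in the example picks it up automatically.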
https://js.langchain.com/v0.2/docs/integrations/tools/connery
Connery
Action Tool
===========

Using this tool, you can integrate an individual Connery Action into your LangChain agent.

note If you want to use more than one Connery Action in your agent, check out the [Connery Toolkit](/v0.2/docs/integrations/toolkits/connery) documentation.

What is Connery?
----------------

Connery is an open-source plugin infrastructure for AI. With Connery, you can easily create a custom plugin with a set of actions and seamlessly integrate them into your LangChain agent. Connery takes care of critical aspects such as runtime, authorization, secret management, access management, audit logs, and other vital features. Furthermore, Connery, supported by our community, provides a diverse collection of ready-to-use open-source plugins for added convenience.

Learn more about Connery:

* GitHub: [https://github.com/connery-io/connery](https://github.com/connery-io/connery)
* Documentation: [https://docs.connery.io](https://docs.connery.io)

Prerequisites
-------------

To use Connery Actions in your LangChain agent, you need to do some preparation:

1. Set up the Connery runner using the [Quickstart](https://docs.connery.io/docs/runner/quick-start/) guide.
2. Install all the plugins with the actions you want to use in your agent.
3. Set the environment variables `CONNERY_RUNNER_URL` and `CONNERY_RUNNER_API_KEY` so the toolkit can communicate with the Connery Runner.
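The last prerequisite above can be sketched in the shell as follows; both values are placeholders to replace with your own Connery Runner details:

```shell
# Placeholder values - replace with your own Connery Runner URL and API key.
export CONNERY_RUNNER_URL="https://your-connery-runner.example.com"
export CONNERY_RUNNER_API_KEY="your-api-key"
```

Alternatively, the example below sets these on `process.env` directly from code.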
Example of using Connery Action Tool
------------------------------------

### Setup

To use the Connery Action Tool, you need to install the following official peer dependency:

* npm: `npm install @langchain/community`
* Yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

### Usage

In the example below, we fetch the action by its ID from the Connery Runner and then call it with the specified parameters. Here, we use the ID of the **Send email** action from the [Gmail](https://github.com/connery-io/gmail) plugin.

info You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/c4b6723d-f91c-440c-8682-16ec8297a602/r).
```typescript
import { ConneryService } from "@langchain/community/tools/connery";
import { ChatOpenAI } from "@langchain/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";

// Specify your Connery Runner credentials.
process.env.CONNERY_RUNNER_URL = "";
process.env.CONNERY_RUNNER_API_KEY = "";

// Specify OpenAI API key.
process.env.OPENAI_API_KEY = "";

// Specify your email address to receive the emails from the examples below.
const recipientEmail = "test@example.com";

// Get the SendEmail action from the Connery Runner by ID.
const conneryService = new ConneryService();
const sendEmailAction = await conneryService.getAction(
  "CABC80BB79C15067CA983495324AE709"
);

// Run the action manually.
const manualRunResult = await sendEmailAction.invoke({
  recipient: recipientEmail,
  subject: "Test email",
  body: "This is a test email sent by Connery.",
});
console.log(manualRunResult);

// Run the action using the OpenAI Functions agent.
const llm = new ChatOpenAI({ temperature: 0 });
const agent = await initializeAgentExecutorWithOptions([sendEmailAction], llm, {
  agentType: "openai-functions",
  verbose: true,
});
const agentRunResult = await agent.invoke({
  input: `Send an email to ${recipientEmail} and say that I will be late for the meeting.`,
});
console.log(agentRunResult);
```

#### API Reference:

* [ConneryService](https://v02.api.js.langchain.com/classes/langchain_community_tools_connery.ConneryService.html) from `@langchain/community/tools/connery`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [initializeAgentExecutorWithOptions](https://v02.api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents`

note Connery Action is a structured tool, so you can only use it in agents that support structured tools.
https://js.langchain.com/v0.2/docs/integrations/tools/discord
Discord Tool
============

The
Discord Tool gives your agent the ability to search, read, and write messages in Discord channels. It is useful when you need to interact with a Discord channel.

Setup
-----

To use the Discord Tool, you need to install the following official peer dependency:

* npm: `npm install discord.js`
* Yarn: `yarn add discord.js`
* pnpm: `pnpm add discord.js`

Usage, standalone
-----------------

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`

```typescript
import {
  DiscordGetMessagesTool,
  DiscordChannelSearchTool,
  DiscordSendMessagesTool,
  DiscordGetGuildsTool,
  DiscordGetTextChannelsTool,
} from "@langchain/community/tools/discord";

// Get messages from a channel given a channel ID.
const getMessageTool = new DiscordGetMessagesTool();
const messageResults = await getMessageTool.invoke("1153400523718938780");
console.log(messageResults);

// Get guilds/servers.
const getGuildsTool = new DiscordGetGuildsTool();
const guildResults = await getGuildsTool.invoke("");
console.log(guildResults);

// Search results in a given channel (case-insensitive).
const searchTool = new DiscordChannelSearchTool();
const searchResults = await searchTool.invoke("Test");
console.log(searchResults);

// Get all text channels of a server.
const getChannelsTool = new DiscordGetTextChannelsTool();
const channelResults = await getChannelsTool.invoke("1153400523718938775");
console.log(channelResults);

// Send a message.
const sendMessageTool = new DiscordSendMessagesTool();
const sendMessageResults = await sendMessageTool.invoke("test message");
console.log(sendMessageResults);
```

#### API Reference:

*
[DiscordGetMessagesTool](https://v02.api.js.langchain.com/classes/langchain_community_tools_discord.DiscordGetMessagesTool.html) from `@langchain/community/tools/discord`
* [DiscordChannelSearchTool](https://v02.api.js.langchain.com/classes/langchain_community_tools_discord.DiscordChannelSearchTool.html) from `@langchain/community/tools/discord`
* [DiscordSendMessagesTool](https://v02.api.js.langchain.com/classes/langchain_community_tools_discord.DiscordSendMessagesTool.html) from `@langchain/community/tools/discord`
* [DiscordGetGuildsTool](https://v02.api.js.langchain.com/classes/langchain_community_tools_discord.DiscordGetGuildsTool.html) from `@langchain/community/tools/discord`
* [DiscordGetTextChannelsTool](https://v02.api.js.langchain.com/classes/langchain_community_tools_discord.DiscordGetTextChannelsTool.html) from `@langchain/community/tools/discord`

Usage, in an Agent
------------------

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { DiscordSendMessagesTool } from "@langchain/community/tools/discord";
import { DadJokeAPI } from "@langchain/community/tools/dadjokeapi";

const model = new ChatOpenAI({
  temperature: 0,
});

const tools = [new DiscordSendMessagesTool(), new DadJokeAPI()];

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "zero-shot-react-description",
  verbose: true,
});

const res = await executor.invoke({
  input: `Tell a joke in the discord channel`,
});

console.log(res.output);
// "What's the best thing about elevator jokes? They work on so many levels."
```
#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [initializeAgentExecutorWithOptions](https://v02.api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents`
* [DiscordSendMessagesTool](https://v02.api.js.langchain.com/classes/langchain_community_tools_discord.DiscordSendMessagesTool.html) from `@langchain/community/tools/discord`
* [DadJokeAPI](https://v02.api.js.langchain.com/classes/langchain_community_tools_dadjokeapi.DadJokeAPI.html) from `@langchain/community/tools/dadjokeapi`
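The channel and server IDs passed to the Discord tools above are plain numeric strings (Discord "snowflake" IDs, typically 17-19 digits). As a hypothetical sanity check, not part of the library, you could validate such an ID before handing it to a tool like `DiscordGetMessagesTool`:

```typescript
// Hypothetical helper: check that an input looks like a Discord snowflake ID
// (a 17-19 digit numeric string) before invoking a Discord tool with it.
function isValidSnowflake(id: string): boolean {
  return /^\d{17,19}$/.test(id);
}

console.log(isValidSnowflake("1153400523718938780")); // true
console.log(isValidSnowflake("not-an-id")); // false
```

Catching a malformed ID up front gives a clearer error than letting the tool fail inside a Discord API call.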
https://js.langchain.com/v0.2/docs/integrations/tools/duckduckgo_search
DuckDuckGoSearch
================

DuckDuckGoSearch offers a privacy-focused search API designed for LLM agents. It provides seamless integration with a wide range of data sources, prioritizing user privacy and relevant search results.

Setup
-----

Install the `@langchain/community` package, along with the `duck-duck-scrape` dependency:

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install @langchain/community`
* Yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`

* npm: `npm install duck-duck-scrape`
* Yarn: `yarn add duck-duck-scrape`
* pnpm: `pnpm add duck-duck-scrape`

Usage
-----

You can call `.invoke` on `DuckDuckGoSearch` to search for a query:

```typescript
import { DuckDuckGoSearch } from "@langchain/community/tools/duckduckgo_search";

// Instantiate the DuckDuckGoSearch tool.
const tool = new DuckDuckGoSearch({ maxResults: 1 });

// Get the results of a query by calling .invoke on the tool.
const result = await tool.invoke(
  "What is Anthropic's estimated revenue for 2024?"
);

console.log(result);

/*
[{
  "title": "Anthropic forecasts more than $850 mln in annualized revenue rate by ...",
  "link": "https://www.reuters.com/technology/anthropic-forecasts-more-than-850-mln-annualized-revenue-rate-by-2024-end-report-2023-12-26/",
  "snippet": "Dec 26 (Reuters) - Artificial intelligence startup <b>Anthropic</b> has projected it will generate more than $850 million in annualized <b>revenue</b> by the end of <b>2024</b>, the Information reported on Tuesday ..."
}]
*/
```

#### API Reference:

* [DuckDuckGoSearch](https://v02.api.js.langchain.com/classes/langchain_community_tools_duckduckgo_search.DuckDuckGoSearch.html) from `@langchain/community/tools/duckduckgo_search`

tip See the LangSmith trace
[here](https://smith.langchain.com/public/c352faaf-e617-4779-a943-96f963dc19a5/r)

### With an agent

```typescript
import { DuckDuckGoSearch } from "@langchain/community/tools/duckduckgo_search";
import { ChatOpenAI } from "@langchain/openai";
import type { ChatPromptTemplate } from "@langchain/core/prompts";
import { pull } from "langchain/hub";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";

// Define the tools the agent will have access to.
const tools = [new DuckDuckGoSearch({ maxResults: 1 })];

// Get the prompt to use - you can modify this!
// If you want to see the prompt in full, you can find it at:
// https://smith.langchain.com/hub/hwchase17/openai-functions-agent
const prompt = await pull<ChatPromptTemplate>(
  "hwchase17/openai-functions-agent"
);

const llm = new ChatOpenAI({
  model: "gpt-4-turbo-preview",
  temperature: 0,
});

const agent = await createOpenAIFunctionsAgent({
  llm,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
});

const result = await agentExecutor.invoke({
  input: "What is Anthropic's estimated revenue for 2024?",
});

console.log(result);

/*
{
  input: "What is Anthropic's estimated revenue for 2024?",
  output: 'Anthropic has projected that it will generate more than $850 million in annualized revenue by the end of 2024. For more details, you can refer to the [Reuters article](https://www.reuters.com/technology/anthropic-forecasts-more-than-850-mln-annualized-revenue-rate-by-2024-end-report-2023-12-26/).'
}
*/
```

#### API Reference:

* [DuckDuckGoSearch](https://v02.api.js.langchain.com/classes/langchain_community_tools_duckduckgo_search.DuckDuckGoSearch.html) from `@langchain/community/tools/duckduckgo_search`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [pull](https://v02.api.js.langchain.com/functions/langchain_hub.pull.html) from `langchain/hub`
* [AgentExecutor](https://v02.api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createOpenAIFunctionsAgent](https://v02.api.js.langchain.com/functions/langchain_agents.createOpenAIFunctionsAgent.html) from `langchain/agents`

tip See the LangSmith trace for the Agent example [here](https://smith.langchain.com/public/48f84a32-4fb5-4863-a8cd-324abebfce91/r)
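The standalone example earlier on this page shows that `DuckDuckGoSearch` resolves to a JSON string of `{title, link, snippet}` objects. A small sketch of turning that string into typed objects; the `raw` value here is a stand-in for a live search result, not output from the tool:

```typescript
// Sketch: parsing the JSON string that DuckDuckGoSearch.invoke resolves to,
// assuming the [{title, link, snippet}, ...] shape shown in the example above.
type DuckDuckGoResult = { title: string; link: string; snippet: string };

// Stand-in for `await tool.invoke(...)`.
const raw =
  '[{"title":"Example result","link":"https://example.com","snippet":"Example snippet"}]';

const results: DuckDuckGoResult[] = JSON.parse(raw);
for (const r of results) {
  console.log(`${r.title} -> ${r.link}`);
}
```

Parsing into a typed array like this makes it easy to feed only the fields you need (e.g. `link`) into a downstream chain.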
https://js.langchain.com/v0.2/docs/integrations/tools/exa_search
Exa Search
==========

Exa
(formerly Metaphor Search) is a search engine fully designed for use by LLMs. Search for documents on the internet using natural language queries, then retrieve cleaned HTML content from desired documents.

Unlike keyword-based search (Google), Exa's neural search capabilities allow it to semantically understand queries and return relevant documents. For example, we could search `"fascinating article about cats"` and compare the search results from Google and Exa. Google gives us SEO-optimized listicles based on the keyword "fascinating"; Exa just works.

This notebook goes over how to use Exa Search with LangChain.

First, get an Exa API key and add it as an environment variable. Get 1000 free searches/month by [signing up here](https://dashboard.exa.ai/login).

Usage
-----

First, install the LangChain integration package for Exa:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/exa @langchain/openai langchain
# or
yarn add @langchain/exa @langchain/openai langchain
# or
pnpm add @langchain/exa @langchain/openai langchain
```

You'll need to set your API key as an environment variable. The `Exa` class defaults to `EXASEARCH_API_KEY` when searching for your API key.

```typescript
import { ExaSearchResults } from "@langchain/exa";
import { ChatOpenAI } from "@langchain/openai";
import type { ChatPromptTemplate } from "@langchain/core/prompts";
import Exa from "exa-js";
import { pull } from "langchain/hub";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";

// Define the tools the agent will have access to.
const tools = [
  new ExaSearchResults({
    // @ts-expect-error Some TS configs will cause this to give a TypeScript error, even though it works.
    client: new Exa(process.env.EXASEARCH_API_KEY),
  }),
];

// Get the prompt to use - you can modify this!
// You can view the full prompt at:
// https://smith.langchain.com/hub/hwchase17/openai-functions-agent
const prompt = await pull<ChatPromptTemplate>(
  "hwchase17/openai-functions-agent"
);

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0,
});

const agent = await createOpenAIFunctionsAgent({
  llm,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
});

const result = await agentExecutor.invoke({
  input: "what is the weather in wailea?",
});

console.log(result);

/*
{
  input: 'what is the weather in wailea?',
  output: 'I found a weather forecast for Wailea-Makena on Windfinder.com. You can check the forecast [here](https://www.windfinder.com/forecast/wailea-makena).'
}
*/
```

#### API Reference:

* [ExaSearchResults](https://v02.api.js.langchain.com/classes/langchain_exa.ExaSearchResults.html) from `@langchain/exa`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [pull](https://v02.api.js.langchain.com/functions/langchain_hub.pull.html) from `langchain/hub`
* [AgentExecutor](https://v02.api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createOpenAIFunctionsAgent](https://v02.api.js.langchain.com/functions/langchain_agents.createOpenAIFunctionsAgent.html) from `langchain/agents`

tip

You can see a LangSmith trace for this example [here](https://smith.langchain.com/public/775ea9a8-d54c-405c-9126-a012405d9099/r).
Using the Exa SDK as LangChain Agent Tools
------------------------------------------

We can create LangChain tools which use the [`ExaRetriever`](/v0.2/docs/integrations/retrievers/exa) and [`createRetrieverTool`](https://v02.api.js.langchain.com/functions/langchain_tools_retriever.createRetrieverTool.html). Using these tools, we can construct a simple search agent that can answer questions about any topic.

```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import Exa from "exa-js";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";
import { createRetrieverTool } from "langchain/tools/retriever";
import { ExaRetriever } from "@langchain/exa";

// @ts-expect-error Some TS configs will cause this to give a TypeScript error, even though it works.
const client: Exa.default = new Exa(process.env.EXASEARCH_API_KEY);

const exaRetriever = new ExaRetriever({
  client,
  searchArgs: {
    numResults: 2,
  },
});

// Convert the ExaRetriever into a tool
const searchTool = createRetrieverTool(exaRetriever, {
  name: "search",
  description: "Get the contents of a webpage given a string search query.",
});

const tools = [searchTool];

const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a web researcher who answers user questions by looking up information on the internet and retrieving contents of helpful documents. Cite your sources.`,
  ],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const agentExecutor = new AgentExecutor({
  agent: await createOpenAIFunctionsAgent({
    llm,
    tools,
    prompt,
  }),
  tools,
});

console.log(
  await agentExecutor.invoke({
    input: "Summarize for me a fascinating article about cats.",
  })
);
```

#### API Reference:

* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [MessagesPlaceholder](https://v02.api.js.langchain.com/classes/langchain_core_prompts.MessagesPlaceholder.html) from `@langchain/core/prompts`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [AgentExecutor](https://v02.api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createOpenAIFunctionsAgent](https://v02.api.js.langchain.com/functions/langchain_agents.createOpenAIFunctionsAgent.html) from `langchain/agents`
* [createRetrieverTool](https://v02.api.js.langchain.com/functions/langchain_tools_retriever.createRetrieverTool.html) from `langchain/tools/retriever`
* [ExaRetriever](https://v02.api.js.langchain.com/classes/langchain_exa.ExaRetriever.html) from `@langchain/exa`

```
{
  input: 'Summarize for me a fascinating article about cats.',
  output: 'The article discusses the research of biologist Jaroslav Flegr, who has been investigating the effects of a single-celled parasite called Toxoplasma gondii (T. gondii or Toxo), which is excreted by cats in their feces. Flegr began to suspect in the early 1990s that this parasite was subtly manipulating his personality, causing him to behave in strange, often self-destructive ways. He reasoned that if it was affecting him, it was probably doing the same to others.

T. gondii is the microbe that causes toxoplasmosis, a disease that can be transmitted from a pregnant woman to her fetus, potentially resulting in severe brain damage or death. It's also a major threat to people with weakened immunity. However, healthy children and adults usually experience nothing worse than brief flu-like symptoms before quickly fighting off the protozoan, which then lies dormant inside brain cells.

Flegr's research is unconventional and suggests that these tiny organisms carried by house cats could be creeping into our brains, causing everything from car wrecks to schizophrenia.

(Source: [The Atlantic](https://www.theatlantic.com/magazine/archive/2012/03/how-your-cat-is-making-you-crazy/308873/))'
}
```

tip

You can see a LangSmith trace for this example [here](https://smith.langchain.com/public/d123ba5f-8535-4669-9e43-ac7ab3c6735e/r).
https://js.langchain.com/v0.2/docs/integrations/tools/gmail
Gmail Tool
==========

The Gmail
Tool allows your agent to create and view messages from a linked email account.

Setup
-----

You will need to get an API key from [Google here](https://developers.google.com/gmail/api/guides) and [enable the new Gmail API](https://console.cloud.google.com/apis/library/gmail.googleapis.com). Then, set the `GMAIL_CLIENT_EMAIL` environment variable, along with either `GMAIL_PRIVATE_KEY` or `GMAIL_KEYFILE`.

To use the Gmail Tool you need to install the following official peer dependency:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai googleapis @langchain/community
# or
yarn add @langchain/openai googleapis @langchain/community
# or
pnpm add @langchain/openai googleapis @langchain/community
```

Usage
-----

```typescript
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { OpenAI } from "@langchain/openai";
import {
  GmailCreateDraft,
  GmailGetMessage,
  GmailGetThread,
  GmailSearch,
  GmailSendMessage,
} from "@langchain/community/tools/gmail";
import { StructuredTool } from "@langchain/core/tools";

export async function run() {
  const model = new OpenAI({
    temperature: 0,
    apiKey: process.env.OPENAI_API_KEY,
  });

  // These are the default parameters for the Gmail tools
  // const gmailParams = {
  //   credentials: {
  //     clientEmail: process.env.GMAIL_CLIENT_EMAIL,
  //     privateKey: process.env.GMAIL_PRIVATE_KEY,
  //   },
  //   scopes: ["https://mail.google.com/"],
  // };
  // For custom parameters, uncomment the code above, replace the
  // values with your own, and pass it to the tools below.

  const tools: StructuredTool[] = [
    new GmailCreateDraft(),
    new GmailGetMessage(),
    new GmailGetThread(),
    new GmailSearch(),
    new GmailSendMessage(),
  ];

  const gmailAgent = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "structured-chat-zero-shot-react-description",
    verbose: true,
  });

  const createInput = `Create a gmail draft for me to edit of a letter from the perspective of a sentient parrot who is looking to collaborate on some research with her estranged friend, a cat. Under no circumstances may you send the message, however.`;

  const createResult = await gmailAgent.invoke({ input: createInput });
  // Create Result {
  //   output: 'I have created a draft email for you to edit. The draft Id is r5681294731961864018.'
  // }
  console.log("Create Result", createResult);

  const viewInput = `Could you search in my drafts for the latest email?`;

  const viewResult = await gmailAgent.invoke({ input: viewInput });
  // View Result {
  //   output: "The latest email in your drafts is from hopefulparrot@gmail.com with the subject
  //   'Collaboration Opportunity'. The body of the email reads: 'Dear [Friend], I hope this letter
  //   finds you well. I am writing to you in the hopes of rekindling our friendship and to discuss
  //   the possibility of collaborating on some research together. I know that we have had our
  //   differences in the past, but I believe that we can put them aside and work together for the
  //   greater good. I look forward to hearing from you. Sincerely, [Parrot]'"
  // }
  console.log("View Result", viewResult);
}
```

#### API Reference:

* [initializeAgentExecutorWithOptions](https://v02.api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents`
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [GmailCreateDraft](https://v02.api.js.langchain.com/classes/langchain_community_tools_gmail.GmailCreateDraft.html) from `@langchain/community/tools/gmail`
* [GmailGetMessage](https://v02.api.js.langchain.com/classes/langchain_community_tools_gmail.GmailGetMessage.html) from `@langchain/community/tools/gmail`
* [GmailGetThread](https://v02.api.js.langchain.com/classes/langchain_community_tools_gmail.GmailGetThread.html) from `@langchain/community/tools/gmail`
* [GmailSearch](https://v02.api.js.langchain.com/classes/langchain_community_tools_gmail.GmailSearch.html) from `@langchain/community/tools/gmail`
* [GmailSendMessage](https://v02.api.js.langchain.com/classes/langchain_community_tools_gmail.GmailSendMessage.html) from `@langchain/community/tools/gmail`
* [StructuredTool](https://v02.api.js.langchain.com/classes/langchain_core_tools.StructuredTool.html) from `@langchain/core/tools`
https://js.langchain.com/v0.2/docs/integrations/tools/google_calendar
Google Calendar Tool
====================

The Google Calendar Tools allow your agent to create and view Google Calendar events from a linked calendar.

Setup
-----

To use the Google Calendar Tools you need to install the following official peer dependency:

```bash
npm install googleapis
# or
yarn add googleapis
# or
pnpm add googleapis
```

Usage
-----

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

```typescript
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { OpenAI } from "@langchain/openai";
import { Calculator } from "@langchain/community/tools/calculator";
import {
  GoogleCalendarCreateTool,
  GoogleCalendarViewTool,
} from "@langchain/community/tools/google_calendar";

export async function run() {
  const model = new OpenAI({
    temperature: 0,
    apiKey: process.env.OPENAI_API_KEY,
  });

  const googleCalendarParams = {
    credentials: {
      clientEmail: process.env.GOOGLE_CALENDAR_CLIENT_EMAIL,
      privateKey: process.env.GOOGLE_CALENDAR_PRIVATE_KEY,
      calendarId: process.env.GOOGLE_CALENDAR_CALENDAR_ID,
    },
    scopes: [
      "https://www.googleapis.com/auth/calendar",
      "https://www.googleapis.com/auth/calendar.events",
    ],
    model,
  };

  const tools = [
    new Calculator(),
    new GoogleCalendarCreateTool(googleCalendarParams),
    new GoogleCalendarViewTool(googleCalendarParams),
  ];

  const calendarAgent = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "zero-shot-react-description",
    verbose: true,
  });

  const createInput = `Create a meeting with John Doe next Friday at 4pm - adding to the agenda of it the result of 99 + 99`;

  const createResult = await calendarAgent.invoke({ input: createInput });
  // Create Result {
  //   output: 'A meeting with John Doe on 29th September at 4pm has been created and the result
  //   of 99 + 99 has been added to the agenda.'
  // }
  console.log("Create Result", createResult);

  const viewInput = `What meetings do I have this week?`;

  const viewResult = await calendarAgent.invoke({ input: viewInput });
  // View Result {
  //   output: "You have no meetings this week between 8am and 8pm."
  // }
  console.log("View Result", viewResult);
}
```

#### API Reference:

* [initializeAgentExecutorWithOptions](https://v02.api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents`
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [Calculator](https://v02.api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator`
* [GoogleCalendarCreateTool](https://v02.api.js.langchain.com/classes/langchain_community_tools_google_calendar.GoogleCalendarCreateTool.html) from `@langchain/community/tools/google_calendar`
* [GoogleCalendarViewTool](https://v02.api.js.langchain.com/classes/langchain_community_tools_google_calendar.GoogleCalendarViewTool.html) from `@langchain/community/tools/google_calendar`
https://js.langchain.com/v0.2/docs/integrations/tools/lambda_agent
Agent with AWS Lambda Integration
=================================

Full docs here: [https://docs.aws.amazon.com/lambda/index.html](https://docs.aws.amazon.com/lambda/index.html)

**AWS Lambda** is a serverless computing service provided by Amazon Web Services (AWS), designed to allow developers to build and run applications and services without provisioning or managing servers. This serverless architecture lets you focus on writing and deploying code, while AWS automatically handles scaling, patching, and managing the infrastructure required to run your applications.

By including an AWSLambda in the list of tools provided to an Agent, you can grant your Agent the ability to invoke code running in your AWS Cloud for whatever purposes you need. When an Agent uses the AWSLambda tool, it will provide an argument of type `string`, which will in turn be passed into the Lambda function via the `event` parameter.

This quick start will demonstrate how an Agent could use a Lambda function to send an email via [Amazon Simple Email Service](https://aws.amazon.com/ses/). The Lambda code that sends the email is not provided, but if you'd like to learn how this could be done, see [here](https://repost.aws/knowledge-center/lambda-send-email-ses). Keep in mind this is an intentionally simple example; Lambda can be used to execute code for a nearly infinite number of other purposes (including executing more Langchains)!

### Note about credentials:[​](#note-about-credentials "Direct link to Note about credentials:")

* If you have not run [`aws configure`](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) via the AWS CLI, the `region`, `accessKeyId`, and `secretAccessKey` must be provided to the AWSLambda constructor.
* The IAM role corresponding to those credentials must have permission to invoke the Lambda function.

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

```typescript
import { OpenAI } from "@langchain/openai";
import { SerpAPI } from "langchain/tools";
import { AWSLambda } from "langchain/tools/aws_lambda";
import { initializeAgentExecutorWithOptions } from "langchain/agents";

const model = new OpenAI({ temperature: 0 });
const emailSenderTool = new AWSLambda({
  name: "email-sender",
  // tell the Agent precisely what the tool does
  description:
    "Sends an email with the specified content to testing123@gmail.com",
  region: "us-east-1", // optional: AWS region in which the function is deployed
  accessKeyId: "abc123", // optional: access key id for an IAM user with invoke permissions
  secretAccessKey: "xyz456", // optional: secret access key for that IAM user
  functionName: "SendEmailViaSES", // the function name as seen in the AWS Console
});

const tools = [emailSenderTool, new SerpAPI("api_key_goes_here")];

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "zero-shot-react-description",
});

const input = `Find out the capital of Croatia. Once you have it, email the answer to testing123@gmail.com.`;
const result = await executor.invoke({ input });
console.log(result);
```
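The Lambda side of this example is omitted, as noted above. As a rough illustration of the contract, the Agent's string argument arrives directly as the `event` parameter of the handler. The sketch below is hypothetical — the real function would build and send an SES email, as described in the linked AWS guide:

```typescript
// Hypothetical handler for a function like "SendEmailViaSES" above.
// The Agent's string argument arrives directly as `event`;
// the actual SES call is omitted (see the linked AWS guide).
const handler = async (event: string) => {
  const emailBody = event; // e.g. "The capital of Croatia is Zagreb."
  // ... build and send an SES email with `emailBody` here ...
  return {
    statusCode: 200,
    body: JSON.stringify({ sent: true, preview: emailBody.slice(0, 50) }),
  };
};
```

In a real deployment this would be exported (`export const handler = ...`) so Lambda can locate it by name.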
Community

* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)

GitHub

* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)

More

* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)

Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.2/docs/integrations/tools/google_places
Google Places Tool
==================

The Google Places Tool allows your agent to utilize the Google Places API in order to find addresses, phone numbers, websites, etc. from text about a location listed on Google Places.

Setup[​](#setup "Direct link to Setup")
---------------------------------------

You will need to get an API key from [Google here](https://developers.google.com/maps/documentation/places/web-service/overview) and [enable the new Places API](https://console.cloud.google.com/apis/library/places.googleapis.com).

Then, set your API key as `process.env.GOOGLE_PLACES_API_KEY` or pass it in as an `apiKey` constructor argument.

Usage[​](#usage "Direct link to Usage")
---------------------------------------

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```

```typescript
import { GooglePlacesAPI } from "@langchain/community/tools/google_places";
import { OpenAI } from "@langchain/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";

export async function run() {
  const model = new OpenAI({
    temperature: 0,
  });

  const tools = [new GooglePlacesAPI()];

  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "zero-shot-react-description",
    verbose: true,
  });

  const res = await executor.invoke({
    input: "Where is the University of Toronto - Scarborough?",
  });

  console.log(res.output);
}
```

#### API Reference:

* [GooglePlacesAPI](https://v02.api.js.langchain.com/classes/langchain_community_tools_google_places.GooglePlacesAPI.html) from `@langchain/community/tools/google_places`
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [initializeAgentExecutorWithOptions](https://v02.api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents`
https://js.langchain.com/v0.2/docs/integrations/tools/pyinterpreter
Python interpreter tool
=======================

danger

This tool executes code and can potentially perform destructive actions. Be careful that you trust any code passed to it!

LangChain offers an experimental tool for executing arbitrary Python code. This can be useful in combination with an LLM that can generate code to perform more powerful computations.

Usage[​](#usage "Direct link to Usage")
---------------------------------------

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

```typescript
import { OpenAI } from "@langchain/openai";
import { PythonInterpreterTool } from "langchain/experimental/tools/pyinterpreter";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = ChatPromptTemplate.fromTemplate(
  `Generate python code that does {input}. Do not generate anything else.`
);
const model = new OpenAI({});

const interpreter = await PythonInterpreterTool.initialize({
  indexURL: "../node_modules/pyodide",
});
// Note: In Deno, it may be easier to initialize the interpreter yourself:
// import pyodideModule from "npm:pyodide/pyodide.js";
// import { PythonInterpreterTool } from "npm:langchain/experimental/tools/pyinterpreter";
// const pyodide = await pyodideModule.loadPyodide();
// const pythonTool = new PythonInterpreterTool({ instance: pyodide });

const chain = prompt
  .pipe(model)
  .pipe(new StringOutputParser())
  .pipe(interpreter);

const result = await chain.invoke({
  input: `prints "Hello LangChain"`,
});
console.log(JSON.parse(result).stdout);

// To install python packages, use `addPackage`.
// This uses the loadPackage command, which works for packages
// built with pyodide.
await interpreter.addPackage("numpy");

// But for other packages, you will want to use micropip.
// See: https://pyodide.org/en/stable/usage/loading-packages.html
// for more information.
await interpreter.addPackage("micropip");
// The following is roughly equivalent to:
// pyodide.runPython(`import ${pkgname}; ${pkgname}`);
const micropip = interpreter.pyodideInstance.pyimport("micropip");
await micropip.install("numpy");
```

#### API Reference:

* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [PythonInterpreterTool](https://v02.api.js.langchain.com/classes/langchain_experimental_tools_pyinterpreter.PythonInterpreterTool.html) from `langchain/experimental/tools/pyinterpreter`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
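Note that the interpreter tool's output is a JSON string — the example above reads `JSON.parse(result).stdout`. A small sketch of consuming that output defensively, assuming a `{ stdout, stderr }` shape (inferred from that usage, not from documented API):

```typescript
// Assumed result shape, based on the `JSON.parse(result).stdout` usage above.
interface InterpreterResult {
  stdout: string;
  stderr: string;
}

// Hypothetical helper: surface Python errors instead of silently ignoring them.
function readStdout(raw: string): string {
  const parsed = JSON.parse(raw) as InterpreterResult;
  if (parsed.stderr) {
    throw new Error(`Python error: ${parsed.stderr}`);
  }
  return parsed.stdout;
}
```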
https://js.langchain.com/v0.2/docs/integrations/tools/searchapi
SearchApi tool
==============

The `SearchApi` tool connects your agents and chains to the internet. It is a wrapper around the SearchApi search API and is handy when you need to answer questions about current events.

Usage[​](#usage "Direct link to Usage")
---------------------------------------

Input should be a search query.

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { AgentFinish, AgentAction } from "@langchain/core/agents";
import { BaseMessageChunk } from "@langchain/core/messages";
import { SearchApi } from "@langchain/community/tools/searchapi";

const model = new ChatOpenAI({
  temperature: 0,
});

const tools = [
  new SearchApi(process.env.SEARCHAPI_API_KEY, {
    engine: "google_news",
  }),
];

const prefix = ChatPromptTemplate.fromMessages([
  [
    "ai",
    "Answer the following questions as best you can. In your final answer, use a bulleted list markdown format.",
  ],
  ["human", "{input}"],
]);

// Replace this with your actual output parser.
const customOutputParser = (
  input: BaseMessageChunk
): AgentAction | AgentFinish => ({
  log: "test",
  returnValues: {
    output: input,
  },
});

// Replace this placeholder agent with your actual implementation.
const agent = RunnableSequence.from([prefix, model, customOutputParser]);

const executor = AgentExecutor.fromAgentAndTools({
  agent,
  tools,
});

const res = await executor.invoke({
  input: "What's happening in Ukraine today?",
});
console.log(res);
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [AgentExecutor](https://v02.api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [AgentFinish](https://v02.api.js.langchain.com/types/langchain_core_agents.AgentFinish.html) from `@langchain/core/agents`
* [AgentAction](https://v02.api.js.langchain.com/types/langchain_core_agents.AgentAction.html) from `@langchain/core/agents`
* [BaseMessageChunk](https://v02.api.js.langchain.com/classes/langchain_core_messages.BaseMessageChunk.html) from `@langchain/core/messages`
* [SearchApi](https://v02.api.js.langchain.com/classes/langchain_community_tools_searchapi.SearchApi.html) from `@langchain/community/tools/searchapi`
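The `customOutputParser` above is only a placeholder; its job is to turn the model's output into either an `AgentAction` (call a tool) or an `AgentFinish` (stop and return a result). The sketch below reproduces those two shapes structurally for illustration — in real code, import the types from `@langchain/core/agents` rather than redefining them:

```typescript
// Structural stand-ins for the real types from "@langchain/core/agents",
// reproduced here only so the sketch is self-contained.
type AgentAction = { tool: string; toolInput: string; log: string };
type AgentFinish = { returnValues: Record<string, unknown>; log: string };

// A slightly more realistic parser than the placeholder above: treat the
// model's text as the final answer. (The tool-calling branch is left out.)
const parseFinalAnswer = (text: string): AgentAction | AgentFinish => ({
  log: text,
  returnValues: { output: text },
});
```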
https://js.langchain.com/v0.2/docs/integrations/tools/searxng
===================

The `SearxngSearch` tool connects your agents and chains to the internet. A wrapper around the SearxNG metasearch API, it is useful for performing web search queries and is particularly helpful for answering questions about current events.

Usage
-----

tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor } from "langchain/agents";
import { BaseMessageChunk } from "@langchain/core/messages";
import { AgentAction, AgentFinish } from "@langchain/core/agents";
import { RunnableSequence } from "@langchain/core/runnables";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { SearxngSearch } from "@langchain/community/tools/searxng_search";

const model = new ChatOpenAI({
  maxTokens: 1000,
  model: "gpt-4",
});

// `apiBase` is parsed automatically from the environment; set "SEARXNG_API_BASE" in your .env file.
const tools = [
  new SearxngSearch({
    params: {
      format: "json", // Do not change this; any format other than "json" will throw an error
      engines: "google",
    },
    // Custom headers to support RapidAPI authentication, or any instance that requires custom headers
    headers: {},
  }),
];

const prefix = ChatPromptTemplate.fromMessages([
  [
    "ai",
    "Answer the following questions as best you can. In your final answer, use a bulleted list markdown format.",
  ],
  ["human", "{input}"],
]);

// Replace this with your actual output parser.
const customOutputParser = (
  input: BaseMessageChunk
): AgentAction | AgentFinish => ({
  log: "test",
  returnValues: {
    output: input,
  },
});

// Replace this placeholder agent with your actual implementation.
const agent = RunnableSequence.from([prefix, model, customOutputParser]);

const executor = AgentExecutor.fromAgentAndTools({
  agent,
  tools,
});

console.log("Loaded agent.");

const input = `What is Langchain? Describe in 50 words`;
console.log(`Executing with input "${input}"...`);

const result = await executor.invoke({ input });
console.log(result);

/*
 * Langchain is a framework for developing applications powered by language models, such as
 * chatbots, Generative Question-Answering, summarization, and more. It provides a standard
 * interface, integrations with other tools, and end-to-end chains for common applications.
 * Langchain enables data-aware and powerful applications.
 */
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [AgentExecutor](https://v02.api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [BaseMessageChunk](https://v02.api.js.langchain.com/classes/langchain_core_messages.BaseMessageChunk.html) from `@langchain/core/messages`
* [AgentAction](https://v02.api.js.langchain.com/types/langchain_core_agents.AgentAction.html) from `@langchain/core/agents`
* [AgentFinish](https://v02.api.js.langchain.com/types/langchain_core_agents.AgentFinish.html) from `@langchain/core/agents`
* [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [SearxngSearch](https://v02.api.js.langchain.com/classes/langchain_community_tools_searxng_search.SearxngSearch.html) from `@langchain/community/tools/searxng_search`

---

Community: [Discord](https://discord.gg/cU2adEyC7w) · [Twitter](https://twitter.com/LangChainAI)
GitHub: [Python](https://github.com/langchain-ai/langchain) · [JS/TS](https://github.com/langchain-ai/langchainjs)
More: [Homepage](https://langchain.com) · [Blog](https://blog.langchain.dev)

Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.2/docs/integrations/tools/stackexchange
StackExchange Tool
==================

The StackExchange tool connects your agents and chains to StackExchange's API.

Usage
-----

```typescript
import { StackExchangeAPI } from "@langchain/community/tools/stackexchange";

// Get results from the StackExchange API
const stackExchangeTool = new StackExchangeAPI();
const result = await stackExchangeTool.invoke("zsh: command not found: python");
console.log(result);

// Get results from the StackExchange API with a title query
const stackExchangeTitleTool = new StackExchangeAPI({
  queryType: "title",
});
const titleResult = await stackExchangeTitleTool.invoke(
  "zsh: command not found: python"
);
console.log(titleResult);

// Get results from the StackExchange API with a bad query
const stackExchangeBadTool = new StackExchangeAPI();
const badResult = await stackExchangeBadTool.invoke(
  "sjefbsmnazdkhbazkbdoaencopebfoubaef"
);
console.log(badResult);
```

#### API Reference:

* [StackExchangeAPI](https://v02.api.js.langchain.com/classes/langchain_community_tools_stackexchange.StackExchangeAPI.html) from `@langchain/community/tools/stackexchange`
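Wrappers like this typically query the public Stack Exchange API's `/search/excerpts` endpoint, where a default query matches anywhere in a post and a title query matches titles only. The sketch below builds such a request URL to illustrate the difference between the two query types; the helper name, defaults, and exact URL are illustrative assumptions, not part of the LangChain API:

```typescript
// Illustrative only: builds a Stack Exchange /search/excerpts request URL.
// The endpoint and parameters follow the public Stack Exchange API v2.3;
// the actual request made by `StackExchangeAPI` may differ.
function buildStackExchangeUrl(
  query: string,
  options: { queryType?: "all" | "title"; site?: string } = {}
): string {
  const { queryType = "all", site = "stackoverflow" } = options;
  const params = new URLSearchParams({
    order: "desc",
    sort: "relevance",
    site,
  });
  // An "all" query matches anywhere in the post; a "title" query matches titles only.
  params.set(queryType === "title" ? "intitle" : "q", query);
  return `https://api.stackexchange.com/2.3/search/excerpts?${params.toString()}`;
}

console.log(buildStackExchangeUrl("zsh: command not found: python"));
console.log(
  buildStackExchangeUrl("zsh: command not found: python", { queryType: "title" })
);
```

`URLSearchParams` handles the percent-encoding of the query string, so special characters in error messages are safe to pass through as-is.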
https://js.langchain.com/v0.2/docs/integrations/tools/webbrowser
Web Browser Tool
================

The WebBrowser tool gives your agent the ability to visit a website and extract information. It is described to the agent as useful for when you need to find something on, or summarize, a webpage. The input should be a comma-separated list of `"valid URL including protocol","what you want to find on the page or empty string for a summary"`.

It exposes two modes of operation:

* when called by the agent with only a URL, it produces a summary of the website contents
* when called by the agent with a URL and a description of what to find, it will instead use an in-memory vector store to find the most relevant snippets and summarize those

Setup
-----

To use the WebBrowser tool you need to install the dependencies:

```bash
npm install cheerio axios
# or
yarn add cheerio axios
# or
pnpm add cheerio axios
```

Usage, standalone
-----------------

tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

```typescript
import { WebBrowser } from "langchain/tools/webbrowser";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";

export async function run() {
  // This will not work with the Azure OpenAI API yet, because Azure OpenAI does not
  // support embedding with multiple inputs ("Too many inputs. The max number of inputs
  // is 1. We hope to increase the number of inputs per request soon. Please contact us
  // through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926
  // for further questions."), so we fail fast when Azure OpenAI is used.
  if (process.env.AZURE_OPENAI_API_KEY) {
    throw new Error(
      "Azure OpenAI API does not support embedding with multiple inputs yet"
    );
  }

  const model = new ChatOpenAI({ temperature: 0 });
  const embeddings = new OpenAIEmbeddings(
    process.env.AZURE_OPENAI_API_KEY
      ? { azureOpenAIApiDeploymentName: "Embeddings2" }
      : {}
  );

  const browser = new WebBrowser({ model, embeddings });

  const result = await browser.invoke(
    `"https://www.themarginalian.org/2015/04/09/find-your-bliss-joseph-campbell-power-of-myth","who is joseph campbell"`
  );

  console.log(result);
  /*
  Joseph Campbell was a mythologist and writer who discussed spirituality, psychological
  archetypes, cultural myths, and the mythology of self. He sat down with Bill Moyers for
  a lengthy conversation at George Lucas's Skywalker Ranch in California, which continued
  the following year at the American Museum of Natural History in New York. The resulting
  24 hours of raw footage were edited down to six one-hour episodes and broadcast on PBS
  in 1988, shortly after Campbell's death, in what became one of the most popular in the
  history of public television.

  Relevant Links:
  - [The Holstee Manifesto](http://holstee.com/manifesto-bp)
  - [The Silent Music of the Mind: Remembering Oliver Sacks](https://www.themarginalian.org/2015/08/31/remembering-oliver-sacks)
  - [Joseph Campbell series](http://billmoyers.com/spotlight/download-joseph-campbell-and-the-power-of-myth-audio/)
  - [Bill Moyers](https://www.themarginalian.org/tag/bill-moyers/)
  - [books](https://www.themarginalian.org/tag/books/)
  */
}
```

#### API Reference:

* [WebBrowser](https://v02.api.js.langchain.com/classes/langchain_tools_webbrowser.WebBrowser.html) from `langchain/tools/webbrowser`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`

Usage, in an Agent
------------------

```typescript
import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { Calculator } from "@langchain/community/tools/calculator";
import { WebBrowser } from "langchain/tools/webbrowser";
import { SerpAPI } from "@langchain/community/tools/serpapi";

export const run = async () => {
  const model = new OpenAI({ temperature: 0 });
  const embeddings = new OpenAIEmbeddings();
  const tools = [
    new SerpAPI(process.env.SERPAPI_API_KEY, {
      location: "Austin,Texas,United States",
      hl: "en",
      gl: "us",
    }),
    new Calculator(),
    new WebBrowser({ model, embeddings }),
  ];

  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "zero-shot-react-description",
    verbose: true,
  });
  console.log("Loaded agent.");

  const input = `What is the word of the day on merriam webster. What is the top result on google for that word`;
  console.log(`Executing with input "${input}"...`);

  const result = await executor.invoke({ input });
  /*
  Entering new agent_executor chain...
  I need to find the word of the day on Merriam Webster and then search for it on Google
  Action: web-browser
  Action Input: "https://www.merriam-webster.com/word-of-the-day", ""

  Summary: Merriam-Webster is a website that provides users with a variety of resources,
  including a dictionary, thesaurus, word finder, word of the day, games and quizzes, and
  more. The website also allows users to log in and save words, view recents, and access
  their account settings. The Word of the Day for April 14, 2023 is "lackadaisical", which
  means lacking in life, spirit, or zest. The website also provides quizzes and games to
  help users build their vocabulary.

  Relevant Links:
  - [Test Your Vocabulary](https://www.merriam-webster.com/games)
  - [Thesaurus](https://www.merriam-webster.com/thesaurus)
  - [Word Finder](https://www.merriam-webster.com/wordfinder)
  - [Word of the Day](https://www.merriam-webster.com/word-of-the-day)
  - [Shop](https://shop.merriam-webster.com/?utm_source=mwsite&utm_medium=nav&utm_content=

  I now need to search for the word of the day on Google
  Action: search
  Action Input: "lackadaisical"
  lackadaisical implies a carefree indifference marked by half-hearted efforts.
  lackadaisical college seniors pretending to study. listless suggests a lack of ...

  Finished chain.
  */

  console.log(`Got output ${JSON.stringify(result, null, 2)}`);
  /*
  Got output {
    "output": "The word of the day on Merriam Webster is \"lackadaisical\", which implies a carefree indifference marked by half-hearted efforts.",
    "intermediateSteps": [
      {
        "action": {
          "tool": "web-browser",
          "toolInput": "https://www.merriam-webster.com/word-of-the-day\", ",
          "log": " I need to find the word of the day on Merriam Webster and then search for it on Google\nAction: web-browser\nAction Input: \"https://www.merriam-webster.com/word-of-the-day\", \"\""
        },
        "observation": "\n\nSummary: Merriam-Webster is a website that provides users with a variety of resources, including a dictionary, thesaurus, word finder, word of the day, games and quizzes, and more. The website also allows users to log in and save words, view recents, and access their account settings. The Word of the Day for April 14, 2023 is \"lackadaisical\", which means lacking in life, spirit, or zest. The website also provides quizzes and games to help users build their vocabulary.\n\nRelevant Links: \n- [Test Your Vocabulary](https://www.merriam-webster.com/games)\n- [Thesaurus](https://www.merriam-webster.com/thesaurus)\n- [Word Finder](https://www.merriam-webster.com/wordfinder)\n- [Word of the Day](https://www.merriam-webster.com/word-of-the-day)\n- [Shop](https://shop.merriam-webster.com/?utm_source=mwsite&utm_medium=nav&utm_content="
      },
      {
        "action": {
          "tool": "search",
          "toolInput": "lackadaisical",
          "log": " I now need to search for the word of the day on Google\nAction: search\nAction Input: \"lackadaisical\""
        },
        "observation": "lackadaisical implies a carefree indifference marked by half-hearted efforts. lackadaisical college seniors pretending to study. listless suggests a lack of ..."
      }
    ]
  }
  */
};
```

#### API Reference:

* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [initializeAgentExecutorWithOptions](https://v02.api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents`
* [Calculator](https://v02.api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator`
* [WebBrowser](https://v02.api.js.langchain.com/classes/langchain_tools_webbrowser.WebBrowser.html) from `langchain/tools/webbrowser`
* [SerpAPI](https://v02.api.js.langchain.com/classes/langchain_community_tools_serpapi.SerpAPI.html) from `@langchain/community/tools/serpapi`
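The tool's input contract is a comma-separated pair of quoted strings: a URL with protocol, then a task (or an empty string for a summary). When calling the tool directly rather than letting an agent format the input, a small helper can make that contract explicit. The helper below is purely illustrative and not part of the LangChain API:

```typescript
// Illustrative helper: formats input for the WebBrowser tool, which expects
// `"url","task"` where an empty task requests a page summary.
// Not part of the LangChain API.
function buildWebBrowserInput(url: string, task = ""): string {
  if (!/^https?:\/\//.test(url)) {
    throw new Error("WebBrowser input requires a valid URL including protocol");
  }
  return `"${url}","${task}"`;
}

// Summarize a page:
console.log(buildWebBrowserInput("https://example.com"));
// Find something specific on a page:
console.log(buildWebBrowserInput("https://example.com", "who is the author"));
```

The protocol check mirrors the "valid URL including protocol" requirement stated above, failing early instead of letting the tool fetch an invalid address.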
https://js.langchain.com/v0.2/docs/integrations/tools/wikipedia
Wikipedia tool
==============

The `WikipediaQueryRun` tool connects your agents and chains to Wikipedia.

Usage
-----

```typescript
import { WikipediaQueryRun } from "@langchain/community/tools/wikipedia_query_run";

const tool = new WikipediaQueryRun({
  topKResults: 3,
  maxDocContentLength: 4000,
});

const res = await tool.invoke("Langchain");

console.log(res);
```

#### API Reference:

* [WikipediaQueryRun](https://v02.api.js.langchain.com/classes/langchain_community_tools_wikipedia_query_run.WikipediaQueryRun.html) from `@langchain/community/tools/wikipedia_query_run`
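The two options shown above bound how much text comes back: `topKResults` caps the number of Wikipedia pages considered, and `maxDocContentLength` clips the returned content to a character budget (useful for keeping results within a model's context window). A rough sketch of that clipping behavior, as an assumption about the options' effect rather than the tool's actual internals:

```typescript
// Illustrative sketch of bounding output with topKResults and maxDocContentLength.
// This mirrors the documented options; the tool's real implementation may differ.
function clipContent(
  pages: string[],
  topKResults: number,
  maxDocContentLength: number
): string {
  // Keep at most topKResults pages, then clip the combined text.
  const combined = pages.slice(0, topKResults).join("\n\n");
  return combined.slice(0, maxDocContentLength);
}

const pages = ["Page one text", "Page two text", "Page three text", "Page four text"];
console.log(clipContent(pages, 3, 4000));
```

With a small `maxDocContentLength`, a long article is truncated mid-text, so choose a budget large enough to keep whole summaries when possible.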
https://js.langchain.com/v0.2/docs/integrations/tools/wolframalpha
WolframAlpha Tool
=================

The WolframAlpha tool connects your agents and chains to WolframAlpha's state-of-the-art computational intelligence engine.

Setup
-----

You'll need to create an app from the [WolframAlpha portal](https://developer.wolframalpha.com/) and obtain an `appid`.

Usage
-----

```typescript
import { WolframAlphaTool } from "@langchain/community/tools/wolframalpha";

const tool = new WolframAlphaTool({
  appid: "YOUR_APP_ID",
});

const res = await tool.invoke("What is 2 * 2?");

console.log(res);
```

#### API Reference:

* [WolframAlphaTool](https://v02.api.js.langchain.com/classes/langchain_community_tools_wolframalpha.WolframAlphaTool.html) from `@langchain/community/tools/wolframalpha`

Community: [Discord](https://discord.gg/cU2adEyC7w) | [Twitter](https://twitter.com/LangChainAI)
GitHub: [Python](https://github.com/langchain-ai/langchain) | [JS/TS](https://github.com/langchain-ai/langchainjs)
More: [Homepage](https://langchain.com) | [Blog](https://blog.langchain.dev)

Copyright © 2024 LangChain, Inc.
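Hard-coding the `appid` as in the usage snippet above is fine for demos; in practice you would typically read it from the environment. A minimal sketch of that pattern, assuming a hypothetical `WOLFRAM_ALPHA_APPID` variable name (the integration does not document a default env var, so this name is our own convention):

```typescript
// Hypothetical helper, not part of the integration: resolve the WolframAlpha
// `appid` from an explicit argument first, then from an environment variable.
// The variable name WOLFRAM_ALPHA_APPID is an assumption, not a documented default.
function resolveAppId(explicit?: string): string {
  const appid = explicit ?? process.env.WOLFRAM_ALPHA_APPID;
  if (!appid) {
    throw new Error(
      "No WolframAlpha appid found; pass one explicitly or set WOLFRAM_ALPHA_APPID."
    );
  }
  return appid;
}
```

You could then construct the tool with `new WolframAlphaTool({ appid: resolveAppId() })`.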
https://js.langchain.com/v0.2/docs/integrations/retrievers/bedrock-knowledge-bases
Knowledge Bases for Amazon Bedrock
==================================

Knowledge Bases for Amazon Bedrock is fully managed support for an end-to-end RAG workflow provided by Amazon Web
Services (AWS). It provides the entire ingestion workflow: converting your documents into embeddings (vectors) and storing the embeddings in a specialized vector database. Knowledge Bases for Amazon Bedrock supports popular databases for vector storage, including the vector engine for Amazon OpenSearch Serverless, Pinecone, Redis Enterprise Cloud, Amazon Aurora (coming soon), and MongoDB (coming soon).

Setup
-----

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
# npm
npm i @aws-sdk/client-bedrock-agent-runtime @langchain/community

# Yarn
yarn add @aws-sdk/client-bedrock-agent-runtime @langchain/community

# pnpm
pnpm add @aws-sdk/client-bedrock-agent-runtime @langchain/community
```

Usage
-----

```typescript
import { AmazonKnowledgeBaseRetriever } from "@langchain/community/retrievers/amazon_knowledge_base";

const retriever = new AmazonKnowledgeBaseRetriever({
  topK: 10,
  knowledgeBaseId: "YOUR_KNOWLEDGE_BASE_ID",
  region: "us-east-2",
  clientOptions: {
    credentials: {
      accessKeyId: "YOUR_ACCESS_KEY_ID",
      secretAccessKey: "YOUR_SECRET_ACCESS_KEY",
    },
  },
});

const docs = await retriever.invoke("How are clouds formed?");

console.log(docs);
```

#### API Reference:

* [AmazonKnowledgeBaseRetriever](https://v02.api.js.langchain.com/classes/langchain_community_retrievers_amazon_knowledge_base.AmazonKnowledgeBaseRetriever.html) from `@langchain/community/retrievers/amazon_knowledge_base`
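Once the retriever returns documents, a typical next step in a RAG chain is to collapse them into a single context string for a prompt. A minimal sketch of that step (the `pageContent` field matches LangChain's standard `Document` shape; the numbering format is our own choice, not part of the integration):

```typescript
// Collapse retrieved documents into one context string for a prompt.
// The { pageContent, metadata } shape mirrors LangChain's Document type.
interface RetrievedDoc {
  pageContent: string;
  metadata?: Record<string, unknown>;
}

function formatDocsAsContext(docs: RetrievedDoc[]): string {
  return docs
    .map((doc, i) => `[${i + 1}] ${doc.pageContent.trim()}`)
    .join("\n\n");
}
```

The result can be interpolated into a prompt template alongside the user's question.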
https://js.langchain.com/v0.2/docs/integrations/tools/zapier_agent
Agent with Zapier NLA Integration
=================================

danger

This module has been deprecated and is no longer supported. The documentation below will not work in versions 0.2.0 or later.

Full docs here: [https://nla.zapier.com/start/](https://nla.zapier.com/start/)

**Zapier Natural Language Actions** gives you access to the 5k+ apps and 20k+ actions on Zapier's platform through a natural language API interface.

NLA supports apps like Gmail, Salesforce, Trello, Slack, Asana, HubSpot, Google Sheets, Microsoft Teams, and thousands more: [https://zapier.com/apps](https://zapier.com/apps)

Zapier NLA handles ALL the underlying API auth and translation from natural language --> underlying API call --> simplified output for LLMs. The key idea is that you, or your users, expose a set of actions via an OAuth-like setup window, which you can then query and execute via a REST API.

NLA offers both API Key and OAuth for signing NLA API requests:

* **Server-side (API Key)**: for quickly getting started, testing, and production scenarios where LangChain will only use actions exposed in the developer's Zapier account (and will use the developer's connected accounts on Zapier.com)
* **User-facing (OAuth)**: for production scenarios where you are deploying an end-user-facing application and LangChain needs access to the end-user's exposed actions and connected accounts on Zapier.com

Attach NLA credentials via either an environment variable (`ZAPIER_NLA_OAUTH_ACCESS_TOKEN` or `ZAPIER_NLA_API_KEY`) or via the params argument in the API reference for `ZapierNLAWrapper`. Review the [auth docs](https://nla.zapier.com/docs/authentication/) for more details.

The example below demonstrates how to use the Zapier integration as an agent:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm install @langchain/openai

# Yarn
yarn add @langchain/openai

# pnpm
pnpm add @langchain/openai
```

```typescript
import { OpenAI } from "@langchain/openai";
import { ZapierNLAWrapper } from "langchain/tools";
import {
  initializeAgentExecutorWithOptions,
  ZapierToolKit,
} from "langchain/agents";

const model = new OpenAI({ temperature: 0 });
const zapier = new ZapierNLAWrapper();
const toolkit = await ZapierToolKit.fromZapierNLAWrapper(zapier);

const executor = await initializeAgentExecutorWithOptions(
  toolkit.tools,
  model,
  {
    agentType: "zero-shot-react-description",
    verbose: true,
  }
);
console.log("Loaded agent.");

const input = `Summarize the last email I received regarding Silicon Valley Bank. Send the summary to the #test-zapier Slack channel.`;
console.log(`Executing with input "${input}"...`);

const result = await executor.invoke({ input });

console.log(`Got output ${result.output}`);
```
https://js.langchain.com/v0.2/docs/integrations/retrievers/zep-retriever
Zep Retriever
=============

> [Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.
With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant, while also reducing hallucinations, latency, and cost.

> Interested in Zep Cloud? See the [Zep Cloud Installation Guide](https://help.getzep.com/sdks) and the [Zep Cloud Retriever Example](https://help.getzep.com/langchain/examples/rag-message-history-example).

This example shows how to use the Zep Retriever in a retrieval chain to retrieve documents from the Zep memory store.

Setup
-----

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
# npm
npm i @getzep/zep-js @langchain/community

# Yarn
yarn add @getzep/zep-js @langchain/community

# pnpm
pnpm add @getzep/zep-js @langchain/community
```

Usage
-----

```typescript
import { ZepRetriever } from "@langchain/community/retrievers/zep";
import { ZepMemory } from "@langchain/community/memory/zep";
import { Memory as MemoryModel, Message } from "@getzep/zep-js";
import { randomUUID } from "crypto";

function sleep(ms: number) {
  // eslint-disable-next-line no-promise-executor-return
  return new Promise((resolve) => setTimeout(resolve, ms));
}

export const run = async () => {
  const zepConfig = {
    url: process.env.ZEP_URL || "http://localhost:8000",
    sessionId: `session_${randomUUID()}`,
  };
  console.log(`Zep Config: ${JSON.stringify(zepConfig)}`);

  const memory = new ZepMemory({
    baseURL: zepConfig.url,
    sessionId: zepConfig.sessionId,
  });

  // Generate chat messages about traveling to France
  const chatMessages = [
    {
      role: "AI",
      message: "Bonjour! How can I assist you with your travel plans today?",
    },
    { role: "User", message: "I'm planning a trip to France." },
    {
      role: "AI",
      message: "That sounds exciting! What cities are you planning to visit?",
    },
    { role: "User", message: "I'm thinking of visiting Paris and Nice." },
    {
      role: "AI",
      message: "Great choices! Are you interested in any specific activities?",
    },
    { role: "User", message: "I would love to visit some vineyards." },
    {
      role: "AI",
      message:
        "France has some of the best vineyards in the world. I can help you find some.",
    },
    { role: "User", message: "That would be great!" },
    { role: "AI", message: "Do you prefer red or white wine?" },
    { role: "User", message: "I prefer red wine." },
    {
      role: "AI",
      message:
        "Perfect! I'll find some vineyards that are known for their red wines.",
    },
    { role: "User", message: "Thank you, that would be very helpful." },
    {
      role: "AI",
      message:
        "You're welcome! I'll also look up some French wine etiquette for you.",
    },
    {
      role: "User",
      message: "That sounds great. I can't wait to start my trip!",
    },
    {
      role: "AI",
      message:
        "I'm sure you'll have a fantastic time. Do you have any other questions about your trip?",
    },
    { role: "User", message: "Not at the moment, thank you for your help!" },
  ];

  const zepClient = await memory.zepClientPromise;
  if (!zepClient) {
    throw new Error("ZepClient is not initialized");
  }

  // Add chat messages to memory
  for (const chatMessage of chatMessages) {
    let m: MemoryModel;
    if (chatMessage.role === "AI") {
      m = new MemoryModel({
        messages: [new Message({ role: "ai", content: chatMessage.message })],
      });
    } else {
      m = new MemoryModel({
        messages: [
          new Message({ role: "human", content: chatMessage.message }),
        ],
      });
    }
    await zepClient.memory.addMemory(zepConfig.sessionId, m);
  }

  // Wait for messages to be summarized, enriched, embedded and indexed.
  await sleep(10000);

  // Simple similarity search
  const query = "Can I drive red cars in France?";
  const retriever = new ZepRetriever({ ...zepConfig, topK: 3 });
  const docs = await retriever.invoke(query);
  console.log("Simple similarity search");
  console.log(JSON.stringify(docs, null, 2));

  // MMR reranking search
  const mmrRetriever = new ZepRetriever({
    ...zepConfig,
    topK: 3,
    searchType: "mmr",
    mmrLambda: 0.5,
  });
  const mmrDocs = await mmrRetriever.invoke(query);
  console.log("MMR reranking search");
  console.log(JSON.stringify(mmrDocs, null, 2));

  // Summary search with MMR reranking
  const mmrSummaryRetriever = new ZepRetriever({
    ...zepConfig,
    topK: 3,
    searchScope: "summary",
    searchType: "mmr",
    mmrLambda: 0.5,
  });
  const mmrSummaryDocs = await mmrSummaryRetriever.invoke(query);
  console.log("Summary search with MMR reranking");
  console.log(JSON.stringify(mmrSummaryDocs, null, 2));

  // Filtered search
  const filteredRetriever = new ZepRetriever({
    ...zepConfig,
    topK: 3,
    filter: {
      where: { jsonpath: '$.system.entities[*] ? (@.Label == "GPE")' },
    },
  });
  const filteredDocs = await filteredRetriever.invoke(query);
  console.log("Filtered search");
  console.log(JSON.stringify(filteredDocs, null, 2));
};
```

#### API Reference:

* [ZepRetriever](https://v02.api.js.langchain.com/classes/langchain_community_retrievers_zep.ZepRetriever.html) from `@langchain/community/retrievers/zep`
* [ZepMemory](https://v02.api.js.langchain.com/classes/langchain_community_memory_zep.ZepMemory.html) from `@langchain/community/memory/zep`
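The `searchType: "mmr"` option above asks Zep to rerank results with Maximal Marginal Relevance, trading query relevance against redundancy via `mmrLambda`. For intuition, here is a toy sketch of the algorithm itself; this is an illustration only, not Zep's actual implementation (Zep performs the reranking server-side):

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, v, i) => s + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Greedily pick k candidate indices, balancing relevance to the query against
// redundancy with already-selected items.
// lambda = 1 is pure relevance; lambda = 0 is pure diversity.
function mmrSelect(
  query: number[],
  candidates: number[][],
  k: number,
  lambda: number
): number[] {
  const selected: number[] = [];
  const remaining = candidates.map((_, i) => i);
  while (selected.length < k && remaining.length > 0) {
    let bestIdx = remaining[0];
    let bestScore = -Infinity;
    for (const i of remaining) {
      const relevance = cosine(query, candidates[i]);
      const redundancy = selected.length
        ? Math.max(...selected.map((j) => cosine(candidates[i], candidates[j])))
        : 0;
      const score = lambda * relevance - (1 - lambda) * redundancy;
      if (score > bestScore) {
        bestScore = score;
        bestIdx = i;
      }
    }
    selected.push(bestIdx);
    remaining.splice(remaining.indexOf(bestIdx), 1);
  }
  return selected;
}
```

With a low lambda, a near-duplicate of the top hit loses out to a less similar but more diverse candidate, which is exactly the behavior you want when the memory store contains many overlapping messages.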
https://js.langchain.com/v0.2/docs/integrations/retrievers/dria
Dria Retriever
==============

The [Dria](https://dria.co/profile) retriever allows an agent to perform a text-based search across a comprehensive knowledge hub.
Setup
-----

To use the Dria retriever, first install the Dria JS client:

```bash
# npm
npm install dria

# Yarn
yarn add dria

# pnpm
pnpm add dria
```

You need to provide two things to the retriever:

* **API Key**: you can get yours on your [profile page](https://dria.co/profile) when you create an account.
* **Contract ID**: accessible at the top of the page when viewing a knowledge, or in its URL. For example, the Bitcoin whitepaper is uploaded on Dria at [https://dria.co/knowledge/2KxNbEb040GKQ1DSDNDsA-Fsj\_BlQIEAlzBNuiapBR0](https://dria.co/knowledge/2KxNbEb040GKQ1DSDNDsA-Fsj_BlQIEAlzBNuiapBR0), so its contract ID is `2KxNbEb040GKQ1DSDNDsA-Fsj_BlQIEAlzBNuiapBR0`. The contract ID can be omitted during instantiation and set later via `dria.contractId = "your-contract"`.

The Dria retriever exposes the underlying [Dria client](https://npmjs.com/package/dria) as well; refer to the [Dria documentation](https://github.com/firstbatchxyz/dria-js-client?tab=readme-ov-file#usage) to learn more about the client.

Usage
-----

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm install dria @langchain/community

# Yarn
yarn add dria @langchain/community

# pnpm
pnpm add dria @langchain/community
```

```typescript
import { DriaRetriever } from "@langchain/community/retrievers/dria";

// contract of TypeScript Handbook v4.9 uploaded to Dria
// https://dria.co/knowledge/-B64DjhUtCwBdXSpsRytlRQCu-bie-vSTvTIT8Ap3g0
const contractId = "-B64DjhUtCwBdXSpsRytlRQCu-bie-vSTvTIT8Ap3g0";

const retriever = new DriaRetriever({
  contractId, // a knowledge to connect to
  apiKey: "DRIA_API_KEY", // if not provided, will check env for `DRIA_API_KEY`
  topK: 15, // optional: default value is 10
});

const docs = await retriever.invoke("What is a union type?");
console.log(docs);
```

#### API Reference:

* [DriaRetriever](https://v02.api.js.langchain.com/classes/langchain_community_retrievers_dria.DriaRetriever.html) from `@langchain/community/retrievers/dria`
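As noted in the setup section, the contract ID is the last path segment of a knowledge page URL on dria.co. A hypothetical convenience helper (not part of the Dria client) that extracts it:

```typescript
// Extract a Dria contract ID from a knowledge page URL, per the example in the
// setup section (the ID is the final path segment). Hypothetical helper only.
function contractIdFromUrl(knowledgeUrl: string): string {
  const segments = new URL(knowledgeUrl).pathname.split("/").filter(Boolean);
  const id = segments[segments.length - 1];
  if (!id) {
    throw new Error(`No contract ID found in URL: ${knowledgeUrl}`);
  }
  return id;
}
```

This lets users paste a knowledge URL directly instead of copying the ID by hand.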
https://js.langchain.com/v0.2/docs/integrations/retrievers/chaindesk-retriever
Chaindesk Retriever
===================

This example shows how to use the Chaindesk Retriever in a retrieval chain to retrieve documents from a Chaindesk.ai datastore.
Usage[​](#usage "Direct link to Usage")
---------------------------------------

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```

```typescript
import { ChaindeskRetriever } from "@langchain/community/retrievers/chaindesk";

const retriever = new ChaindeskRetriever({
  datastoreId: "DATASTORE_ID",
  apiKey: "CHAINDESK_API_KEY", // optional: needed for private datastores
  topK: 8, // optional: default value is 3
});

const docs = await retriever.invoke("hello");
console.log(docs);
```

#### API Reference:

* [ChaindeskRetriever](https://v02.api.js.langchain.com/classes/langchain_community_retrievers_chaindesk.ChaindeskRetriever.html) from `@langchain/community/retrievers/chaindesk`
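In a retrieval chain, the documents returned by `retriever.invoke` are typically formatted into a context string and folded into a prompt. The sketch below illustrates that step with a stubbed, synchronous retriever; `stubRetrieve`, its contents, and the prompt wording are purely illustrative and are not part of the Chaindesk API (the real `invoke` call returns a Promise of documents):

```typescript
// Sketch of the retrieval-chain step that follows retrieval: format the
// retrieved documents into a context string and build a prompt from it.
type Doc = { pageContent: string };

// Stand-in for a real retriever call; contents are illustrative only.
const stubRetrieve = (_query: string): Doc[] => [
  { pageContent: "Chaindesk datastores hold your indexed documents." },
  { pageContent: "Results are returned ranked by relevance." },
];

// Join page contents with blank lines, the usual "stuff documents" format.
const formatDocs = (docs: Doc[]): string =>
  docs.map((d) => d.pageContent).join("\n\n");

const buildPrompt = (question: string): string =>
  `Answer using only the context below.\n\n${formatDocs(
    stubRetrieve(question)
  )}\n\nQuestion: ${question}`;

console.log(buildPrompt("What is a datastore?"));
```

The resulting string would then be passed to a chat model or LLM as the final step of the chain.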
https://js.langchain.com/v0.2/docs/integrations/retrievers/chatgpt-retriever-plugin
ChatGPT Plugin Retriever
========================

danger This module has been deprecated and is no longer supported. The documentation below will not work in versions 0.2.0 or later.
This example shows how to use the ChatGPT Retriever Plugin within LangChain. To set up the ChatGPT Retriever Plugin, please follow the instructions [here](https://github.com/openai/chatgpt-retrieval-plugin).

Usage[​](#usage "Direct link to Usage")
---------------------------------------

```typescript
import { ChatGPTPluginRetriever } from "langchain/retrievers/remote";

const retriever = new ChatGPTPluginRetriever({
  url: "http://0.0.0.0:8000",
  auth: {
    bearer: "super-secret-jwt-token-with-at-least-32-characters-long",
  },
});

const docs = await retriever.invoke("hello world");
console.log(docs);
```
https://js.langchain.com/v0.2/docs/integrations/retrievers/hyde
HyDE Retriever
==============

This example shows how to use the HyDE Retriever, which implements Hypothetical Document Embeddings (HyDE) as described in [this paper](https://arxiv.org/abs/2212.10496).
At a high level, HyDE is an embedding technique that takes a query, generates a hypothetical answer to it, and then embeds that generated document and uses it for the final similarity search. In order to use HyDE, we therefore need to provide a base embedding model, as well as an LLM that can be used to generate those documents.

By default, the HyDE class comes with some default prompts to use (see the paper for more details on them), but we can also create our own, which should have a single input variable `{question}`.

Usage[​](#usage "Direct link to Usage")
---------------------------------------

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

```typescript
import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { HydeRetriever } from "langchain/retrievers/hyde";
import { Document } from "@langchain/core/documents";

const embeddings = new OpenAIEmbeddings();
const vectorStore = new MemoryVectorStore(embeddings);
const llm = new OpenAI();

const retriever = new HydeRetriever({
  vectorStore,
  llm,
  k: 1,
});

await vectorStore.addDocuments(
  [
    "My name is John.",
    "My name is Bob.",
    "My favourite food is pizza.",
    "My favourite food is pasta.",
  ].map((pageContent) => new Document({ pageContent }))
);

const results = await retriever.invoke("What is my favourite food?");
console.log(results);
/*
[ Document { pageContent: 'My favourite food is pasta.', metadata: {} } ]
*/
```

#### API Reference:

* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [HydeRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_hyde.HydeRetriever.html) from `langchain/retrievers/hyde`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
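The HyDE flow itself (generate a hypothetical answer, embed it, retrieve by similarity to that embedding) can also be illustrated with a standalone sketch. The stub "LLM", vocabulary, and bag-of-words "embedding" below are purely illustrative and are not the `HydeRetriever` implementation:

```typescript
// Standalone sketch of the HyDE idea: the query is first answered
// hypothetically, and the embedding of that hypothetical answer, not of the
// raw query, drives the similarity search.
type Doc = { pageContent: string };

// Stub "LLM": returns a canned hypothetical answer (illustrative only).
const generateHypotheticalAnswer = (question: string): string =>
  `A plausible answer to "${question}": my favourite food is pasta.`;

// Stub "embedding": bag-of-words vector over a tiny fixed vocabulary.
const vocab = ["pizza", "pasta", "name", "john", "bob", "food"];
const embed = (text: string): number[] =>
  vocab.map((w) => (text.toLowerCase().includes(w) ? 1 : 0));

const cosine = (a: number[], b: number[]): number => {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const normA = Math.sqrt(a.reduce((sum, x) => sum + x * x, 0));
  const normB = Math.sqrt(b.reduce((sum, x) => sum + x * x, 0));
  return normA && normB ? dot / (normA * normB) : 0;
};

const corpus: Doc[] = [
  { pageContent: "My name is John." },
  { pageContent: "My favourite food is pizza." },
  { pageContent: "My favourite food is pasta." },
];

const hydeRetrieve = (question: string, k = 1): Doc[] => {
  // Embed the generated hypothetical document, not the raw query.
  const hypotheticalVec = embed(generateHypotheticalAnswer(question));
  return [...corpus]
    .map((d) => ({ d, score: cosine(embed(d.pageContent), hypotheticalVec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.d);
};

console.log(hydeRetrieve("What is my favourite food?"));
```

Because the hypothetical answer mentions "pasta" and "food", the pasta document scores highest even though the query itself never mentions pasta.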
https://js.langchain.com/v0.2/docs/integrations/retrievers/metal-retriever
Metal Retriever
===============

This example shows how to use the Metal Retriever in a retrieval chain to retrieve documents from a Metal index.
Setup[​](#setup "Direct link to Setup")
---------------------------------------

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm i @getmetal/metal-sdk @langchain/community
# or
yarn add @getmetal/metal-sdk @langchain/community
# or
pnpm add @getmetal/metal-sdk @langchain/community
```

Usage[​](#usage "Direct link to Usage")
---------------------------------------

```typescript
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import Metal from "@getmetal/metal-sdk";
import { MetalRetriever } from "@langchain/community/retrievers/metal";

export const run = async () => {
  const MetalSDK = Metal;

  const client = new MetalSDK(
    process.env.METAL_API_KEY!,
    process.env.METAL_CLIENT_ID!,
    process.env.METAL_INDEX_ID
  );

  const retriever = new MetalRetriever({ client });

  const docs = await retriever.invoke("hello");
  console.log(docs);
};
```

#### API Reference:

* [MetalRetriever](https://v02.api.js.langchain.com/classes/langchain_community_retrievers_metal.MetalRetriever.html) from `@langchain/community/retrievers/metal`
https://js.langchain.com/v0.2/docs/integrations/retrievers/time-weighted-retriever
Time-Weighted Retriever
=======================

A Time-Weighted Retriever is a retriever that takes recency into account in addition to similarity.
The scoring algorithm is:

```typescript
let score = (1.0 - this.decayRate) ** hoursPassed + vectorRelevance;
```

Notably, `hoursPassed` above refers to the time since the object in the retriever was last accessed, not since it was created. This means that frequently accessed objects remain "fresh" and score higher.

`this.decayRate` is a configurable decimal number between 0 and 1. A lower number means that documents will be "remembered" for longer, while a higher number strongly weights more recently accessed documents.

Note that setting a decay rate of exactly 0 or 1 makes `hoursPassed` irrelevant and makes this retriever equivalent to a standard vector lookup.

Usage[​](#usage "Direct link to Usage")
---------------------------------------

This example shows how to initialize a `TimeWeightedVectorStoreRetriever` with a vector store. It is important to note that due to required metadata, all documents must be added to the backing vector store using the `addDocuments` method on the **retriever**, not the vector store itself.

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
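To get a feel for the formula, here is a small standalone computation (plain TypeScript, independent of LangChain; the helper name `timeWeightedScore` is illustrative) showing how the decay rate trades recency off against raw vector relevance:

```typescript
// Standalone illustration of the time-weighted score:
// score = (1 - decayRate) ** hoursPassed + vectorRelevance
const timeWeightedScore = (
  decayRate: number,
  hoursPassed: number,
  vectorRelevance: number
): number => (1.0 - decayRate) ** hoursPassed + vectorRelevance;

// With a low decay rate, a document untouched for a day keeps most of its
// recency bonus...
console.log(timeWeightedScore(0.01, 24, 0.5)); // ≈ 1.286

// ...while with a high decay rate the bonus vanishes within hours, leaving
// only the vector relevance.
console.log(timeWeightedScore(0.9, 24, 0.5)); // ≈ 0.5

// With decayRate = 0 the bonus is always exactly 1, so hoursPassed is
// irrelevant, as noted above.
console.log(timeWeightedScore(0, 1000, 0.5)); // 1.5
```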
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

```typescript
import { TimeWeightedVectorStoreRetriever } from "langchain/retrievers/time_weighted";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = new MemoryVectorStore(new OpenAIEmbeddings());

const retriever = new TimeWeightedVectorStoreRetriever({
  vectorStore,
  memoryStream: [],
  searchKwargs: 2,
});

const documents = [
  "My name is John.",
  "My name is Bob.",
  "My favourite food is pizza.",
  "My favourite food is pasta.",
  "My favourite food is sushi.",
].map((pageContent) => ({ pageContent, metadata: {} }));

// All documents must be added using this method on the retriever (not the vector store!)
// so that the correct access history metadata is populated
await retriever.addDocuments(documents);

const results1 = await retriever.invoke("What is my favourite food?");
console.log(results1);
/*
[ Document { pageContent: 'My favourite food is pasta.', metadata: {} } ]
*/

const results2 = await retriever.invoke("What is my favourite food?");
console.log(results2);
/*
[ Document { pageContent: 'My favourite food is pasta.', metadata: {} } ]
*/
```

#### API Reference:

* [TimeWeightedVectorStoreRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_time_weighted.TimeWeightedVectorStoreRetriever.html) from `langchain/retrievers/time_weighted`
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
https://js.langchain.com/v0.2/docs/integrations/retrievers/supabase-hybrid
Supabase Hybrid Search
======================

Langchain supports hybrid search with a Supabase Postgres database.
The hybrid search combines the Postgres `pgvector` extension (similarity search) and Full-Text Search (keyword search) to retrieve documents. You can add documents via the SupabaseVectorStore `addDocuments` method. `SupabaseHybridSearch` accepts an embeddings instance, a Supabase client, the number of results for similarity search, and the number of results for keyword search as parameters. The `getRelevantDocuments` function produces a list of documents with duplicates removed, sorted by relevance score.

Setup
-----

### Install the library

```bash
npm install -S @supabase/supabase-js
# or
yarn add @supabase/supabase-js
# or
pnpm add @supabase/supabase-js
```

### Create a table and search functions in your database

Run this in your database:

```sql
-- Enable the pgvector extension to work with embedding vectors
create extension vector;

-- Create a table to store your documents
create table documents (
  id bigserial primary key,
  content text, -- corresponds to Document.pageContent
  metadata jsonb, -- corresponds to Document.metadata
  embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);

-- Create a function to similarity search for documents
create function match_documents (
  query_embedding vector(1536),
  match_count int DEFAULT null,
  filter jsonb DEFAULT '{}'
) returns table (
  id bigint,
  content text,
  metadata jsonb,
  similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;

-- Create a function to keyword search for documents
create function kw_match_documents(query_text text, match_count int)
returns table (id bigint, content text, metadata jsonb, similarity real)
as $$
begin
  return query execute
    format('select id, content, metadata, ts_rank(to_tsvector(content), plainto_tsquery($1)) as similarity
      from documents
      where to_tsvector(content) @@ plainto_tsquery($1)
      order by similarity desc
      limit $2')
  using query_text, match_count;
end;
$$ language plpgsql;
```

Usage
-----

tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { createClient } from "@supabase/supabase-js";
import { SupabaseHybridSearch } from "@langchain/community/retrievers/supabase";

export const run = async () => {
  const client = createClient(
    process.env.SUPABASE_URL || "",
    process.env.SUPABASE_PRIVATE_KEY || ""
  );

  const embeddings = new OpenAIEmbeddings();

  const retriever = new SupabaseHybridSearch(embeddings, {
    client,
    // Below are the defaults, expecting that you set up your Supabase table
    // and functions according to the guide above. Please change if necessary.
    similarityK: 2,
    keywordK: 2,
    tableName: "documents",
    similarityQueryName: "match_documents",
    keywordQueryName: "kw_match_documents",
  });

  const results = await retriever.invoke("hello bye");

  console.log(results);
};
```

#### API Reference:

* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [SupabaseHybridSearch](https://v02.api.js.langchain.com/classes/langchain_community_retrievers_supabase.SupabaseHybridSearch.html) from `@langchain/community/retrievers/supabase`
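The dedup-and-sort behavior described above (combining similarity and keyword hits, removing duplicates, sorting by relevance) can be illustrated in plain TypeScript. This is a hedged sketch, independent of the actual `@langchain/community` implementation; the `ScoredDoc` type and `mergeHybridResults` helper are invented names for illustration:

```typescript
// Sketch: merge similarity-search and keyword-search hits the way a hybrid
// retriever conceptually does: deduplicate by id (keeping the higher score)
// and sort the combined list by relevance score, highest first.
interface ScoredDoc {
  id: number;
  content: string;
  similarity: number;
}

function mergeHybridResults(
  similarityHits: ScoredDoc[],
  keywordHits: ScoredDoc[]
): ScoredDoc[] {
  const byId = new Map<number, ScoredDoc>();
  for (const doc of [...similarityHits, ...keywordHits]) {
    const existing = byId.get(doc.id);
    if (!existing || doc.similarity > existing.similarity) {
      byId.set(doc.id, doc);
    }
  }
  return [...byId.values()].sort((a, b) => b.similarity - a.similarity);
}

const merged = mergeHybridResults(
  [
    { id: 1, content: "hello world", similarity: 0.91 },
    { id: 2, content: "goodbye", similarity: 0.55 },
  ],
  [
    { id: 1, content: "hello world", similarity: 0.72 }, // duplicate of id 1
    { id: 3, content: "hello there", similarity: 0.8 },
  ]
);

console.log(merged.map((d) => d.id)); // → [1, 3, 2]
```

The duplicate document (id 1) appears once with its higher score, and the final list is ordered by relevance, which matches what the prose above says `getRelevantDocuments` returns.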
Community: [Discord](https://discord.gg/cU2adEyC7w), [Twitter](https://twitter.com/LangChainAI)
GitHub: [Python](https://github.com/langchain-ai/langchain), [JS/TS](https://github.com/langchain-ai/langchainjs)
More: [Homepage](https://langchain.com), [Blog](https://blog.langchain.dev)

Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.2/docs/integrations/retrievers/vectorstore
Vector Store
============

Once you've created a [Vector Store](/v0.2/docs/concepts#vectorstores), the way to use it as a Retriever is very simple:

```typescript
vectorStore = ...
retriever = vectorStore.asRetriever()
```
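The one-liner above is pseudocode. To see the idea end to end, here is a self-contained toy sketch in TypeScript: a tiny in-memory store whose `asRetriever()` wraps the store in a query function. The `ToyVectorStore` class and `toyEmbed` function are invented for illustration and are not part of LangChain:

```typescript
// A toy in-memory vector store. asRetriever() returns a function that embeds
// the query and does cosine-similarity search over the stored documents.
type Vector = number[];

function cosineSimilarity(a: Vector, b: Vector): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: Vector) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

class ToyVectorStore {
  private entries: { text: string; vector: Vector }[] = [];

  constructor(private embed: (text: string) => Vector) {}

  addDocument(text: string): void {
    this.entries.push({ text, vector: this.embed(text) });
  }

  // Mirrors the vectorStore.asRetriever() pattern: the store is wrapped in a
  // retrieval function that takes a query string and returns the top-k texts.
  asRetriever(k = 1): (query: string) => string[] {
    return (query: string) => {
      const qv = this.embed(query);
      return this.entries
        .map((e) => ({ text: e.text, score: cosineSimilarity(e.vector, qv) }))
        .sort((a, b) => b.score - a.score)
        .slice(0, k)
        .map((e) => e.text);
    };
  }
}

// A trivial stand-in for a real embedding model: [vowel count, consonant count].
const toyEmbed = (text: string): Vector => {
  const letters = text.toLowerCase().replace(/[^a-z]/g, "");
  const vowels = (letters.match(/[aeiou]/g) ?? []).length;
  return [vowels, letters.length - vowels];
};

const store = new ToyVectorStore(toyEmbed);
store.addDocument("aeiou"); // all vowels
store.addDocument("rhythm"); // no vowels
const retriever = store.asRetriever(1);

console.log(retriever("audio")); // → ["aeiou"]
```

A real LangChain retriever has a richer interface (`invoke`, `getRelevantDocuments`), but the shape is the same: the store owns the data, and `asRetriever()` fixes the search parameters and exposes a query-in, documents-out function.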
https://js.langchain.com/v0.1/docs/modules/agents/quick_start/
Quick Start
===========

To best understand the agent framework, let’s build an agent that has two tools: one to look things up online, and one to look up specific data that we’ve loaded into an index. This will assume knowledge of [LLMs](/v0.1/docs/modules/model_io/) and [retrieval](/v0.1/docs/modules/data_connection/), so if you haven’t already explored those sections, it is recommended that you do so.

Setup: LangSmith
----------------

By definition, agents take a self-determined, input-dependent sequence of steps before returning a user-facing output. This makes debugging these systems particularly tricky, and observability particularly important. [LangSmith](https://smith.langchain.com) is especially useful for such cases.
When building with LangChain, all steps will automatically be traced in LangSmith. To set up LangSmith, we just need to set the following environment variables:

```bash
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="<your-api-key>"
```

Define tools
------------

We first need to create the tools we want to use. We will use two tools: [Tavily](https://app.tavily.com) (to search online) and then a retriever over a local index we will create.

### [Tavily](https://app.tavily.com)

We have a built-in tool in LangChain to easily use the Tavily search engine as a tool. Note that this requires a Tavily API key set as an environment variable named `TAVILY_API_KEY` - they have a free tier, but if you don’t have one or don’t want to create one, you can always ignore this step.

```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const searchTool = new TavilySearchResults();

const toolResult = await searchTool.invoke("what is the weather in SF?");

console.log(toolResult);

/*
  [{"title":"Weather in December 2023 in San Francisco, California, USA","url":"https://www.timeanddate.com/weather/@5391959/historic?month=12&year=2023","content":"Currently: 52 °F. Broken clouds. (Weather station: San Francisco International Airport, USA). See more current weather Select month: December 2023 Weather in San Francisco — Graph °F Sun, Dec 17 Lo:55 6 pm Hi:57 4 Mon, Dec 18 Lo:54 12 am Hi:55 7 Lo:54 6 am Hi:55 10 Lo:57 12 pm Hi:64 9 Lo:63 6 pm Hi:64 14 Tue, Dec 19 Lo:61","score":0.96006},...]
*/
```

### Retriever

We will also create a retriever over some data of our own. For a deeper explanation of each step here, see [this section](/v0.1/docs/modules/data_connection/).
```typescript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const loader = new CheerioWebBaseLoader(
  "https://docs.smith.langchain.com/user_guide"
);
const rawDocs = await loader.load();

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const docs = await splitter.splitDocuments(rawDocs);

const vectorstore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);
const retriever = vectorstore.asRetriever();

const retrieverResult = await retriever.getRelevantDocuments(
  "how to upload a dataset"
);
console.log(retrieverResult[0]);

/*
  Document {
    pageContent: "your application progresses through the beta testing phase, it's essential to continue collecting data to refine and improve its performance. LangSmith enables you to add runs as examples to datasets (from both the project page and within an annotation queue), expanding your test coverage on real-world scenarios. This is a key benefit in having your logging system and your evaluation/testing system in the same platform.Production​Closely inspecting key data points, growing benchmarking datasets, annotating traces, and drilling down into important data in trace view are workflows you’ll also want to do once your app hits production. However, especially at the production stage, it’s crucial to get a high-level overview of application performance with respect to latency, cost, and feedback scores. This ensures that it's delivering desirable results at scale.Monitoring and A/B Testing​LangSmith provides monitoring charts that allow you to track key metrics over time. You can expand to",
    metadata: {
      source: 'https://docs.smith.langchain.com/user_guide',
      loc: { lines: [Object] }
    }
  }
*/
```

Now that we have populated the index that we will be doing retrieval over, we can easily turn it into a tool (the format needed for an agent to properly use it):

```typescript
import { createRetrieverTool } from "langchain/tools/retriever";

const retrieverTool = createRetrieverTool(retriever, {
  name: "langsmith_search",
  description:
    "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
});
```

### Tools

Now that we have created both, we can create a list of tools that we will use downstream:

```typescript
const tools = [searchTool, retrieverTool];
```

Create the agent
----------------

Now that we have defined the tools, we can create the agent. We will be using an OpenAI Functions agent - for more information on this type of agent, as well as other options, see [this guide](/v0.1/docs/modules/agents/agent_types/).

First, we choose the LLM we want to be guiding the agent.

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
});
```

Next, we choose the prompt we want to use to guide the agent:

```typescript
import type { ChatPromptTemplate } from "@langchain/core/prompts";
import { pull } from "langchain/hub";

// Get the prompt to use - you can modify this!
// If you want to see the prompt in full, you can at:
// https://smith.langchain.com/hub/hwchase17/openai-functions-agent
const prompt = await pull<ChatPromptTemplate>(
  "hwchase17/openai-functions-agent"
);
```

Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the agent does not execute those actions - that is done by the AgentExecutor (next step).
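The division of labor just described (the agent only *chooses* actions; the AgentExecutor *runs* them in a loop) can be sketched in plain TypeScript. This is an illustrative toy, not the real `AgentExecutor`; all names below are invented:

```typescript
// A toy agent/executor loop: the "agent" is a pure decision function that,
// given the input and the observations gathered so far, either picks a tool
// to call or finishes. The "executor" owns the loop and actually runs tools.
type AgentStep =
  | { type: "action"; tool: string; toolInput: string }
  | { type: "finish"; output: string };

type Tool = { name: string; invoke: (input: string) => string };

function runExecutor(
  agent: (input: string, observations: string[]) => AgentStep,
  tools: Tool[],
  input: string,
  maxIterations = 5
): string {
  const observations: string[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const step = agent(input, observations); // agent only decides
    if (step.type === "finish") return step.output;
    const tool = tools.find((t) => t.name === step.tool);
    if (!tool) throw new Error(`Unknown tool: ${step.tool}`);
    observations.push(tool.invoke(step.toolInput)); // executor runs the tool
  }
  return "Agent stopped due to max iterations.";
}

// A hard-coded "agent" that searches once, then answers from the observation.
const toyAgent = (input: string, observations: string[]): AgentStep =>
  observations.length === 0
    ? { type: "action", tool: "search", toolInput: input }
    : { type: "finish", output: `Answer based on: ${observations[0]}` };

const toySearchTool: Tool = {
  name: "search",
  invoke: (q) => `results for "${q}"`,
};

const answer = runExecutor(toyAgent, [toySearchTool], "weather in SF");
console.log(answer); // → Answer based on: results for "weather in SF"
```

The real executor additionally handles tool-call parsing, tracing, and error recovery, but the control flow (decide, execute, observe, repeat) is the same.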
For more information about how to think about these components, see our [conceptual guide](/v0.1/docs/modules/agents/concepts/).

```typescript
import { createOpenAIFunctionsAgent } from "langchain/agents";

const agent = await createOpenAIFunctionsAgent({
  llm,
  tools,
  prompt,
});
```

Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). For more information about how to think about these components, see our [conceptual guide](/v0.1/docs/modules/agents/concepts/).

```typescript
import { AgentExecutor } from "langchain/agents";

const agentExecutor = new AgentExecutor({
  agent,
  tools,
});
```

Run the agent
-------------

We can now run the agent on a few queries! Note that for now, these are all stateless queries (it won’t remember previous interactions).

```typescript
const result1 = await agentExecutor.invoke({
  input: "hi!",
});

console.log(result1);

/*
  [chain/start] [1:chain:AgentExecutor] Entering Chain run with input: {
    "input": "hi!"
  }
  [chain/end] [1:chain:AgentExecutor] [1.36s] Exiting Chain run with output: {
    "output": "Hello! How can I assist you today?"
  }
  { input: 'hi!', output: 'Hello! How can I assist you today?' }
*/
```

```typescript
const result2 = await agentExecutor.invoke({
  input: "how can langsmith help with testing?",
});

console.log(result2);

/*
  [chain/start] [1:chain:AgentExecutor] Entering Chain run with input: {
    "input": "how can langsmith help with testing?"
  }
  [chain/end] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 7:parser:OpenAIFunctionsAgentOutputParser] [66ms] Exiting Chain run with output: {
    "tool": "langsmith_search",
    "toolInput": { "query": "how can LangSmith help with testing?" },
    "log": "Invoking \"langsmith_search\" with {\"query\":\"how can LangSmith help with testing?\"}\n",
    "messageLog": [
      {
        "lc": 1,
        "type": "constructor",
        "id": ["langchain_core", "messages", "AIMessage"],
        "kwargs": {
          "content": "",
          "additional_kwargs": {
            "function_call": {
              "name": "langsmith_search",
              "arguments": "{\"query\":\"how can LangSmith help with testing?\"}"
            }
          }
        }
      }
    ]
  }
  [tool/start] [1:chain:AgentExecutor > 8:tool:langsmith_search] Entering Tool run with input: "{"query":"how can LangSmith help with testing?"}"
  [retriever/start] [1:chain:AgentExecutor > 8:tool:langsmith_search > 9:retriever:VectorStoreRetriever] Entering Retriever run with input: {
    "query": "how can LangSmith help with testing?"
  }
  [retriever/end] [1:chain:AgentExecutor > 8:tool:langsmith_search > 9:retriever:VectorStoreRetriever] [294ms] Exiting Retriever run with output: {
    "documents": [
      {
        "pageContent": "You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be assigned string tags or key-value metadata, allowing you to attach correlation ids or AB test variants, and filter runs accordingly.We’ve also made it possible to associate feedback programmatically with runs. This means that if your application has a thumbs up/down button on it, you can use that to log feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the",
        "metadata": { "source": "https://docs.smith.langchain.com/user_guide", "loc": { "lines": { "from": 11, "to": 11 } } }
      },
      {
        "pageContent": "the time that we do… it’s so helpful. We can use LangSmith to debug:An unexpected end resultWhy an agent is loopingWhy a chain was slower than expectedHow many tokens an agent usedDebugging​Debugging LLMs, chains, and agents can be tough. LangSmith helps solve the following pain points:What was the exact input to the LLM?​LLM calls are often tricky and non-deterministic. The inputs/outputs may seem straightforward, given they are technically string → string (or chat messages → chat message), but this can be misleading as the input string is usually constructed from a combination of user input and auxiliary functions.Most inputs to an LLM call are a combination of some type of fixed template along with input variables. These input variables could come directly from user input or from an auxiliary function (like retrieval). By the time these input variables go into the LLM they will have been converted to a string format, but often times they are not naturally represented as a string",
        "metadata": { "source": "https://docs.smith.langchain.com/user_guide", "loc": { "lines": { "from": 3, "to": 3 } } }
      },
      {
        "pageContent": "inputs, and see what happens. At some point though, our application is performing\nwell and we want to be more rigorous about testing changes. We can use a dataset\nthat we’ve constructed along the way (see above). Alternatively, we could spend some\ntime constructing a small dataset by hand. For these situations, LangSmith simplifies",
        "metadata": { "source": "https://docs.smith.langchain.com/user_guide", "loc": { "lines": { "from": 4, "to": 7 } } }
      },
      {
        "pageContent": "feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the debug mode approach.We’ve provided several examples in the LangSmith documentation for extracting insights from logged runs. In addition to guiding you on performing this task yourself, we also provide examples of integrating with third parties for this purpose. We're eager to expand this area in the coming months! If you have ideas for either -- an open-source way to evaluate, or are building a company that wants to do analytics over these runs, please reach out.Exporting datasets​LangSmith makes it easy to curate datasets. However, these aren’t just useful inside LangSmith; they can be exported for use in other contexts. Notable applications include exporting for use in OpenAI Evals or fine-tuning, such as with FireworksAI.To set up tracing in Deno, web browsers, or other runtime",
        "metadata": { "source": "https://docs.smith.langchain.com/user_guide", "loc": { "lines": { "from": 11, "to": 11 } } }
      }
    ]
  }
  [chain/start] [1:chain:AgentExecutor > 10:chain:RunnableAgent] Entering Chain run with input: {
    "input": "how can langsmith help with testing?",
    "steps": [
      {
        "action": {
          "tool": "langsmith_search",
          "toolInput": { "query": "how can LangSmith help with testing?" },
          "log": "Invoking \"langsmith_search\" with {\"query\":\"how can LangSmith help with testing?\"}\n",
          "messageLog": [
            {
              "lc": 1,
              "type": "constructor",
              "id": ["langchain_core", "messages", "AIMessage"],
              "kwargs": {
                "content": "",
                "additional_kwargs": {
                  "function_call": {
                    "name": "langsmith_search",
                    "arguments": "{\"query\":\"how can LangSmith help with testing?\"}"
                  }
                }
              }
            }
          ]
        },
        "observation": "You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be assigned string tags or key-value metadata, allowing you to attach correlation ids or AB test variants, and filter runs accordingly.We’ve also made it possible to associate feedback programmatically with runs. This means that if your application has a thumbs up/down button on it, you can use that to log feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the\n\nthe time that we do… it’s so helpful. We can use LangSmith to debug:An unexpected end resultWhy an agent is loopingWhy a chain was slower than expectedHow many tokens an agent usedDebugging​Debugging LLMs, chains, and agents can be tough. LangSmith helps solve the following pain points:What was the exact input to the LLM?​LLM calls are often tricky and non-deterministic. The inputs/outputs may seem straightforward, given they are technically string → string (or chat messages → chat message), but this can be misleading as the input string is usually constructed from a combination of user input and auxiliary functions.Most inputs to an LLM call are a combination of some type of fixed template along with input variables. These input variables could come directly from user input or from an auxiliary function (like retrieval). By the time these input variables go into the LLM they will have been converted to a string format, but often times they are not naturally represented as a string\n\ninputs, and see what happens. At some point though, our application is performing\nwell and we want to be more rigorous about testing changes. We can use a dataset\nthat we’ve constructed along the way (see above). Alternatively, we could spend some\ntime constructing a small dataset by hand. For these situations, LangSmith simplifies\n\nfeedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the debug mode approach.We’ve provided several examples in the LangSmith documentation for extracting insights from logged runs. In addition to guiding you on performing this task yourself, we also provide examples of integrating with third parties for this purpose. We're eager to expand this area in the coming months! If you have ideas for either -- an open-source way to evaluate, or are building a company that wants to do analytics over these runs, please reach out.Exporting datasets​LangSmith makes it easy to curate datasets. However, these aren’t just useful inside LangSmith; they can be exported for use in other contexts. Notable applications include exporting for use in OpenAI Evals or fine-tuning, such as with FireworksAI.To set up tracing in Deno, web browsers, or other runtime"
      }
    ]
  }
  [chain/end] [1:chain:AgentExecutor] [5.83s] Exiting Chain run with output: {
    "input": "how can langsmith help with testing?",
    "output": "LangSmith can help with testing in several ways:\n\n1. Debugging: LangSmith can be used to debug unexpected end results, agent loops, slow chains, and token usage. It helps in pinpointing underperforming data points and tracking performance over time.\n\n2. Monitoring: LangSmith can monitor applications by logging all traces, visualizing latency and token usage statistics, and troubleshooting specific issues as they arise. It also allows for associating feedback programmatically with runs, which can be used to track performance over time.\n\n3. Exporting Datasets: LangSmith makes it easy to curate datasets, which can be exported for use in other contexts such as OpenAI Evals or fine-tuning with FireworksAI.\n\nOverall, LangSmith simplifies the process of testing changes, constructing datasets, and extracting insights from logged runs, making it a valuable tool for testing and evaluation."
  }
  {
    input: 'how can langsmith help with testing?',
    output: 'LangSmith can help with testing in several ways:\n' +
      '\n' +
      '1. Initial Test Set: LangSmith allows developers to create datasets of inputs and reference outputs to run tests on their LLM applications. These test cases can be uploaded in bulk, created on the fly, or exported from application traces.\n' +
      '\n' +
      "2. Comparison View: When making changes to your applications, LangSmith provides a comparison view to see whether you've regressed with respect to your initial test cases. This is helpful for evaluating changes in prompts, retrieval strategies, or model choices.\n" +
      '\n' +
      '3. Monitoring and A/B Testing: LangSmith provides monitoring charts to track key metrics over time and allows for A/B testing changes in prompt, model, or retrieval strategy.\n' +
      '\n' +
      '4. Debugging: LangSmith offers tracing and debugging information at each step of an LLM sequence, making it easier to identify and root-cause issues when things go wrong.\n' +
      '\n' +
      '5. Beta Testing and Production: LangSmith enables the addition of runs as examples to datasets, expanding test coverage on real-world scenarios. It also provides monitoring for application performance with respect to latency, cost, and feedback scores at the production stage.\n' +
      '\n' +
      'Overall, LangSmith provides comprehensive testing and monitoring capabilities for LLM applications.'
  }
*/
```

Adding in memory
----------------

As mentioned earlier, this agent is stateless.
This means it does not remember previous interactions. To give it memory, we need to pass in previous `chat_history`. **Note:** the input variable below needs to be called `chat_history` because of the prompt we are using. If we use a different prompt, we could change the variable name.

```typescript
const result3 = await agentExecutor.invoke({
  input: "hi! my name is cob.",
  chat_history: [],
});

console.log(result3);

/*
  {
    input: 'hi! my name is cob.',
    chat_history: [],
    output: "Hello Cob! It's nice to meet you. How can I assist you today?"
  }
*/
```

```typescript
import { HumanMessage, AIMessage } from "@langchain/core/messages";

const result4 = await agentExecutor.invoke({
  input: "what's my name?",
  chat_history: [
    new HumanMessage("hi! my name is cob."),
    new AIMessage("Hello Cob! How can I assist you today?"),
  ],
});

console.log(result4);

/*
  {
    input: "what's my name?",
    chat_history: [
      HumanMessage { content: 'hi! my name is cob.', additional_kwargs: {} },
      AIMessage { content: 'Hello Cob! How can I assist you today?', additional_kwargs: {} }
    ],
    output: 'Your name is Cob. How can I assist you today, Cob?'
  }
*/
```

If we want to keep track of these messages automatically, we can wrap this in a RunnableWithMessageHistory. For more information on how to use this, see [this guide](/v0.1/docs/expression_language/how_to/message_history/).

```typescript
import { ChatMessageHistory } from "langchain/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const messageHistory = new ChatMessageHistory();

const agentWithChatHistory = new RunnableWithMessageHistory({
  runnable: agentExecutor,
  // This is needed because in most real world scenarios, a session id is needed per user.
  // It isn't really used here because we are using a simple in memory ChatMessageHistory.
  getMessageHistory: (_sessionId) => messageHistory,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});

const result5 = await agentWithChatHistory.invoke(
  {
    input: "hi! i'm cob",
  },
  {
    configurable: {
      sessionId: "foo",
    },
  }
);

console.log(result5);

/*
  {
    input: "hi! i'm cob",
    chat_history: [
      HumanMessage { content: "hi! i'm cob", additional_kwargs: {} },
      AIMessage { content: 'Hello Cob! How can I assist you today?', additional_kwargs: {} }
    ],
    output: 'Hello Cob! How can I assist you today?'
  }
*/

const result6 = await agentWithChatHistory.invoke(
  {
    input: "what's my name?",
  },
  {
    configurable: {
      sessionId: "foo",
    },
  }
);

console.log(result6);

/*
  {
    input: "what's my name?",
    chat_history: [
      HumanMessage { content: "hi! i'm cob", additional_kwargs: {} },
      AIMessage { content: 'Hello Cob! How can I assist you today?', additional_kwargs: {} },
      HumanMessage { content: "what's my name?", additional_kwargs: {} },
      AIMessage { content: 'Your name is Cob. How can I assist you today, Cob?', additional_kwargs: {} }
    ],
    output: 'Your name is Cob. How can I assist you today, Cob?'
  }
*/
```

Conclusion[​](#conclusion "Direct link to Conclusion")
------------------------------------------------------

That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lots to learn! Head back to the [main agent page](/v0.1/docs/modules/agents/) to find more resources on conceptual guides, different types of agents, how to create custom tools, and more!
Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/agents/agent_types/tool_calling/
Tool calling agent
==================

info Tool calling is only available with [supported
models](/v0.1/docs/integrations/chat/). [Tool calling](/v0.1/docs/modules/model_io/chat/function_calling/) allows a model to respond to a given prompt by generating output that matches a user-defined schema. By supplying the model with a schema that matches up with a [LangChain tool’s](/v0.1/docs/modules/agents/tools/) signature, along with a name and description of what the tool does, we can get the model to reliably generate valid input. We can take advantage of this structured output, combined with the fact that [tool calling chat models](/v0.1/docs/integrations/chat/) can choose which tool to call in a given situation, to create an agent that repeatedly calls tools and receives results until a query is resolved. This is a more generalized version of the [OpenAI tools agent](/v0.1/docs/modules/agents/agent_types/openai_tools_agent/), which was designed for OpenAI’s specific style of tool calling. It uses LangChain’s [ToolCall](https://api.js.langchain.com/types/langchain_core_messages_tool.ToolCall.html) interface to support a wider range of provider implementations, such as [Anthropic](/v0.1/docs/integrations/chat/anthropic/), [Google Gemini](/v0.1/docs/integrations/chat/google_vertex_ai/), and [Mistral](/v0.1/docs/integrations/chat/mistral/) in addition to [OpenAI](/v0.1/docs/integrations/chat/openai/). Setup[​](#setup "Direct link to Setup") --------------------------------------- Most models that support tool calling can be used in this agent. See [this list](/v0.1/docs/integrations/chat/) for the most up-to-date information. This demo also uses [Tavily](https://app.tavily.com), but you can also swap in another [built in tool](/v0.1/docs/integrations/platforms/). You’ll need to sign up for an API key and set it as `process.env.TAVILY_API_KEY`. 
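Conceptually, each generated tool call is just a tool name plus structured arguments matching the tool's schema. As a rough, framework-free sketch of the dispatch step at the heart of the loop (the `ToolCall` shape mirrors LangChain's interface, but `toolRegistry` and `dispatchToolCall` are illustrative names, not LangChain APIs):

```typescript
// A framework-free sketch of dispatching a model-generated tool call.
// ToolCall mirrors the { name, args } shape of LangChain's ToolCall interface;
// the registry entries are hypothetical stand-ins for real tools.

interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

// Plain functions standing in for the tools the model can choose between.
const toolRegistry: Record<string, (args: Record<string, unknown>) => string> = {
  tavily_search: (args) => `search results for "${args.query}"`,
  calculator: (args) => String(Number(args.a) + Number(args.b)),
};

function dispatchToolCall(call: ToolCall): string {
  const tool = toolRegistry[call.name];
  if (!tool) {
    // A runtime feeds an error back so the model can retry with a valid tool.
    return `Error: no tool named "${call.name}"`;
  }
  return tool(call.args);
}
```

The agent executor repeats this dispatch, appending each result to the scratchpad, until the model responds without a tool call.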
### Pick your chat model:

* Anthropic
* OpenAI
* MistralAI
* FireworksAI

#### Anthropic

Install dependencies (see [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages)):

```bash
npm i @langchain/anthropic @langchain/community
# or: yarn add @langchain/anthropic @langchain/community
# or: pnpm add @langchain/anthropic @langchain/community
```

Add environment variables:

```bash
ANTHROPIC_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```

#### OpenAI

Install dependencies:

```bash
npm i @langchain/openai @langchain/community
# or: yarn add @langchain/openai @langchain/community
# or: pnpm add @langchain/openai @langchain/community
```

Add environment variables:

```bash
OPENAI_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-0125",
  temperature: 0,
});
```

#### MistralAI

Install dependencies:

```bash
npm i @langchain/mistralai @langchain/community
# or: yarn add @langchain/mistralai @langchain/community
# or: pnpm add @langchain/mistralai @langchain/community
```

Add environment variables:

```bash
MISTRAL_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const llm = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
```

#### FireworksAI

Install dependencies:
```bash
npm i @langchain/community
# or: yarn add @langchain/community
# or: pnpm add @langchain/community
```

Add environment variables:

```bash
FIREWORKS_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const llm = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```

Initialize Tools[​](#initialize-tools "Direct link to Initialize Tools")
------------------------------------------------------------------------

We will first create a tool that can search the web:

```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

// Define the tools the agent will have access to.
const tools = [new TavilySearchResults({ maxResults: 1 })];
```

Create Agent[​](#create-agent "Direct link to Create Agent")
------------------------------------------------------------

Next, let's initialize our tool calling agent:

```typescript
import { createToolCallingAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Prompt template must have "input" and "agent_scratchpad" input variables.
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent = await createToolCallingAgent({
  llm,
  tools,
  prompt,
});
```

Run Agent[​](#run-agent "Direct link to Run Agent")
---------------------------------------------------

Now, let's initialize the executor that will run our agent and invoke it!

```typescript
import { AgentExecutor } from "langchain/agents";

const agentExecutor = new AgentExecutor({
  agent,
  tools,
});

const result = await agentExecutor.invoke({
  input: "what is LangChain?",
});

console.log(result);

/*
  {
    input: "what is LangChain?",
    output: "LangChain is an open-source framework for building applications with large language models (LLMs). S"... 983 more characters
  }
*/
```

tip [LangSmith trace](https://smith.langchain.com/public/2f956a2e-0820-47c4-a798-c83f024e5ca1/r)

Using with chat history[​](#using-with-chat-history "Direct link to Using with chat history")
---------------------------------------------------------------------------------------------

This type of agent can optionally take chat messages representing previous conversation turns. It can use that previous history to respond conversationally. For more details, see [this section of the agent quickstart](/v0.1/docs/modules/agents/quick_start/#adding-in-memory).

```typescript
import { AIMessage, HumanMessage } from "@langchain/core/messages";

const result2 = await agentExecutor.invoke({
  input: "what's my name?",
  chat_history: [
    new HumanMessage("hi! my name is cob"),
    new AIMessage("Hello Cob! How can I assist you today?"),
  ],
});

console.log(result2);

/*
  {
    input: "what's my name?",
    chat_history: [
      HumanMessage {
        lc_serializable: true,
        lc_kwargs: { content: "hi! my name is cob", additional_kwargs: {}, response_metadata: {} },
        lc_namespace: [ "langchain_core", "messages" ],
        content: "hi! my name is cob",
        name: undefined,
        additional_kwargs: {},
        response_metadata: {}
      },
      AIMessage {
        lc_serializable: true,
        lc_kwargs: { content: "Hello Cob! How can I assist you today?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} },
        lc_namespace: [ "langchain_core", "messages" ],
        content: "Hello Cob! How can I assist you today?",
        name: undefined,
        additional_kwargs: {},
        response_metadata: {},
        tool_calls: [],
        invalid_tool_calls: []
      }
    ],
    output: "You said your name is Cob."
  }
*/
```

tip [LangSmith trace](https://smith.langchain.com/public/e21ececb-2e60-49e5-9f06-a91b0fb11fb8/r)
https://js.langchain.com/v0.2/docs/integrations/retrievers/vespa-retriever
Vespa Retriever
===============

This shows how to use Vespa.ai as a LangChain retriever. Vespa.ai is a platform for highly efficient structured text and vector search.
Please refer to [Vespa.ai](https://vespa.ai) for more information. The following sets up a retriever that fetches results from Vespa's documentation search:

```typescript
import { VespaRetriever } from "@langchain/community/retrievers/vespa";

export const run = async () => {
  const url = "https://doc-search.vespa.oath.cloud";
  const query_body = {
    yql: "select content from paragraph where userQuery()",
    hits: 5,
    ranking: "documentation",
    locale: "en-us",
  };
  const content_field = "content";

  const retriever = new VespaRetriever({
    url,
    auth: false,
    query_body,
    content_field,
  });

  const result = await retriever.invoke("what is vespa?");
  console.log(result);
};
```

#### API Reference:

* [VespaRetriever](https://v02.api.js.langchain.com/classes/langchain_community_retrievers_vespa.VespaRetriever.html) from `@langchain/community/retrievers/vespa`

Here, up to 5 results are retrieved from the `content` field in the `paragraph` document type, using `documentation` as the ranking method. The `userQuery()` is replaced with the actual query passed from LangChain. Please refer to the [pyvespa documentation](https://pyvespa.readthedocs.io/en/latest/getting-started-pyvespa.html#Query) for more information.

The URL is the endpoint of the Vespa application. You can connect to any Vespa endpoint, either a remote service or a local instance using Docker. However, most Vespa Cloud instances are protected with mTLS. If this is your case, you can, for instance, set up a [CloudFlare Worker](https://cloud.vespa.ai/en/security/cloudflare-workers) that contains the necessary credentials to connect to the instance.

Now you can return the results and continue using them in LangChain.
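Under the hood, a retriever like this essentially merges the static `query_body` with the user's query and sends it to the application's `/search/` endpoint, where `userQuery()` picks up the `query` parameter. A rough sketch of that request assembly (the `buildSearchRequest` helper is illustrative, not the actual `VespaRetriever` implementation):

```typescript
// Illustrative sketch of combining a static query_body with the user's query
// to form a request for Vespa's /search/ endpoint. Field names follow the
// example above; this is not the real VespaRetriever code.

interface VespaQueryBody {
  yql: string;
  hits: number;
  ranking: string;
  locale: string;
}

function buildSearchRequest(
  endpoint: string,
  queryBody: VespaQueryBody,
  userQuery: string
): { url: string; body: VespaQueryBody & { query: string } } {
  return {
    // The search handler lives under /search/ on the application endpoint.
    url: `${endpoint}/search/`,
    // The `query` field is what userQuery() in the YQL expands to.
    body: { ...queryBody, query: userQuery },
  };
}
```

The resulting hits are then mapped onto LangChain `Document` objects using the configured `content_field`.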
https://js.langchain.com/v0.1/docs/modules/agents/concepts/
Concepts
========

The core idea of agents is to use a language model to choose a sequence of actions to take. In chains, a sequence of actions is hardcoded (in code). In agents, a language model is used as a reasoning engine to determine which actions to take and in which order. There are several key components here:

Schema[​](#schema "Direct link to Schema")
------------------------------------------

LangChain has several abstractions to make working with agents easy.

### AgentAction[​](#agentaction "Direct link to AgentAction")

This represents the action an agent should take. It has a `tool` property (which is the name of the tool that should be invoked) and a `toolInput` property (the input to that tool).

### AgentFinish[​](#agentfinish "Direct link to AgentFinish")

This represents the final result from an agent, when it is ready to return to the user.
It contains a `returnValues` key-value mapping, which contains the final agent output. Usually, this contains an `output` key containing a string that is the agent's response.

### Intermediate Steps[​](#intermediate-steps "Direct link to Intermediate Steps")

These represent previous agent actions and corresponding outputs from this CURRENT agent run. These are important to pass to future iterations so the agent knows what work it has already done.

Agent[​](#agent "Direct link to Agent")
---------------------------------------

This is the chain responsible for deciding what step to take next. This is usually powered by a language model, a prompt, and an output parser. Different agents have different prompting styles for reasoning, different ways of encoding inputs, and different ways of parsing the output. For a full list of built-in agents see [agent types](/v0.1/docs/modules/agents/agent_types/). You can also **build custom agents**, should you need further control.

### Agent Inputs[​](#agent-inputs "Direct link to Agent Inputs")

The inputs to an agent are an object. There is only one required key: `steps`, which corresponds to `Intermediate Steps` as described above. Generally, the PromptTemplate takes care of transforming these pairs into a format that can best be passed into the LLM.

### Agent Outputs[​](#agent-outputs "Direct link to Agent Outputs")

The output is the next action(s) to take or the final response to send to the user (`AgentAction`s or `AgentFinish`). Concretely, this can be typed as `AgentAction | AgentAction[] | AgentFinish`. The output parser is responsible for taking the raw LLM output and transforming it into one of these three types.

AgentExecutor[​](#agentexecutor "Direct link to AgentExecutor")
---------------------------------------------------------------

The agent executor is the runtime for an agent. This is what actually calls the agent, executes the actions it chooses, passes the action outputs back to the agent, and repeats.
In pseudocode, this looks roughly like:

```typescript
let nextAction = agent.getAction(...);
while (!isAgentFinish(nextAction)) {
  const observation = run(nextAction);
  nextAction = agent.getAction(..., nextAction, observation);
}
return nextAction;
```

While this may seem simple, there are several complexities this runtime handles for you, including:

1. Handling cases where the agent selects a non-existent tool
2. Handling cases where the tool errors
3. Handling cases where the agent produces output that cannot be parsed into a tool invocation
4. Logging and observability at all levels (agent decisions, tool calls) to stdout and/or to [LangSmith](https://smith.langchain.com)

Tools[​](#tools "Direct link to Tools")
---------------------------------------

Tools are functions that an agent can invoke. The `Tool` abstraction consists of two components:

1. The input schema for the tool. This tells the LLM what parameters are needed to call the tool. Without this, it will not know what the correct inputs are. These parameters should be sensibly named and described.
2. The function to run. This is generally just a JavaScript function that is invoked.

### Considerations[​](#considerations "Direct link to Considerations")

There are two important design considerations around tools:

1. Giving the agent access to the right tools
2. Describing the tools in a way that is most helpful to the agent

Without thinking through both, you won't be able to build a working agent. If you don't give the agent access to a correct set of tools, it will never be able to accomplish the objectives you give it. If you don't describe the tools well, the agent won't know how to use them properly. LangChain provides a wide set of built-in tools, but also makes it easy to define your own (including custom descriptions).
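The executor's pseudocode above can be fleshed out into a small self-contained sketch with a mock agent and tools. Nothing here is a real LangChain API; the names (`getAction`, `runAgent`, the `AgentAction`/`AgentFinish` shapes) just mirror the concepts on this page, including graceful handling when the agent names a non-existent tool:

```typescript
// Self-contained sketch of the AgentExecutor loop with a mock agent and tools.
// Illustrative only; these are not LangChain's actual types or APIs.

type AgentAction = { type: "action"; tool: string; toolInput: string };
type AgentFinish = { type: "finish"; returnValues: { output: string } };
type Step = { action: AgentAction; observation: string };

const tools: Record<string, (input: string) => string> = {
  echo: (input) => `echo: ${input}`,
};

// Mock agent: call the echo tool once, then finish with the last observation.
function getAction(steps: Step[]): AgentAction | AgentFinish {
  if (steps.length === 0) {
    return { type: "action", tool: "echo", toolInput: "hello" };
  }
  return { type: "finish", returnValues: { output: steps[steps.length - 1].observation } };
}

function runAgent(): string {
  const steps: Step[] = [];
  let next = getAction(steps);
  while (next.type !== "finish") {
    const tool = tools[next.tool];
    // Handle the agent selecting a non-existent tool by feeding an error back
    // as the observation instead of crashing (complexity #1 above).
    const observation = tool ? tool(next.toolInput) : `"${next.tool}" is not a valid tool`;
    steps.push({ action: next, observation });
    next = getAction(steps);
  }
  return next.returnValues.output;
}
```

Real executors add retries, parsing-error handling, and tracing around this same loop.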
For a full list of built-in tools, see the [tools integrations section](/v0.1/docs/integrations/tools/).

Toolkits
--------

For many common tasks, an agent will need a set of related tools. For this, LangChain provides the concept of toolkits - groups of around 3-5 tools needed to accomplish specific objectives. For example, the GitHub toolkit has a tool for searching through GitHub issues, a tool for reading a file, a tool for commenting, etc. LangChain provides a wide set of toolkits to get started. For a full list of built-in toolkits, see the [toolkits integrations section](/v0.1/docs/integrations/toolkits/).

[ Previous Quick start ](/v0.1/docs/modules/agents/quick_start/)[ Next Agent Types ](/v0.1/docs/modules/agents/agent_types/)

Community

* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)

GitHub

* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)

More

* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)

Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/agents/agent_types/openai_tools_agent/
OpenAI tools
============

Compatibility

OpenAI tool calling is new and only available on [OpenAI's latest
models](https://platform.openai.com/docs/guides/function-calling).

Certain OpenAI models have been fine-tuned to work with tool calling. This is similar to, but distinct from, function calling, and thus requires a separate agent type. While the goal of more reliably returning valid and useful function calls is the same as for the functions agent, the ability to return multiple tools at once results in fewer roundtrips for complex questions.

Setup
-----

Install the OpenAI integration package, retrieve your key, and store it as an environment variable named `OPENAI_API_KEY`:

tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`

This demo also uses [Tavily](https://app.tavily.com), but you can also swap in another [built-in tool](/v0.1/docs/integrations/platforms/). You'll need to sign up for an API key and set it as `TAVILY_API_KEY`.
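Before wiring in real tools, it may help to see what "returning multiple tools at once" (the key difference noted in the compatibility note above) looks like in data terms. The sketch below is dependency-free and the tool name and inputs are hypothetical examples: one model response carries several actions, so independent lookups happen in a single roundtrip instead of one per call.

```typescript
// Illustrative only: a single model response containing multiple actions.
type AgentAction = { tool: string; toolInput: Record<string, unknown> };

// A hypothetical parallel response for "compare weather in SF and NYC".
const parallelActions: AgentAction[] = [
  { tool: "tavily_search_results_json", toolInput: { input: "weather in SF" } },
  { tool: "tavily_search_results_json", toolInput: { input: "weather in NYC" } },
];

// Both observations can be gathered before the next model call,
// instead of the agent requesting them one roundtrip at a time.
const observations = parallelActions.map(
  (a) => `ran ${a.tool} with ${JSON.stringify(a.toolInput)}`
);
console.log(observations.length); // 2
```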
Initialize Tools
----------------

We will first create a tool:

```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

// Define the tools the agent will have access to.
const tools = [new TavilySearchResults({ maxResults: 1 })];
```

Create Agent
------------

```typescript
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import { ChatOpenAI } from "@langchain/openai";
import type { ChatPromptTemplate } from "@langchain/core/prompts";

// Get the prompt to use - you can modify this!
// If you want to see the prompt in full, you can at:
// https://smith.langchain.com/hub/hwchase17/openai-tools-agent
const prompt = await pull<ChatPromptTemplate>("hwchase17/openai-tools-agent");

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0,
});

const agent = await createOpenAIToolsAgent({
  llm,
  tools,
  prompt,
});
```

Run Agent
---------

Now, let's run our agent!

tip

[LangSmith trace](https://smith.langchain.com/public/5c125a7e-0df5-41ec-96bf-3c13dc3a53f8/r)

```typescript
const agentExecutor = new AgentExecutor({
  agent,
  tools,
});

const result = await agentExecutor.invoke({
  input: "what is LangChain?",
});

console.log(result);

/*
  {
    input: 'what is LangChain?',
    output: 'LangChain is a platform that offers a complete set of powerful building blocks for building context-aware, reasoning applications with flexible abstractions and an AI-first toolkit. It provides tools for chatbots, Q&A over docs, summarization, copilots, workflow automation, document analysis, and custom search. LangChain is used by global corporations, startups, and tinkerers to build applications powered by large language models (LLMs). You can find more information on their website: [LangChain](https://www.langchain.com/)'
  }
*/
```

Using with chat history
-----------------------

For more details, see [this section of the agent quickstart](/v0.1/docs/modules/agents/quick_start/#adding-in-memory).

```typescript
import { AIMessage, HumanMessage } from "@langchain/core/messages";

const result2 = await agentExecutor.invoke({
  input: "what's my name?",
  chat_history: [
    new HumanMessage("hi! my name is cob"),
    new AIMessage("Hello Cob! How can I assist you today?"),
  ],
});

console.log(result2);

/*
  {
    input: "what's my name?",
    chat_history: [
      HumanMessage { content: 'hi! my name is cob', additional_kwargs: {} },
      AIMessage { content: 'Hello Cob! How can I assist you today?', additional_kwargs: {} }
    ],
    output: 'Your name is Cob!'
  }
*/
```
https://js.langchain.com/v0.1/docs/modules/agents/agent_types/openai_functions_agent/
OpenAI functions
================

Certain models (like OpenAI's gpt-3.5-turbo and gpt-4) have been fine-tuned to detect
when a function should be called and respond with the inputs that should be passed to the function. In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call those functions. The OpenAI Functions Agent is designed to work with these models.

Setup
-----

Install the OpenAI integration package, retrieve your key, and store it as an environment variable named `OPENAI_API_KEY`:

tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`

This demo also uses [Tavily](https://app.tavily.com), but you can also swap in another [built-in tool](/v0.1/docs/integrations/platforms/). You'll need to sign up for an API key and set it as `TAVILY_API_KEY`.

Initialize Tools
----------------

We will first create a tool:

```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

// Define the tools the agent will have access to.
const tools = [new TavilySearchResults({ maxResults: 1 })];
```

Create Agent
------------

```typescript
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import { ChatOpenAI } from "@langchain/openai";
import type { ChatPromptTemplate } from "@langchain/core/prompts";

// Get the prompt to use - you can modify this!
// If you want to see the prompt in full, you can at:
// https://smith.langchain.com/hub/hwchase17/openai-functions-agent
const prompt = await pull<ChatPromptTemplate>(
  "hwchase17/openai-functions-agent"
);

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0,
});

const agent = await createOpenAIFunctionsAgent({
  llm,
  tools,
  prompt,
});
```

Run Agent
---------

Now, let's run our agent!

tip

[LangSmith trace](https://smith.langchain.com/public/28e915bc-a200-48b8-81a4-4b0f1739524b/r)

```typescript
const agentExecutor = new AgentExecutor({
  agent,
  tools,
});

const result = await agentExecutor.invoke({
  input: "what is LangChain?",
});

console.log(result);

/*
  {
    input: 'what is LangChain?',
    output: 'LangChain is an open source project that was launched in October 2022 by Harrison Chase, while working at machine learning startup Robust Intelligence. It is a deployment tool designed to facilitate the transition from LCEL (LangChain Expression Language) prototypes to production-ready applications. LangChain has integrations with systems including Amazon, Google, and Microsoft Azure cloud storage, API wrappers for news, movie information, and weather, Bash for summarization, syntax and semantics checking, and execution of shell scripts, multiple web scraping subsystems and templates, few-shot learning prompt generation support, and more.

    In April 2023, LangChain incorporated as a new startup and raised over $20 million in funding at a valuation of at least $200 million from venture firm Sequoia Capital, a week after announcing a $10 million seed investment from Benchmark. The project quickly garnered popularity, with improvements from hundreds of contributors on GitHub, trending discussions on Twitter, lively activity on the project's Discord server, many YouTube tutorials, and meetups in San Francisco and London.

    For more detailed information, you can visit the [LangChain Wikipedia page](https://en.wikipedia.org/wiki/LangChain).'
  }
*/
```

Using with chat history
-----------------------

For more details, see [this section of the agent quickstart](/v0.1/docs/modules/agents/quick_start/#adding-in-memory).

```typescript
import { AIMessage, HumanMessage } from "@langchain/core/messages";

const result2 = await agentExecutor.invoke({
  input: "what's my name?",
  chat_history: [
    new HumanMessage("hi! my name is cob"),
    new AIMessage("Hello Cob! How can I assist you today?"),
  ],
});

console.log(result2);

/*
  {
    input: "what's my name?",
    chat_history: [
      HumanMessage {
        content: 'hi! my name is cob',
        name: undefined,
        additional_kwargs: {}
      },
      AIMessage {
        content: 'Hello Cob! How can I assist you today?',
        name: undefined,
        additional_kwargs: {}
      }
    ],
    output: 'Your name is Cob. How can I assist you today, Cob?'
  }
*/
```
https://js.langchain.com/v0.1/docs/modules/agents/agent_types/xml/
XML Agent
=========

Some language models (like Anthropic's Claude) are particularly good at reasoning/writing XML.
The example below shows how to use an agent that uses XML when prompting.

Setup
-----

Install the Anthropic integration package, retrieve your key, and store it as an environment variable named `ANTHROPIC_API_KEY`:

tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

* npm: `npm install @langchain/anthropic`
* Yarn: `yarn add @langchain/anthropic`
* pnpm: `pnpm add @langchain/anthropic`

This demo also uses [Tavily](https://app.tavily.com), but you can also swap in another [built-in tool](/v0.1/docs/integrations/platforms/). You'll need to sign up for an API key and set it as `TAVILY_API_KEY`.

Initialize Tools
----------------

We will first create a tool:

```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

// Define the tools the agent will have access to.
const tools = [new TavilySearchResults({ maxResults: 1 })];
```

Create Agent
------------

```typescript
import { AgentExecutor, createXmlAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import { ChatAnthropic } from "@langchain/anthropic";
import type { PromptTemplate } from "@langchain/core/prompts";

// Get the prompt to use - you can modify this!
// If you want to see the prompt in full, you can at:
// https://smith.langchain.com/hub/hwchase17/xml-agent-convo
const prompt = await pull<PromptTemplate>("hwchase17/xml-agent-convo");

const llm = new ChatAnthropic({
  temperature: 0,
});

const agent = await createXmlAgent({
  llm,
  tools,
  prompt,
});
```

Run Agent
---------

Now, let's run our agent!

tip

[LangSmith trace](https://smith.langchain.com/public/dacd12d2-f952-44fd-9b0a-7b2be88a171d/r)

```typescript
const agentExecutor = new AgentExecutor({
  agent,
  tools,
});

const result = await agentExecutor.invoke({
  input: "what is LangChain?",
});

console.log(result);

/*
  {
    input: 'what is LangChain?',
    output: 'LangChain is a platform that links large language models like GPT-3.5 and GPT-4 to external data sources to build natural language processing (NLP) applications. It provides modules and integrations to help create NLP apps more easily across various industries and use cases. Some key capabilities LangChain offers include connecting to LLMs, integrating external data sources, and enabling the development of custom NLP solutions.'
  }
*/
```

Using with chat history
-----------------------

For more details, see [this section of the agent quickstart](/v0.1/docs/modules/agents/quick_start/#adding-in-memory).

```typescript
const result2 = await agentExecutor.invoke({
  input: "what's my name?",
  // Notice that chat_history is a string, since this prompt is aimed at LLMs, not chat models
  chat_history: "Human: Hi! My name is Cob\nAI: Hello Cob! Nice to meet you",
});

console.log(result2);

/*
  {
    input: "what's my name?",
    chat_history: 'Human: Hi! My name is Cob\nAI: Hello Cob! Nice to meet you',
    output: 'Based on our previous conversation, your name is Cob.'
  }
*/
```
https://js.langchain.com/v0.1/docs/modules/agents/agent_types/chat_conversation_agent/
Conversational
==============

This walkthrough demonstrates how to use an agent optimized for conversation.
Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well.

This example shows how to construct an agent using LCEL. Constructing agents this way allows for customization beyond what previous methods like `initializeAgentExecutorWithOptions` allow.

Using LCEL
==========

tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor } from "langchain/agents";
import { Calculator } from "@langchain/community/tools/calculator";
import { pull } from "langchain/hub";
import { BufferMemory } from "langchain/memory";
import { formatLogToString } from "langchain/agents/format_scratchpad/log";
import { renderTextDescription } from "langchain/tools/render";
import { ReActSingleInputOutputParser } from "langchain/agents/react/output_parser";
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { AgentStep } from "@langchain/core/agents";
import { BaseMessage } from "@langchain/core/messages";
import { SerpAPI } from "@langchain/community/tools/serpapi";

/** Define your chat model */
const model = new ChatOpenAI({ model: "gpt-4" });
/** Bind a stop token to the model */
const modelWithStop = model.bind({
  stop: ["\nObservation"],
});
/** Define your list of tools */
const tools = [
  new SerpAPI(process.env.SERPAPI_API_KEY, {
    location: "Austin,Texas,United States",
    hl: "en",
    gl: "us",
  }),
  new Calculator(),
];
/**
 * Pull a prompt from LangChain Hub
 * @link https://smith.langchain.com/hub/hwchase17/react-chat
 */
const prompt = await pull<PromptTemplate>("hwchase17/react-chat");
/** Add input variables to prompt */
const toolNames = tools.map((tool) => tool.name);
const promptWithInputs = await prompt.partial({
  tools: renderTextDescription(tools),
  tool_names: toolNames.join(","),
});

type AgentInput = {
  input: string;
  steps: AgentStep[];
  chat_history: BaseMessage[];
};

const runnableAgent = RunnableSequence.from([
  {
    input: (i: AgentInput) => i.input,
    agent_scratchpad: (i: AgentInput) => formatLogToString(i.steps),
    chat_history: (i: AgentInput) => i.chat_history,
  },
  promptWithInputs,
  modelWithStop,
  new ReActSingleInputOutputParser({ toolNames }),
]);
/**
 * Define your memory store
 * @important The memoryKey must be "chat_history" for the chat agent to work
 * because this is the key this particular prompt expects.
 */
const memory = new BufferMemory({ memoryKey: "chat_history" });
/** Define your executor and pass in the agent, tools and memory */
const executor = AgentExecutor.fromAgentAndTools({
  agent: runnableAgent,
  tools,
  memory,
});

console.log("Loaded agent.");

const input0 = "hi, i am bob";
const result0 = await executor.invoke({ input: input0 });
console.log(`Got output ${result0.output}`);

const input1 = "whats my name?";
const result1 = await executor.invoke({ input: input1 });
console.log(`Got output ${result1.output}`);

const input2 = "whats the weather in pomfret?";
const result2 = await executor.invoke({ input: input2 });
console.log(`Got output ${result2.output}`);

/**
 * Loaded agent.
 * Got output Hello Bob, how can I assist you today?
 * Got output Your name is Bob.
 * Got output The current weather in Pomfret, CT is partly cloudy with a
 * temperature of 59 degrees Fahrenheit. The humidity is at 52% and there is
 * a wind speed of 8 mph. There is a 0% chance of precipitation.
 */
```

#### API Reference:

* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [AgentExecutor](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [Calculator](https://api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator`
* [pull](https://api.js.langchain.com/functions/langchain_hub.pull.html) from `langchain/hub`
* [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory`
* [formatLogToString](https://api.js.langchain.com/functions/langchain_agents_format_scratchpad_log.formatLogToString.html) from `langchain/agents/format_scratchpad/log`
* [renderTextDescription](https://api.js.langchain.com/functions/langchain_tools_render.renderTextDescription.html) from `langchain/tools/render`
* [ReActSingleInputOutputParser](https://api.js.langchain.com/classes/langchain_agents_react_output_parser.ReActSingleInputOutputParser.html) from `langchain/agents/react/output_parser`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [AgentStep](https://api.js.langchain.com/types/langchain_core_agents.AgentStep.html) from `@langchain/core/agents`
* [BaseMessage](https://api.js.langchain.com/classes/langchain_core_messages.BaseMessage.html) from `@langchain/core/messages`
* [SerpAPI](https://api.js.langchain.com/classes/langchain_community_tools_serpapi.SerpAPI.html) from `@langchain/community/tools/serpapi`

Using `initializeAgentExecutorWithOptions`
==========================================

The example below covers how to create a conversational agent for a chat model. It will utilize chat-specific prompts.
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { Calculator } from "@langchain/community/tools/calculator";
import { SerpAPI } from "@langchain/community/tools/serpapi";

export const run = async () => {
  process.env.LANGCHAIN_HANDLER = "langchain";
  const model = new ChatOpenAI({ temperature: 0 });
  const tools = [
    new SerpAPI(process.env.SERPAPI_API_KEY, {
      location: "Austin,Texas,United States",
      hl: "en",
      gl: "us",
    }),
    new Calculator(),
  ];

  // Passing "chat-conversational-react-description" as the agent type
  // automatically creates and uses BufferMemory with the executor.
  // If you would like to override this, you can pass in a custom
  // memory option, but the memoryKey set on it must be "chat_history".
  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "chat-conversational-react-description",
    verbose: true,
  });
  console.log("Loaded agent.");

  const input0 = "hi, i am bob";
  const result0 = await executor.invoke({ input: input0 });
  console.log(`Got output ${result0.output}`);

  const input1 = "whats my name?";
  const result1 = await executor.invoke({ input: input1 });
  console.log(`Got output ${result1.output}`);

  const input2 = "whats the weather in pomfret?";
  const result2 = await executor.invoke({ input: input2 });
  console.log(`Got output ${result2.output}`);
};
```

#### API Reference:

* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents`
* [Calculator](https://api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator`
* [SerpAPI](https://api.js.langchain.com/classes/langchain_community_tools_serpapi.SerpAPI.html) from `@langchain/community/tools/serpapi`

Example verbose output:

```
Loaded agent.
Entering new agent_executor chain...
{
  "action": "Final Answer",
  "action_input": "Hello Bob! How can I assist you today?"
}
Finished chain.
Got output Hello Bob! How can I assist you today?
Entering new agent_executor chain...
{
  "action": "Final Answer",
  "action_input": "Your name is Bob."
}
Finished chain.
Got output Your name is Bob.
Entering new agent_executor chain...
{
  "action": "search",
  "action_input": "weather in pomfret"
}
A steady rain early...then remaining cloudy with a few showers. High 48F.
Winds WNW at 10 to 15 mph. Chance of rain 80%.
{
  "action": "Final Answer",
  "action_input": "The weather in Pomfret is a steady rain early...then
remaining cloudy with a few showers. High 48F. Winds WNW at 10 to 15 mph.
Chance of rain 80%."
}
Finished chain.
Got output The weather in Pomfret is a steady rain early...then remaining
cloudy with a few showers. High 48F. Winds WNW at 10 to 15 mph. Chance of
rain 80%.
```

* * *

Community: [Discord](https://discord.gg/cU2adEyC7w) · [Twitter](https://twitter.com/LangChainAI)
GitHub: [Python](https://github.com/langchain-ai/langchain) · [JS/TS](https://github.com/langchain-ai/langchainjs)
More: [Homepage](https://langchain.com) · [Blog](https://blog.langchain.dev)

Copyright © 2024 LangChain, Inc.
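Both approaches above share one contract: prior turns are stored in memory and surfaced to the prompt under the `chat_history` key. A minimal plain-TypeScript stand-in (hypothetical `MiniBufferMemory`, not the LangChain class) shows why the key names must match:

```typescript
// Hypothetical stand-in for BufferMemory: stores turns and renders them
// under a configurable key, mirroring the memoryKey contract above.
type MemoryValues = Record<string, string>;

class MiniBufferMemory {
  private turns: { human: string; ai: string }[] = [];

  constructor(private memoryKey: string = "chat_history") {}

  // Render prior turns under `memoryKey`, the key the prompt expects.
  loadMemoryVariables(): MemoryValues {
    const history = this.turns
      .map((t) => `Human: ${t.human}\nAI: ${t.ai}`)
      .join("\n");
    return { [this.memoryKey]: history };
  }

  // Called after each exchange to append the new turn.
  saveContext(input: string, output: string): void {
    this.turns.push({ human: input, ai: output });
  }
}

const miniMemory = new MiniBufferMemory("chat_history");
miniMemory.saveContext("hi, i am bob", "Hello Bob, how can I assist you today?");
const vars = miniMemory.loadMemoryVariables();
console.log(vars.chat_history.includes("bob")); // prints true
```

If the prompt templated `{chat_history}` but the memory rendered under a different key, the history would silently come back empty, which is exactly the failure mode the `memoryKey` warning above guards against.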
https://js.langchain.com/v0.1/docs/modules/agents/agent_types/openai_assistant/
OpenAI Assistant
================

info The [OpenAI Assistant
API](https://platform.openai.com/docs/assistants/overview) is still in beta.

OpenAI released a new API for a conversational agent-like system called the Assistant API. You can interact with OpenAI Assistants using OpenAI tools or custom tools. When using exclusively OpenAI tools, you can just invoke the assistant directly and get final answers. When using custom tools, you can run the assistant and tool-execution loop using the built-in `AgentExecutor`, or write your own executor.

OpenAI assistants currently have access to two tools hosted by OpenAI: [code interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter) and [knowledge retrieval](https://platform.openai.com/docs/assistants/tools/knowledge-retrieval).

We've implemented the Assistant API in LangChain with some helpful abstractions. In this guide we'll go over those and show how to use them to create powerful assistants.

Creating an assistant
---------------------

Creating an assistant is easy. Use the `createAssistant` method and pass in a model ID, and optionally more parameters to further customize your assistant.

```typescript
import { OpenAIAssistantRunnable } from "langchain/experimental/openai_assistant";

const assistant = await OpenAIAssistantRunnable.createAssistant({
  model: "gpt-4-1106-preview",
});
const assistantResponse = await assistant.invoke({
  content: "Hello world!",
});
console.log(assistantResponse);
/*
[
  {
    id: 'msg_OBH60nkVI40V9zY2PlxMzbEI',
    thread_id: 'thread_wKpj4cu1XaYEVeJlx4yFbWx5',
    role: 'assistant',
    content: [
      {
        type: 'text',
        value: 'Hello there! What can I do for you?'
      }
    ],
    assistant_id: 'asst_RtW03Vs6laTwqSSMCQpVND7i',
    run_id: 'run_4Ve5Y9fyKMcSxHbaNHOFvdC6',
  }
]
*/
```

If you run into an `apiKey` error, you can try to pass it directly as `clientOptions`:

```typescript
const assistant = await OpenAIAssistantRunnable.createAssistant({
  clientOptions: { apiKey: OPENAI_API_KEY },
  model: "gpt-4-1106-preview",
});
```

If you have an existing assistant, you can pass it directly into the constructor:

```typescript
const assistant = new OpenAIAssistantRunnable({
  assistantId: "asst_RtW03Vs6laTwqSSMCQpVND7i",
  // asAgent: true
});
```

In this next example we'll show how you can turn your assistant into an agent.

Assistant as an agent
---------------------

```typescript
import { AgentExecutor } from "langchain/agents";
import { StructuredTool } from "langchain/tools";
import { OpenAIAssistantRunnable } from "langchain/experimental/openai_assistant";
```

The first step is to define a list of tools you want to pass to your assistant. Here we'll only define one for simplicity's sake; however, the Assistant API allows passing in a list of tools, and from there the model can use multiple tools at once. Read more about the run steps lifecycle [here](https://platform.openai.com/docs/assistants/how-it-works/runs-and-run-steps).

note Only models released >= 1106 are able to use multiple tools at once. See the full list of OpenAI models [here](https://platform.openai.com/docs/models).
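When running the tool-execution loop yourself, the core step is dispatching each requested call to the matching tool: look it up by name, parse the JSON-encoded arguments, run it, and hand the result back to the run. A plain-TypeScript sketch of that dispatch (hypothetical `ToolCall`/`Tool` shapes, not the actual SDK types):

```typescript
// Hypothetical shapes for a requested tool call and a registered tool;
// the real run objects carry more fields than this.
interface ToolCall {
  name: string;
  arguments: string; // JSON-encoded, as the Assistant API returns it
}
interface Tool {
  name: string;
  call: (args: Record<string, unknown>) => string;
}

// Dispatch one requested call against a tool registry.
function dispatch(call: ToolCall, tools: Tool[]): string {
  const tool = tools.find((t) => t.name === call.name);
  if (!tool) throw new Error(`Unknown tool: ${call.name}`);
  return tool.call(JSON.parse(call.arguments));
}

const weather: Tool = {
  name: "get_current_weather",
  call: (args) => JSON.stringify({ location: args.location, temperature: "10" }),
};
const dispatched = dispatch(
  { name: "get_current_weather", arguments: '{"location":"Tokyo"}' },
  [weather]
);
console.log(dispatched); // prints {"location":"Tokyo","temperature":"10"}
```

The built-in `AgentExecutor` does this bookkeeping for you, which is why the examples below only define the tools and hand them over.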
```typescript
import { z } from "zod";

function getCurrentWeather(location: string, _unit = "fahrenheit") {
  if (location.toLowerCase().includes("tokyo")) {
    return JSON.stringify({ location, temperature: "10", unit: "celsius" });
  } else if (location.toLowerCase().includes("san francisco")) {
    return JSON.stringify({ location, temperature: "72", unit: "fahrenheit" });
  } else {
    return JSON.stringify({ location, temperature: "22", unit: "celsius" });
  }
}

class WeatherTool extends StructuredTool {
  schema = z.object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
    unit: z.enum(["celsius", "fahrenheit"]).optional(),
  });

  name = "get_current_weather";

  description = "Get the current weather in a given location";

  async _call(input: { location: string; unit: string }) {
    const { location, unit } = input;
    return getCurrentWeather(location, unit);
  }
}

const tools = [new WeatherTool()];
```

In the above code we've defined three things:

* A function for the agent to call if the model requests it.
* A tool class which we'll pass to the `AgentExecutor`.
* The tool list we can pass to our `OpenAIAssistantRunnable` and `AgentExecutor`.

Next, we construct the `OpenAIAssistantRunnable` and pass it to the `AgentExecutor`.

```typescript
const agent = await OpenAIAssistantRunnable.createAssistant({
  model: "gpt-3.5-turbo-1106",
  instructions:
    "You are a weather bot. Use the provided functions to answer questions.",
  name: "Weather Assistant",
  tools,
  asAgent: true,
});
const agentExecutor = AgentExecutor.fromAgentAndTools({
  agent,
  tools,
});
```

Note how we're setting `asAgent` to `true`. This input parameter tells the `OpenAIAssistantRunnable` to return different, agent-acceptable outputs for actions or finished conversations. Above we're also doing something a little different from the first example by passing in input parameters for `instructions` and `name`.
These are optional parameters: the instructions are passed as extra context to the model, and the name is used to identify the assistant in the OpenAI dashboard.

Finally, to invoke our executor we call the `.invoke` method in the exact same way as we did in the first example.

```typescript
const assistantResponse = await agentExecutor.invoke({
  content: "What's the weather in Tokyo and San Francisco?",
});
console.log(assistantResponse);
/*
{
  output: 'The current weather in San Francisco is 72°F, and in Tokyo, it is 10°C.'
}
*/
```

Here we asked a question which contains two sub-questions: `What's the weather in Tokyo?` and `What's the weather in San Francisco?`. In order to answer it, the `OpenAIAssistantRunnable` returned a set of function call arguments for each sub-question, demonstrating its ability to call multiple functions at once.

Assistant tools
---------------

OpenAI currently offers two tools for the Assistant API: a [code interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter) and a [knowledge retrieval](https://platform.openai.com/docs/assistants/tools/knowledge-retrieval) tool. You can offer these tools to the assistant simply by passing them in as part of the `tools` parameter when creating the assistant.

```typescript
const assistant = await OpenAIAssistantRunnable.createAssistant({
  model: "gpt-3.5-turbo-1106",
  instructions:
    "You are a helpful assistant that provides answers to math problems.",
  name: "Math Assistant",
  tools: [{ type: "code_interpreter" }],
});
```

Since we're passing `code_interpreter` as a tool, the assistant will now be able to execute Python code, allowing for more complex tasks normal LLMs are not capable of doing well, like math.
```typescript
const assistantResponse = await assistant.invoke({
  content: "What's 10 - 4 raised to the 2.7",
});
console.log(assistantResponse);
/*
[
  {
    id: 'msg_OBH60nkVI40V9zY2PlxMzbEI',
    thread_id: 'thread_wKpj4cu1XaYEVeJlx4yFbWx5',
    role: 'assistant',
    content: [
      {
        type: 'text',
        text: {
          value: 'The result of 10 - 4 raised to the 2.7 is approximately -32.22.',
          annotations: []
        }
      }
    ],
    assistant_id: 'asst_RtW03Vs6laTwqSSMCQpVND7i',
    run_id: 'run_4Ve5Y9fyKMcSxHbaNHOFvdC6',
  }
]
*/
```

Here the assistant was able to utilize the `code_interpreter` tool to calculate the answer to our question.

Retrieve an assistant
---------------------

Retrieve an existing assistant by ID:

```typescript
import { OpenAIAssistantRunnable } from "langchain/experimental/openai_assistant";

// Assumes `assistantId` holds the ID of an existing assistant.
const assistant = new OpenAIAssistantRunnable({
  assistantId,
});
const assistantResponse = await assistant.getAssistant();
```

Modify an assistant
-------------------

Modify an existing assistant:

```typescript
import { OpenAIAssistantRunnable } from "langchain/experimental/openai_assistant";

const assistant = await OpenAIAssistantRunnable.createAssistant({
  name: "Personal Assistant",
  model: "gpt-4-1106-preview",
});
const assistantModified = await assistant.modifyAssistant({
  name: "Personal Assistant 2",
});
```

Delete an assistant
-------------------

Delete an assistant:
```typescript
import { OpenAIAssistantRunnable } from "langchain/experimental/openai_assistant";

const assistant = await OpenAIAssistantRunnable.createAssistant({
  name: "Personal Assistant",
  model: "gpt-4-1106-preview",
});
const deleteStatus = await assistant.deleteAssistant();
```

OpenAI Files
============

Files are used to upload documents that can be used with features like Assistants and Fine-tuning. We've implemented the File API in LangChain with create and delete. You can see [the official API reference here](https://platform.openai.com/docs/api-reference/files/object).

The `File` object represents a document that has been uploaded to OpenAI:

```json
{
  "id": "file-abc123",
  "object": "file",
  "bytes": 120000,
  "created_at": 1677610602,
  "filename": "salesOverview.pdf",
  "purpose": "assistants"
}
```

Create a File
-------------

Upload a file that can be used across various endpoints. The total size of all files uploaded by one organization can be up to **100 GB**, and each individual file can be at most **512 MB**. See the Assistants Tools guide above to learn more about the types of files supported. The Fine-tuning API only supports `.jsonl` files.
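Those size limits can be checked locally before attempting an upload. A small stdlib sketch (hypothetical `isUploadable` helper; the 512 MB figure is the per-file limit stated above):

```typescript
import { statSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Maximum size of one individual file per the limits above: 512 MB.
const MAX_FILE_BYTES = 512 * 1024 * 1024;

// Hypothetical pre-flight check before calling createFile.
function isUploadable(filePath: string): boolean {
  return statSync(filePath).size <= MAX_FILE_BYTES;
}

// Demo against a small temporary .jsonl file.
const demoPath = join(tmpdir(), "upload-check.jsonl");
writeFileSync(demoPath, '{"prompt":"hi","completion":"hello"}\n');
console.log(isUploadable(demoPath)); // prints true
```

Failing fast locally avoids a round trip to the API for files that would be rejected anyway.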
```typescript
import fs from "node:fs";
import path from "node:path";
import { OpenAIFiles } from "langchain/experimental/openai_files";

const openAIFiles = new OpenAIFiles();
const file = await openAIFiles.createFile({
  file: fs.createReadStream(path.resolve(__dirname, `./test.txt`)),
  purpose: "assistants",
});
/*
Output:
{
  "id": "file-BK7bzQj3FfZFXr7DbL6xJwfo",
  "object": "file",
  "bytes": 120000,
  "created_at": 1677610602,
  "filename": "salesOverview.pdf",
  "purpose": "assistants"
}
*/
```

If you run into an `apiKey` error, you can try to pass it directly as `clientOptions`:

```typescript
const openAIFiles = new OpenAIFiles({ clientOptions: { apiKey: OPENAI_API_KEY } });
```

Use File in AI Assistant
------------------------

```typescript
import fs from "node:fs";
import path from "node:path";
import { OpenAIAssistantRunnable } from "langchain/experimental/openai_assistant";
import { OpenAIFiles } from "langchain/experimental/openai_files";

const openAIFiles = new OpenAIFiles();
const file = await openAIFiles.createFile({
  file: fs.createReadStream(path.resolve(__dirname, `./test.txt`)),
  purpose: "assistants",
});

// Assumes `tools` is the tool list defined earlier.
const agent = await OpenAIAssistantRunnable.createAssistant({
  model: "gpt-3.5-turbo-1106",
  instructions:
    "You are a weather bot. Use the provided functions to answer questions.",
  name: "Weather Assistant",
  tools,
  asAgent: true,
  fileIds: [file.id],
});
```

Delete a File
-------------

Delete a file:

```typescript
import { OpenAIFiles } from "langchain/experimental/openai_files";

const openAIFiles = new OpenAIFiles();
// Assumes `file` is the File object returned by createFile above.
const result = await openAIFiles.deleteFile({ fileId: file.id });
/*
Output:
{
  "id": "file-abc123",
  "object": "file",
  "deleted": true
}
*/
```

List all Files
--------------

Returns a list of files that belong to the user's organization.

* `purpose?: string` — only return files with the given purpose.
```typescript
import { OpenAIFiles } from "langchain/experimental/openai_files";

const openAIFiles = new OpenAIFiles();
const result = await openAIFiles.listFiles({ purpose: "assistants" });
/*
Output:
{
  "data": [
    {
      "id": "file-abc123",
      "object": "file",
      "bytes": 175,
      "created_at": 1613677385,
      "filename": "salesOverview.pdf",
      "purpose": "assistants"
    },
    {
      "id": "file-abc123",
      "object": "file",
      "bytes": 140,
      "created_at": 1613779121,
      "filename": "puppy.jsonl",
      "purpose": "fine-tune"
    }
  ],
  "object": "list"
}
*/
```

Retrieve File
-------------

Returns information about a specific file.

```typescript
import { OpenAIFiles } from "langchain/experimental/openai_files";

const openAIFiles = new OpenAIFiles();
const result = await openAIFiles.retrieveFile({ fileId: file.id });
/*
Output:
{
  "id": "file-abc123",
  "object": "file",
  "bytes": 120000,
  "created_at": 1677610602,
  "filename": "mydata.jsonl",
  "purpose": "fine-tune"
}
*/
```

Retrieve File Content
---------------------

Returns the contents of the specified file. Note that you can't retrieve the contents of a file that was uploaded with `"purpose": "assistants"`.

```typescript
import { OpenAIFiles } from "langchain/experimental/openai_files";

const openAIFiles = new OpenAIFiles();
const result = await openAIFiles.retrieveFileContent({ fileId: file.id });
// Returns the file content.
```
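The responses above all share the `File` object shape. For illustration, here is a plain-TypeScript type mirroring that JSON, plus a client-side equivalent of the `purpose` filter (a sketch, not the SDK's own types):

```typescript
// Shape of a File object, mirroring the JSON responses above.
interface OpenAIFile {
  id: string;
  object: "file";
  bytes: number;
  created_at: number; // Unix timestamp, seconds
  filename: string;
  purpose: string; // e.g. "assistants" or "fine-tune"
}

// Client-side equivalent of the `purpose` query parameter on listFiles.
function filterByPurpose(files: OpenAIFile[], purpose: string): OpenAIFile[] {
  return files.filter((f) => f.purpose === purpose);
}

const sampleFiles: OpenAIFile[] = [
  { id: "file-a", object: "file", bytes: 175, created_at: 1613677385, filename: "salesOverview.pdf", purpose: "assistants" },
  { id: "file-b", object: "file", bytes: 140, created_at: 1613779121, filename: "puppy.jsonl", purpose: "fine-tune" },
];
console.log(filterByPurpose(sampleFiles, "assistants").length); // prints 1
```

Typing the response this way lets the compiler catch misspelled fields like `createdAt` before they become runtime `undefined`s.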
https://js.langchain.com/v0.1/docs/modules/agents/agent_types/plan_and_execute/
Plan and execute
================

Compatibility: this agent currently only supports chat models.
Plan-and-execute agents accomplish an objective by first planning what to do, then executing the subtasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).

The planning is almost always done by an LLM. The execution is usually done by a separate agent equipped with tools.

This agent uses a two-step process:

1. First, the agent uses an LLM to create a plan to answer the query with clear steps.
2. Once it has a plan, it uses an embedded traditional action agent to solve each step.

The idea is that the planning step keeps the LLM more "on track" by breaking up a larger task into simpler subtasks. However, this method requires more individual LLM queries and has higher latency compared to action agents.

With `PlanAndExecuteAgentExecutor`
==================================

info This is an experimental chain and is not recommended for production use yet.

tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`

```typescript
import { Calculator } from "@langchain/community/tools/calculator";
import { ChatOpenAI } from "@langchain/openai";
import { PlanAndExecuteAgentExecutor } from "langchain/experimental/plan_and_execute";
import { SerpAPI } from "@langchain/community/tools/serpapi";

const tools = [new Calculator(), new SerpAPI()];
const model = new ChatOpenAI({
  temperature: 0,
  model: "gpt-3.5-turbo",
  verbose: true,
});
const executor = await PlanAndExecuteAgentExecutor.fromLLMAndTools({
  llm: model,
  tools,
});

const result = await executor.invoke({
  input: `Who is the current president of the United States? What is their current age raised to the second power?`,
});
console.log({ result });
```

#### API Reference:

* [Calculator](https://api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [PlanAndExecuteAgentExecutor](https://api.js.langchain.com/classes/langchain_experimental_plan_and_execute.PlanAndExecuteAgentExecutor.html) from `langchain/experimental/plan_and_execute`
* [SerpAPI](https://api.js.langchain.com/classes/langchain_community_tools_serpapi.SerpAPI.html) from `@langchain/community/tools/serpapi`
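The plan-then-execute flow above can be sketched without any LLM at all: a planner emits ordered steps, and an executor resolves them one at a time, carrying earlier results forward. Hypothetical types throughout; the real executor plans with a chat model and solves steps with a tool-using agent:

```typescript
// Hypothetical planner/executor pair illustrating the two-step process.
type Step = { description: string; run: (prior: string[]) => string };

function planAndExecute(steps: Step[]): string[] {
  const results: string[] = [];
  for (const step of steps) {
    // Each sub-task sees the results of earlier steps, just as the
    // embedded action agent receives the previous step outputs.
    results.push(step.run(results));
  }
  return results;
}

// A fixed "plan" for the example query above, with canned sub-task logic.
const plan: Step[] = [
  { description: "Find the president's current age", run: () => "61" },
  {
    description: "Raise that age to the second power",
    run: (prior) => String(Number(prior[0]) ** 2),
  },
];
console.log(planAndExecute(plan)); // prints [ '61', '3721' ]
```

The decomposition is what keeps each sub-task simple; the cost, as noted above, is one model round trip per step instead of one per query.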
https://js.langchain.com/v0.1/docs/modules/agents/agent_types/react/
ReAct
=====

This walkthrough showcases using an agent to implement the [ReAct](https://react-lm.github.io/) logic.
Setup
-----

Install the OpenAI integration package, retrieve your key, and store it as an environment variable named `OPENAI_API_KEY`:

tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

* npm
* Yarn
* pnpm

npm install @langchain/openai

yarn add @langchain/openai

pnpm add @langchain/openai

This demo also uses [Tavily](https://app.tavily.com), but you can also swap in another [built-in tool](/v0.1/docs/integrations/platforms/). You'll need to sign up for an API key and set it as `TAVILY_API_KEY`.

Initialize Tools
----------------

We will first create a tool:

```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

// Define the tools the agent will have access to.
const tools = [new TavilySearchResults({ maxResults: 1 })];
```

Create Agent
------------

```typescript
import { AgentExecutor, createReactAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import { OpenAI } from "@langchain/openai";
import type { PromptTemplate } from "@langchain/core/prompts";

// Get the prompt to use - you can modify this!
// If you want to see the prompt in full, you can at:
// https://smith.langchain.com/hub/hwchase17/react
const prompt = await pull<PromptTemplate>("hwchase17/react");

const llm = new OpenAI({
  model: "gpt-3.5-turbo-instruct",
  temperature: 0,
});

const agent = await createReactAgent({
  llm,
  tools,
  prompt,
});
```

Run Agent
---------

Now, let's run our agent!
tip

[LangSmith trace](https://smith.langchain.com/public/44989da5-8742-429f-9ab1-2377d773b0d2/r)

```typescript
const agentExecutor = new AgentExecutor({
  agent,
  tools,
});

const result = await agentExecutor.invoke({
  input: "what is LangChain?",
});

console.log(result);
/*
  {
    input: 'what is LangChain?',
    output: 'LangChain is a platform for building applications using LLMs (Language Model Microservices) through composability. It can be used for tasks such as retrieval augmented generation, analyzing structured data, and creating chatbots.'
  }
*/
```

Using with chat history
-----------------------

For more details, see [this section of the agent quickstart](/v0.1/docs/modules/agents/quick_start/#adding-in-memory).

```typescript
// Get the prompt to use - you can modify this!
// If you want to see the prompt in full, you can at:
// https://smith.langchain.com/hub/hwchase17/react-chat
const promptWithChat = await pull<PromptTemplate>("hwchase17/react-chat");

const agentWithChat = await createReactAgent({
  llm,
  tools,
  prompt: promptWithChat,
});

const agentExecutorWithChat = new AgentExecutor({
  agent: agentWithChat,
  tools,
});

const result2 = await agentExecutorWithChat.invoke({
  input: "what's my name?",
  // Notice that chat_history is a string, since this prompt is aimed at LLMs, not chat models
  chat_history: "Human: Hi! My name is Cob\nAI: Hello Cob! Nice to meet you",
});

console.log(result2);
/*
  {
    input: "what's my name?",
    chat_history: 'Human: Hi! My name is Cob\nAI: Hello Cob! Nice to meet you',
    output: 'Your name is Cob.'
  }
*/
```
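The ReAct logic alternates model reasoning with tool use: the model emits a thought and an action, the runtime executes the action and appends the observation, and the loop repeats until the model emits a final answer. A toy sketch of that loop, with a scripted stand-in for the model and a fake `search` tool — all names here are illustrative, not LangChain APIs:

```typescript
// Toy ReAct loop: parse Action lines from the "model", run the tool,
// feed the Observation back, stop on a Final Answer.
const tools: Record<string, (input: string) => string> = {
  search: (q) => `LangChain is a framework for building LLM apps. (query: ${q})`,
};

// Scripted model: first turn calls the tool, second turn answers.
function fakeModel(transcript: string): string {
  if (!transcript.includes("Observation:")) {
    return "Thought: I should look this up.\nAction: search[what is LangChain?]";
  }
  return "Final Answer: LangChain is a framework for building LLM apps.";
}

function reactLoop(question: string, maxTurns = 5): string {
  let transcript = `Question: ${question}`;
  for (let i = 0; i < maxTurns; i++) {
    const output = fakeModel(transcript);
    const final = output.match(/Final Answer: (.*)/s);
    if (final) return final[1].trim();
    const action = output.match(/Action: (\w+)\[(.*)\]/);
    if (!action) throw new Error("model produced neither action nor answer");
    const observation = tools[action[1]](action[2]);
    transcript += `\n${output}\nObservation: ${observation}`;
  }
  throw new Error("max turns exceeded");
}

console.log(reactLoop("what is LangChain?"));
```

The `maxTurns` cap mirrors the executor's iteration limit: without it, a model that never emits a final answer would loop forever.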
https://js.langchain.com/v0.1/docs/modules/agents/agent_types/structured_chat/
Structured chat
===============

info

If you are using a functions-capable model like ChatOpenAI, we
currently recommend that you use the [OpenAI Functions agent](/v0.1/docs/modules/agents/agent_types/openai_functions_agent/) for more complex tool calling.

The structured chat agent is capable of using multi-input tools. Older agents are configured to specify the action input as a single string, but this agent can use the provided tools' `schema` to populate the action input.

Setup
-----

Install the OpenAI integration package, retrieve your key, and store it as an environment variable named `OPENAI_API_KEY`:

tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

* npm
* Yarn
* pnpm

npm install @langchain/openai

yarn add @langchain/openai

pnpm add @langchain/openai

This demo also uses [Tavily](https://app.tavily.com), but you can also swap in another [built-in tool](/v0.1/docs/integrations/platforms/). You'll need to sign up for an API key and set it as `TAVILY_API_KEY`.

Initialize Tools
----------------

We will first create a tool:

```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

// Define the tools the agent will have access to.
const tools = [new TavilySearchResults({ maxResults: 1 })];
```

Create Agent
------------

```typescript
import { AgentExecutor, createStructuredChatAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import { ChatOpenAI } from "@langchain/openai";
import type { ChatPromptTemplate } from "@langchain/core/prompts";

// Get the prompt to use - you can modify this!
// If you want to see the prompt in full, you can at:
// https://smith.langchain.com/hub/hwchase17/structured-chat-agent
const prompt = await pull<ChatPromptTemplate>(
  "hwchase17/structured-chat-agent"
);

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0,
});

const agent = await createStructuredChatAgent({
  llm,
  tools,
  prompt,
});
```

Run Agent
---------

Now, let's run our agent!

tip

[LangSmith trace](https://smith.langchain.com/public/fe1b0993-4905-4e21-91d2-ff5fc16fdebd/r)

```typescript
const agentExecutor = new AgentExecutor({
  agent,
  tools,
});

const result = await agentExecutor.invoke({
  input: "what is LangChain?",
});

console.log(result);
/*
  {
    input: 'what is LangChain?',
    output: 'LangChain is a project on GitHub that focuses on building applications with LLMs (Large Language Models) through composability. It offers resources, documentation, and encourages contributions to the project. LangChain can be used for tasks such as retrieval augmented generation, analyzing structured data, and creating chatbots.'
  }
*/
```

Using with chat history
-----------------------

For more details, see [this section of the agent quickstart](/v0.1/docs/modules/agents/quick_start/#adding-in-memory).

```typescript
import { AIMessage, HumanMessage } from "@langchain/core/messages";

const result2 = await agentExecutor.invoke({
  input: "what's my name?",
  chat_history: [
    new HumanMessage("hi! my name is cob"),
    new AIMessage("Hello Cob! How can I assist you today?"),
  ],
});

console.log(result2);
/*
  {
    input: "what's my name?",
    chat_history: [
      HumanMessage { content: 'hi! my name is cob', additional_kwargs: {} },
      AIMessage { content: 'Hello Cob! How can I assist you today?', additional_kwargs: {} }
    ],
    output: 'Your name is Cob.'
  }
*/
```
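The key difference from single-string agents is that the structured chat agent fills a multi-field action input from the tool's schema. A minimal sketch of that idea in plain TypeScript, with a hand-rolled schema check standing in for the real schema validation (all names here are illustrative):

```typescript
// A structured-chat-style agent emits an action whose input must match
// the tool's schema. Here the schema is a simple field->type map.
interface ToolSchema {
  [field: string]: "string" | "number";
}

const searchSchema: ToolSchema = { query: "string", maxResults: "number" };

function validateActionInput(
  schema: ToolSchema,
  input: Record<string, unknown>
): boolean {
  // Every declared field must be present with the declared primitive type.
  return Object.entries(schema).every(
    ([field, type]) => typeof input[field] === type
  );
}

// A single-string agent could only pass one opaque argument; a structured
// agent can populate every field the schema declares:
const action = {
  tool: "search",
  toolInput: { query: "what is LangChain?", maxResults: 1 },
};

console.log(validateActionInput(searchSchema, action.toolInput)); // true
```

In the real agent this role is played by the tool's declared `schema`; the point of the sketch is only that the action input is a validated object, not a string.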
https://js.langchain.com/v0.1/docs/modules/agents/agent_types/xml_legacy/
XML Agent
=========

caution

This is a legacy chain; it is not recommended for use.
Instead, see docs for the [LCEL version](/v0.1/docs/modules/agents/agent_types/xml/).

Some language models (like Anthropic's Claude) are particularly good at reasoning and writing XML. The example below shows how to use an agent that uses XML when prompting.

tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

* npm
* Yarn
* pnpm

npm install @langchain/anthropic

yarn add @langchain/anthropic

pnpm add @langchain/anthropic

```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { AgentExecutor, createXmlAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import type { PromptTemplate } from "@langchain/core/prompts";
import { ChatAnthropic } from "@langchain/anthropic";

// Define the tools the agent will have access to.
const tools = [new TavilySearchResults({ maxResults: 1 })];

// Get the prompt to use - you can modify this!
// If you want to see the prompt in full, you can at:
// https://smith.langchain.com/hub/hwchase17/xml-agent-convo
const prompt = await pull<PromptTemplate>("hwchase17/xml-agent-convo");

const llm = new ChatAnthropic({
  model: "claude-3-opus-20240229",
  temperature: 0,
});

const agent = await createXmlAgent({
  llm,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
});

const result = await agentExecutor.invoke({
  input: "what is LangChain?",
});

console.log(result);

const result2 = await agentExecutor.invoke({
  input: "what's my name?",
  // Notice that chat_history is a string, since this prompt is aimed at LLMs, not chat models
  chat_history: "Human: Hi! My name is Cob\nAI: Hello Cob! Nice to meet you",
});

console.log(result2);
```

#### API Reference:

* [TavilySearchResults](https://api.js.langchain.com/classes/langchain_community_tools_tavily_search.TavilySearchResults.html) from `@langchain/community/tools/tavily_search`
* [AgentExecutor](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createXmlAgent](https://api.js.langchain.com/functions/langchain_agents.createXmlAgent.html) from `langchain/agents`
* [pull](https://api.js.langchain.com/functions/langchain_hub.pull.html) from `langchain/hub`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [ChatAnthropic](https://api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
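For intuition, the XML convention an agent like this relies on looks roughly as follows: the model wraps tool calls in `<tool>`/`<tool_input>` tags and its final reply in `<final_answer>` tags, and the runtime parses them back out. The parser below is a simplified sketch with assumed tag names, not the LangChain implementation:

```typescript
// Parse a model turn in the XML tool-calling convention: either an action
// (<tool> + <tool_input>) or a finish (<final_answer>).
type XmlAgentStep =
  | { type: "action"; tool: string; toolInput: string }
  | { type: "finish"; output: string };

function parseXmlOutput(text: string): XmlAgentStep {
  const final = text.match(/<final_answer>([\s\S]*?)<\/final_answer>/);
  if (final) return { type: "finish", output: final[1].trim() };
  const tool = text.match(/<tool>([\s\S]*?)<\/tool>/);
  const input = text.match(/<tool_input>([\s\S]*?)<\/tool_input>/);
  if (tool && input) {
    return { type: "action", tool: tool[1].trim(), toolInput: input[1].trim() };
  }
  throw new Error("model output contained no recognizable XML tags");
}

console.log(
  parseXmlOutput("<tool>tavily_search</tool><tool_input>LangChain</tool_input>")
);
console.log(parseXmlOutput("<final_answer>LangChain is a framework.</final_answer>"));
```

The format plays to models that write XML reliably: each tag pair is unambiguous to parse even when the surrounding text contains free-form reasoning.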
https://js.langchain.com/v0.2/docs/integrations/toolkits/connery
Connery Toolkit
===============

Using this toolkit, you can integrate Connery Actions into your LangChain agent.

note

If you want to use only one particular Connery Action in your agent, check out the [Connery Action Tool](/v0.2/docs/integrations/tools/connery) documentation.

What is Connery?
----------------

Connery is an open-source plugin infrastructure for AI. With Connery, you can easily create a custom plugin with a set of actions and seamlessly integrate them into your LangChain agent. Connery will take care of critical aspects such as runtime, authorization, secret management, access management, audit logs, and other vital features.
Furthermore, Connery, supported by our community, provides a diverse collection of ready-to-use open-source plugins for added convenience.

Learn more about Connery:

* GitHub: [https://github.com/connery-io/connery](https://github.com/connery-io/connery)
* Documentation: [https://docs.connery.io](https://docs.connery.io)

Prerequisites
-------------

To use Connery Actions in your LangChain agent, you need to do some preparation:

1. Set up the Connery runner using the [Quickstart](https://docs.connery.io/docs/runner/quick-start/) guide.
2. Install all the plugins with the actions you want to use in your agent.
3. Set the `CONNERY_RUNNER_URL` and `CONNERY_RUNNER_API_KEY` environment variables so the toolkit can communicate with the Connery Runner.

Example of using Connery Toolkit
--------------------------------

### Setup

To use the Connery Toolkit you need to install the following official peer dependencies:

* npm
* Yarn
* pnpm

npm install @langchain/openai @langchain/community

yarn add @langchain/openai @langchain/community

pnpm add @langchain/openai @langchain/community

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

### Usage

In the example below, we create an agent that uses two Connery Actions to summarize a public webpage and send the summary by email:

1. **Summarize public webpage** action from the [Summarization](https://github.com/connery-io/summarization-plugin) plugin.
2. **Send email** action from the [Gmail](https://github.com/connery-io/gmail) plugin.
info

You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/5485cb37-b73d-458f-8162-43639f2b49e1/r).

```typescript
import { ConneryService } from "@langchain/community/tools/connery";
import { ConneryToolkit } from "@langchain/community/agents/toolkits/connery";
import { ChatOpenAI } from "@langchain/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";

// Specify your Connery Runner credentials.
process.env.CONNERY_RUNNER_URL = "";
process.env.CONNERY_RUNNER_API_KEY = "";

// Specify OpenAI API key.
process.env.OPENAI_API_KEY = "";

// Specify your email address to receive the emails from the examples below.
const recipientEmail = "test@example.com";

// Create a Connery Toolkit with all the available actions from the Connery Runner.
const conneryService = new ConneryService();
const conneryToolkit = await ConneryToolkit.createInstance(conneryService);

// Use an OpenAI Functions agent to execute the prompt using actions from the Connery Toolkit.
const llm = new ChatOpenAI({ temperature: 0 });
const agent = await initializeAgentExecutorWithOptions(
  conneryToolkit.tools,
  llm,
  {
    agentType: "openai-functions",
    verbose: true,
  }
);

const result = await agent.invoke({
  input:
    `Make a short summary of the webpage http://www.paulgraham.com/vb.html in three sentences ` +
    `and send it to ${recipientEmail}. Include the link to the webpage into the body of the email.`,
});

console.log(result.output);
```

#### API Reference:

* [ConneryService](https://v02.api.js.langchain.com/classes/langchain_community_tools_connery.ConneryService.html) from `@langchain/community/tools/connery`
* [ConneryToolkit](https://v02.api.js.langchain.com/classes/langchain_community_agents_toolkits_connery.ConneryToolkit.html) from `@langchain/community/agents/toolkits/connery`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [initializeAgentExecutorWithOptions](https://v02.api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents`

note

Connery Action is a structured tool, so you can only use it in agents supporting structured tools.
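Since the toolkit depends on the `CONNERY_RUNNER_URL` and `CONNERY_RUNNER_API_KEY` environment variables, it can be worth failing fast with a clear message when either is unset. A small sketch — the `requireEnv` helper is hypothetical, not part of the toolkit, which reads these variables itself:

```typescript
// Fail-fast lookup of a required configuration variable. Passing the env
// map explicitly keeps the helper easy to test.
function requireEnv(
  env: Record<string, string | undefined>,
  name: string
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Illustrative values; in an application you would pass process.env.
const fakeEnv = {
  CONNERY_RUNNER_URL: "https://runner.example.com",
  CONNERY_RUNNER_API_KEY: "test-key",
};

const runnerUrl = requireEnv(fakeEnv, "CONNERY_RUNNER_URL");
console.log(runnerUrl);
```

Checking configuration before constructing the agent surfaces a missing key at startup rather than at the first tool call.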
https://js.langchain.com/v0.1/docs/modules/agents/tools/
Tools
=====

Tools are interfaces that an agent can use to interact with the world. They combine a few things:

1. The name of the tool
2. A description of what the tool does
3. A schema of the tool's inputs
4. The function to call
5. Whether the result of the tool should be returned directly to the user

Having all of this information is useful because it can be used to build action-taking systems! The name, description, and schema can be used to prompt the LLM so that it knows how to specify what action to take, and the function to call is then equivalent to taking that action.

The simpler the input to a tool is, the easier it is for an LLM to use it.
Many agents will only work with tools that have a single string input. For a list of agent types and which ones work with more complicated inputs, please see [this documentation](/v0.1/docs/modules/agents/agent_types/).

Importantly, the name, description, and schema (if used) are all used in the prompt. Therefore, it is really important that they are clear and describe exactly how the tool should be used. You may need to change the default name, description, or schema if the LLM is not understanding how to use the tool.

Default Tools[​](#default-tools "Direct link to Default Tools")
---------------------------------------------------------------

Let's take a look at how to work with tools. To do this, let's look at a built-in tool that takes a simple string input:

```typescript
import { WikipediaQueryRun } from "@langchain/community/tools/wikipedia_query_run";

const tool = new WikipediaQueryRun({
  topKResults: 1,
  maxDocContentLength: 100,
});

console.log(tool.name);
/*
  wikipedia-api
*/

console.log(tool.description);
/*
  A tool for interacting with and fetching data from the Wikipedia API.
*/

const res = await tool.invoke("Langchain");

console.log(res);
/*
  Page: LangChain
  Summary: LangChain is a framework designed to simplify the creation of applications
*/
```

You can also define more complex `StructuredTool`s that require object inputs with several different parameters.
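To make the shape of such a tool concrete, here is a minimal sketch in plain TypeScript of a "structured" tool whose input is an object with several parameters. This is an illustration of the concept only — it is NOT LangChain's actual `StructuredTool` interface (in LangChain you would typically subclass `StructuredTool` or use `DynamicStructuredTool` with a schema); the interface and tool names here are hypothetical:

```typescript
// Minimal sketch of a "structured" tool whose input is an object with
// several parameters. Illustrative only — not LangChain's StructuredTool.
interface StructuredToolSketch<Input> {
  name: string;
  description: string;
  call(input: Input): Promise<string>;
}

const wordLengthTool: StructuredToolSketch<{
  word: string;
  countSpaces?: boolean;
}> = {
  name: "word_length",
  description: "Returns the number of characters in a word or phrase.",
  async call({ word, countSpaces = false }) {
    // Strip whitespace unless the caller explicitly asks to count it.
    const target = countSpaces ? word : word.replace(/\s/g, "");
    return String(target.length);
  },
};
```

Because the input is an object rather than a bare string, the LLM has to emit arguments matching the schema — which is why only agents that support structured tools can use tools like this.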
https://js.langchain.com/v0.1/docs/modules/agents/how_to/custom_agent/
Custom agent
============

This notebook goes through how to create your own custom agent. In this example, we will use OpenAI Function Calling to create the agent. **This is generally the most reliable way to create agents.**

We will first create it WITHOUT memory, then show how to add memory in. Memory is needed to enable conversation.

Load the LLM[​](#load-the-llm "Direct link to Load the LLM")
------------------------------------------------------------

First, let's load the language model we're going to use to control the agent.

```typescript
import { ChatOpenAI } from "@langchain/openai";

/**
 * Define your chat model to use.
 */
const model = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
});
```

Define Tools[​](#define-tools "Direct link to Define Tools")
------------------------------------------------------------

Next, let's define some tools to use. Let's write a really simple JavaScript function to calculate the length of a word that is passed in.

```typescript
import { DynamicTool } from "@langchain/core/tools";

const customTool = new DynamicTool({
  name: "get_word_length",
  description: "Returns the length of a word.",
  func: async (input: string) => input.length.toString(),
});

/** Define your list of tools. */
const tools = [customTool];
```

Create Prompt[​](#create-prompt "Direct link to Create Prompt")
---------------------------------------------------------------

Now let's create the prompt. Because OpenAI Function Calling is fine-tuned for tool usage, we hardly need any instructions on how to reason or how to format output. We will just have two input variables: `input` and `agent_scratchpad`. `input` should be a string containing the user objective. `agent_scratchpad` should be a sequence of messages that contains the previous agent tool invocations and the corresponding tool outputs.
```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a very powerful assistant, but don't know current events"],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);
```

Bind tools to LLM[​](#bind-tools-to-llm "Direct link to Bind tools to LLM")
---------------------------------------------------------------------------

How does the agent know what tools it can use? In this case we're relying on OpenAI function calling LLMs, which take functions as a separate argument and have been specifically trained to know when to invoke those functions.

To pass our tools to the agent, we just need to format them to the OpenAI function format and pass them to our model. (By `bind`-ing the functions, we're making sure that they're passed in each time the model is invoked.)

```typescript
import { convertToOpenAIFunction } from "@langchain/core/utils/function_calling";

const modelWithFunctions = model.bind({
  functions: tools.map((tool) => convertToOpenAIFunction(tool)),
});
```

Create the Agent[​](#create-the-agent "Direct link to Create the Agent")
------------------------------------------------------------------------

Putting those pieces together, we can now create the agent. We will import two last utility functions: a component for formatting intermediate steps to input messages that can be sent to the model, and an output parser for converting the output message into an agent action/agent finish.
```typescript
import { RunnableSequence } from "@langchain/core/runnables";
import { AgentExecutor, type AgentStep } from "langchain/agents";
import { formatToOpenAIFunctionMessages } from "langchain/agents/format_scratchpad";
import { OpenAIFunctionsAgentOutputParser } from "langchain/agents/openai/output_parser";

const runnableAgent = RunnableSequence.from([
  {
    input: (i: { input: string; steps: AgentStep[] }) => i.input,
    agent_scratchpad: (i: { input: string; steps: AgentStep[] }) =>
      formatToOpenAIFunctionMessages(i.steps),
  },
  prompt,
  modelWithFunctions,
  new OpenAIFunctionsAgentOutputParser(),
]);

const executor = AgentExecutor.fromAgentAndTools({
  agent: runnableAgent,
  tools,
});
```

And now, let's call the executor:

tip [LangSmith trace](https://smith.langchain.com/public/6288fcd3-7e4e-488e-b40c-f83e052ad6ce/r)

```typescript
const input = "How many letters in the word educa?";
console.log(`Calling agent executor with query: ${input}`);

const result = await executor.invoke({
  input,
});
console.log(result);
/*
  Calling agent executor with query: How many letters in the word educa?
  {
    input: 'How many letters in the word educa?',
    output: 'There are 5 letters in the word "educa".'
  }
*/
```

Adding memory[​](#adding-memory "Direct link to Adding memory")
---------------------------------------------------------------

This is great - we have an agent! However, this agent is stateless - it doesn't remember anything about previous interactions. This means you can't easily ask follow-up questions. Let's fix that by adding in memory.

In order to do this, we need to do two things:

1. Add a place for memory variables to go in the prompt
2. Keep track of the chat history

First, let's add a place for memory in the prompt. We do this by adding a placeholder for messages with the key `"chat_history"`. Notice that we put this ABOVE the new user input (to follow the conversation flow).
```typescript
const MEMORY_KEY = "chat_history";
const memoryPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a very powerful assistant, but bad at calculating lengths of words.",
  ],
  new MessagesPlaceholder(MEMORY_KEY),
  ["user", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);
```

We can then set up a list to track the chat history:

```typescript
import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages";

const chatHistory: BaseMessage[] = [];
```

We can then put it all together in an agent:

```typescript
const agentWithMemory = RunnableSequence.from([
  {
    input: (i) => i.input,
    agent_scratchpad: (i) => formatToOpenAIFunctionMessages(i.steps),
    chat_history: (i) => i.chat_history,
  },
  memoryPrompt,
  modelWithFunctions,
  new OpenAIFunctionsAgentOutputParser(),
]);

/** Pass the runnable along with the tools to create the Agent Executor */
const executorWithMemory = AgentExecutor.fromAgentAndTools({
  agent: agentWithMemory,
  tools,
});
```

When running, we now need to track the inputs and outputs as chat history.

tip [LangSmith trace for the first invocation](https://smith.langchain.com/public/431f3955-693e-4ea5-ae07-737ec23e7e13/r)

[LangSmith trace for the second invocation](https://smith.langchain.com/public/2618772e-3e13-4dde-b86f-973cffb2a3be/r)

```typescript
const input1 = "how many letters in the word educa?";
const result1 = await executorWithMemory.invoke({
  input: input1,
  chat_history: chatHistory,
});
console.log(result1);
/*
  {
    input: 'how many letters in the word educa?',
    chat_history: [],
    output: 'There are 5 letters in the word "educa".'
  }
*/

chatHistory.push(new HumanMessage(input1));
chatHistory.push(new AIMessage(result1.output));

const result2 = await executorWithMemory.invoke({
  input: "is that a real English word?",
  chat_history: chatHistory,
});
console.log(result2);
/*
  {
    input: 'is that a real English word?',
    chat_history: [
      HumanMessage {
        content: 'how many letters in the word educa?',
        additional_kwargs: {}
      },
      AIMessage {
        content: 'There are 5 letters in the word "educa".',
        additional_kwargs: {}
      }
    ],
    output: 'The word "educa" is not a real English word. It has 5 letters.'
  }
*/
```
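To make the scratchpad formatting used throughout this page more concrete, here is a simplified sketch of what a utility like `formatToOpenAIFunctionMessages` conceptually does. This is an illustration only, not the library's implementation — the real function builds LangChain message classes and handles more cases; the type and function names here are hypothetical:

```typescript
// Each intermediate step becomes two messages: the assistant's function
// call, then a function message carrying the observed result.
type AgentStepSketch = {
  action: { tool: string; toolInput: string };
  observation: string;
};

type MessageSketch = {
  role: "assistant" | "function";
  name?: string;
  content?: string;
  functionCall?: { name: string; arguments: string };
};

function formatStepsToMessages(steps: AgentStepSketch[]): MessageSketch[] {
  return steps.flatMap((step): MessageSketch[] => [
    {
      role: "assistant",
      functionCall: {
        name: step.action.tool,
        arguments: JSON.stringify({ input: step.action.toolInput }),
      },
    },
    { role: "function", name: step.action.tool, content: step.observation },
  ]);
}
```

Replaying the transcript this way is what lets the model see which tools it already called and what they returned before deciding on its next action.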
https://js.langchain.com/v0.2/docs/integrations/toolkits/json
JSON Agent Toolkit
==================

This example shows how to load and use an agent with a JSON toolkit.

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`

```typescript
import * as fs from "fs";
import * as yaml from "js-yaml";
import { OpenAI } from "@langchain/openai";
import { JsonSpec, JsonObject } from "langchain/tools";
import { JsonToolkit, createJsonAgent } from "langchain/agents";

export const run = async () => {
  let data: JsonObject;
  try {
    const yamlFile = fs.readFileSync("openai_openapi.yaml", "utf8");
    data = yaml.load(yamlFile) as JsonObject;
    if (!data) {
      throw new Error("Failed to load OpenAPI spec");
    }
  } catch (e) {
    console.error(e);
    return;
  }

  const toolkit = new JsonToolkit(new JsonSpec(data));
  const model = new OpenAI({ temperature: 0 });
  const executor = createJsonAgent(model, toolkit);

  const input = `What are the required parameters in the request body to the /completions endpoint?`;
  console.log(`Executing with input "${input}"...`);

  const result = await executor.invoke({ input });
  console.log(`Got output ${result.output}`);
  console.log(
    `Got intermediate steps ${JSON.stringify(
      result.intermediateSteps,
      null,
      2
    )}`
  );
};
```
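The example above reads an `openai_openapi.yaml` file from the working directory. Any valid OpenAPI document works; a minimal hypothetical spec of the shape the agent navigates (the endpoint and fields below are illustrative, not the real OpenAI spec) might look like:

```yaml
# Hypothetical minimal OpenAPI document — substitute a real spec file.
openapi: "3.0.0"
info:
  title: Example API
  version: "1.0.0"
paths:
  /completions:
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [model]
              properties:
                model:
                  type: string
                prompt:
                  type: string
```

The JSON toolkit lets the agent explore this nested structure key by key to answer questions like the one in the example.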
https://js.langchain.com/v0.2/docs/integrations/toolkits/vectorstore
VectorStore Agent Toolkit
=========================

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install @langchain/openai @langchain/community`
* Yarn: `yarn add @langchain/openai @langchain/community`
* pnpm: `pnpm add @langchain/openai @langchain/community`

This example shows how to load and use an agent with a vectorstore toolkit.
```typescript
import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import * as fs from "fs";
import {
  VectorStoreToolkit,
  createVectorStoreAgent,
  VectorStoreInfo,
} from "langchain/agents";

const model = new OpenAI({ temperature: 0 });

/* Load in the file we want to do question answering over */
const text = fs.readFileSync("state_of_the_union.txt", "utf8");

/* Split the text into chunks using character, not token, size */
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

/* Create the vectorstore */
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

/* Create the agent */
const vectorStoreInfo: VectorStoreInfo = {
  name: "state_of_union_address",
  description: "the most recent state of the Union address",
  vectorStore,
};

const toolkit = new VectorStoreToolkit(vectorStoreInfo, model);
const agent = createVectorStoreAgent(model, toolkit);

const input =
  "What did Biden say about Ketanji Brown Jackson in the state of the union address?";
console.log(`Executing: ${input}`);

const result = await agent.invoke({ input });
console.log(`Got output ${result.output}`);
console.log(
  `Got intermediate steps ${JSON.stringify(result.intermediateSteps, null, 2)}`
);
```

#### API Reference:

* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [VectorStoreToolkit](https://v02.api.js.langchain.com/classes/langchain_agents.VectorStoreToolkit.html) from `langchain/agents`
* [createVectorStoreAgent](https://v02.api.js.langchain.com/functions/langchain_agents.createVectorStoreAgent.html) from `langchain/agents`
* [VectorStoreInfo](https://v02.api.js.langchain.com/interfaces/langchain_agents.VectorStoreInfo.html) from `langchain/agents`
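The splitter in the example chunks the document by character count. As a rough sketch of that idea only — the real `RecursiveCharacterTextSplitter` is smarter, recursively trying separators such as paragraph breaks, newlines, and spaces before cutting mid-word, and supporting overlap between adjacent chunks:

```typescript
// Naive character-count splitter — a simplification of what a
// chunkSize-based text splitter does. Illustrative only.
function splitByCharacterCount(text: string, chunkSize: number): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}
```

Chunking matters because each chunk is embedded separately: chunks that are too large dilute the embedding, while chunks that are too small lose context.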
https://js.langchain.com/v0.2/docs/integrations/toolkits/sql
SQL Agent Toolkit
=================

This example shows how to load and use an agent with a SQL toolkit.

Setup
-----

You'll need to first install `typeorm`:

* npm: `npm install typeorm`
* Yarn: `yarn add typeorm`
* pnpm: `pnpm add typeorm`

Usage
-----

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`

```typescript
import { OpenAI } from "@langchain/openai";
import { SqlDatabase } from "langchain/sql_db";
import { createSqlAgent, SqlToolkit } from "langchain/agents/toolkits/sql";
import { DataSource } from "typeorm";

/** This example uses the Chinook database, a sample database available for SQL Server, Oracle, MySQL, etc.
 * To set it up, follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file
 * in the examples folder.
 */
export const run = async () => {
  const datasource = new DataSource({
    type: "sqlite",
    database: "Chinook.db",
  });
  const db = await SqlDatabase.fromDataSourceParams({
    appDataSource: datasource,
  });
  const model = new OpenAI({ temperature: 0 });
  const toolkit = new SqlToolkit(db, model);
  const executor = createSqlAgent(model, toolkit);

  const input = `List the total sales per country. Which country's customers spent the most?`;
  console.log(`Executing with input "${input}"...`);
  const result = await executor.invoke({ input });
  console.log(`Got output ${result.output}`);
  console.log(
    `Got intermediate steps ${JSON.stringify(result.intermediateSteps, null, 2)}`
  );

  await datasource.destroy();
};
```

#### API Reference:

* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [SqlDatabase](https://v02.api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [createSqlAgent](https://v02.api.js.langchain.com/functions/langchain_agents_toolkits_sql.createSqlAgent.html) from `langchain/agents/toolkits/sql`
* [SqlToolkit](https://v02.api.js.langchain.com/classes/langchain_agents_toolkits_sql.SqlToolkit.html) from `langchain/agents/toolkits/sql`
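If the raw JSON dump of intermediate steps is too noisy, a small formatter can summarize each step on one line. This is a sketch, assuming each step has the `{ action, observation }` shape produced by LangChain agents; the helper name is ours, and the exact step fields are worth verifying against your installed version:

```typescript
// Minimal shape of an agent intermediate step; the real AgentStep type
// from LangChain carries these same fields.
interface AgentStep {
  action: { tool: string; toolInput: string | object; log: string };
  observation: string;
}

// Render each step as a one-line "tool(input) -> observation" summary,
// truncating long observations so the log stays readable.
export function summarizeSteps(steps: AgentStep[], maxLen = 80): string[] {
  return steps.map((step, i) => {
    const input =
      typeof step.action.toolInput === "string"
        ? step.action.toolInput
        : JSON.stringify(step.action.toolInput);
    const obs =
      step.observation.length > maxLen
        ? step.observation.slice(0, maxLen) + "..."
        : step.observation;
    return `${i + 1}. ${step.action.tool}(${input}) -> ${obs}`;
  });
}
```

You could then log `summarizeSteps(result.intermediateSteps).join("\n")` instead of the full `JSON.stringify` output.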
Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.2/docs/integrations/toolkits/openapi
OpenAPI Agent Toolkit
=====================

This example shows how to load and use an agent with an OpenAPI toolkit.

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`

The example below also uses `js-yaml` to parse the spec, so install it as well (`npm install js-yaml`).

```typescript
import * as fs from "fs";
import * as yaml from "js-yaml";
import { OpenAI } from "@langchain/openai";
import { JsonSpec, JsonObject } from "langchain/tools";
import { createOpenApiAgent, OpenApiToolkit } from "langchain/agents";

export const run = async () => {
  let data: JsonObject;
  try {
    const yamlFile = fs.readFileSync("openai_openapi.yaml", "utf8");
    data = yaml.load(yamlFile) as JsonObject;
    if (!data) {
      throw new Error("Failed to load OpenAPI spec");
    }
  } catch (e) {
    console.error(e);
    return;
  }

  const headers = {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  };
  const model = new OpenAI({ temperature: 0 });
  const toolkit = new OpenApiToolkit(new JsonSpec(data), model, headers);
  const executor = createOpenApiAgent(model, toolkit);

  const input = `Make a POST request to openai /completions. The prompt should be 'tell me a joke.'`;
  console.log(`Executing with input "${input}"...`);
  const result = await executor.invoke({ input });
  console.log(`Got output ${result.output}`);
  console.log(
    `Got intermediate steps ${JSON.stringify(result.intermediateSteps, null, 2)}`
  );
};
```

Disclaimer ⚠️
=============

This agent can make requests to external APIs. Use with caution, especially when granting access to users. Be aware that, even though it should not happen in normal operation, this agent could send requests containing provided credentials or other sensitive data to unverified or potentially malicious URLs. Consider adding limitations on which actions can be performed via the agent, which APIs it can access, which headers can be passed, and more. In addition, consider implementing measures to validate URLs before sending requests, and to securely handle and protect sensitive data such as credentials.
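One concrete way to implement the URL validation suggested in the disclaimer is an allowlist check applied before any request is sent. This is a minimal sketch; the allowlist contents and the helper name are illustrative, not part of the toolkit's API:

```typescript
// Hosts the agent is permitted to call; anything else is rejected.
// This particular allowlist is an example — tailor it to the APIs you expose.
const ALLOWED_HOSTS = new Set(["api.openai.com"]);

// Returns true only for well-formed http(s) URLs whose host is allowlisted.
export function isAllowedUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a parseable URL at all
  }
  if (url.protocol !== "https:" && url.protocol !== "http:") {
    return false; // reject file:, data:, and other schemes outright
  }
  return ALLOWED_HOSTS.has(url.hostname);
}
```

A wrapper around the toolkit's request tools (or around `fetch` itself) could call `isAllowedUrl` and refuse to proceed when it returns false.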
https://js.langchain.com/v0.2/docs/integrations/toolkits/sfn_agent
AWS Step Functions Toolkit
==========================

**AWS Step Functions** is a visual workflow service that helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines.

By including an `AWSSfn` tool in the list of tools provided to an Agent, you can grant your Agent the ability to invoke async workflows running in your AWS Cloud.

When an Agent uses the `AWSSfn` tool, it provides an argument of type `string`, which is in turn passed to one of the actions this tool supports. The supported actions are: `StartExecution`, `DescribeExecution`, and `SendTaskSuccess`.
Setup
-----

You'll need to install the Node AWS Step Functions SDK:

* npm: `npm install @aws-sdk/client-sfn`
* Yarn: `yarn add @aws-sdk/client-sfn`
* pnpm: `pnpm add @aws-sdk/client-sfn`

Usage
-----

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install @langchain/openai @langchain/community`
* Yarn: `yarn add @langchain/openai @langchain/community`
* pnpm: `pnpm add @langchain/openai @langchain/community`

### Note about credentials:

* If you have not run [`aws configure`](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) via the AWS CLI, the `region`, `accessKeyId`, and `secretAccessKey` must be provided to the `AWSSfn` constructor.
* The IAM role corresponding to those credentials must have permission to invoke the Step Function.

```typescript
import { OpenAI } from "@langchain/openai";
import {
  AWSSfnToolkit,
  createAWSSfnAgent,
} from "@langchain/community/agents/toolkits/aws_sfn";

const _EXAMPLE_STATE_MACHINE_ASL = `{
  "Comment": "A simple example of the Amazon States Language to define a state machine for new client onboarding.",
  "StartAt": "OnboardNewClient",
  "States": {
    "OnboardNewClient": {
      "Type": "Pass",
      "Result": "Client onboarded!",
      "End": true
    }
  }
}`;

/**
 * This example uses a deployed AWS Step Function state machine with the above Amazon States Language (ASL) definition.
 * You can test by provisioning a state machine using the above ASL within your AWS environment, or you can use a tool like LocalStack
 * to mock AWS services locally. See https://localstack.cloud/ for more information.
 */
export const run = async () => {
  const model = new OpenAI({ temperature: 0 });
  const toolkit = new AWSSfnToolkit({
    name: "onboard-new-client-workflow",
    description:
      "Onboard new client workflow. Can also be used to get status of any executing workflow or state machine.",
    stateMachineArn:
      "arn:aws:states:us-east-1:1234567890:stateMachine:my-state-machine", // Update with your state machine ARN accordingly
    region: "<your Sfn's region>",
    accessKeyId: "<your access key id>",
    secretAccessKey: "<your secret access key>",
  });
  const executor = createAWSSfnAgent(model, toolkit);

  const input = `Onboard john doe (john@example.com) as a new client.`;
  console.log(`Executing with input "${input}"...`);
  const result = await executor.invoke({ input });
  console.log(`Got output ${result.output}`);
  console.log(
    `Got intermediate steps ${JSON.stringify(result.intermediateSteps, null, 2)}`
  );
};
```

#### API Reference:

* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [AWSSfnToolkit](https://v02.api.js.langchain.com/classes/langchain_community_agents_toolkits_aws_sfn.AWSSfnToolkit.html) from `@langchain/community/agents/toolkits/aws_sfn`
* [createAWSSfnAgent](https://v02.api.js.langchain.com/functions/langchain_community_agents_toolkits_aws_sfn.createAWSSfnAgent.html) from `@langchain/community/agents/toolkits/aws_sfn`
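Before provisioning a state machine, it can be handy to sanity-check an ASL definition like the example above. The sketch below performs a few basic structural checks only; it is not a full ASL validator, and AWS's own tooling remains the authority:

```typescript
// Basic structural checks on an Amazon States Language (ASL) definition:
// valid JSON, a StartAt field, and a States map containing the StartAt state.
// Returns a list of problems; an empty array means the basic checks passed.
export function checkAslDefinition(aslJson: string): string[] {
  const problems: string[] = [];
  let doc: any;
  try {
    doc = JSON.parse(aslJson);
  } catch {
    return ["definition is not valid JSON"];
  }
  if (typeof doc.StartAt !== "string") {
    problems.push("missing StartAt");
  }
  if (typeof doc.States !== "object" || doc.States === null) {
    problems.push("missing States");
  } else if (typeof doc.StartAt === "string" && !(doc.StartAt in doc.States)) {
    problems.push(`StartAt "${doc.StartAt}" is not defined in States`);
  }
  return problems;
}
```

Running `checkAslDefinition` over the `_EXAMPLE_STATE_MACHINE_ASL` string from the example should yield an empty array.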
https://js.langchain.com/v0.2/docs/integrations/text_embedding/cloudflare_ai
Cloudflare Workers AI
=====================

If you're deploying your project in a Cloudflare worker, you can use Cloudflare's [built-in Workers AI embeddings](https://developers.cloudflare.com/workers-ai/) with LangChain.js.

Setup
-----

First, [follow the official docs](https://developers.cloudflare.com/workers-ai/get-started/workers-wrangler/) to set up your worker. You'll also need to install the LangChain Cloudflare integration package:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install @langchain/cloudflare`
* Yarn: `yarn add @langchain/cloudflare`
* pnpm: `pnpm add @langchain/cloudflare`

Usage
-----

Below is an example worker that uses Workers AI embeddings with a [Cloudflare Vectorize](/v0.2/docs/integrations/vectorstores/cloudflare_vectorize) vectorstore.

note

If running locally, be sure to run wrangler as `npx wrangler dev --remote`!
```toml
name = "langchain-test"
main = "worker.js"
compatibility_date = "2024-01-10"

[[vectorize]]
binding = "VECTORIZE_INDEX"
index_name = "langchain-test"

[ai]
binding = "AI"
```

```typescript
// @ts-nocheck
import type {
  VectorizeIndex,
  Fetcher,
  Request,
} from "@cloudflare/workers-types";
import {
  CloudflareVectorizeStore,
  CloudflareWorkersAIEmbeddings,
} from "@langchain/cloudflare";

export interface Env {
  VECTORIZE_INDEX: VectorizeIndex;
  AI: Fetcher;
}

export default {
  async fetch(request: Request, env: Env) {
    const { pathname } = new URL(request.url);
    const embeddings = new CloudflareWorkersAIEmbeddings({
      binding: env.AI,
      model: "@cf/baai/bge-small-en-v1.5",
    });
    const store = new CloudflareVectorizeStore(embeddings, {
      index: env.VECTORIZE_INDEX,
    });
    if (pathname === "/") {
      const results = await store.similaritySearch("hello", 5);
      return Response.json(results);
    } else if (pathname === "/load") {
      // Upsertion by id is supported
      await store.addDocuments(
        [
          { pageContent: "hello", metadata: {} },
          { pageContent: "world", metadata: {} },
          { pageContent: "hi", metadata: {} },
        ],
        { ids: ["id1", "id2", "id3"] }
      );
      return Response.json({ success: true });
    } else if (pathname === "/clear") {
      await store.delete({ ids: ["id1", "id2", "id3"] });
      return Response.json({ success: true });
    }
    return Response.json({ error: "Not Found" }, { status: 404 });
  },
};
```

#### API Reference:

* [CloudflareVectorizeStore](https://v02.api.js.langchain.com/classes/langchain_cloudflare.CloudflareVectorizeStore.html) from `@langchain/cloudflare`
* [CloudflareWorkersAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_cloudflare.CloudflareWorkersAIEmbeddings.html) from `@langchain/cloudflare`
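If you want to compare embedding vectors yourself (for instance, to debug what the vector store's similarity search is ranking), cosine similarity over the raw vectors returned by `embedQuery` is the usual metric. A minimal, dependency-free sketch:

```typescript
// Cosine similarity between two equal-length embedding vectors.
// Returns a value in [-1, 1]; higher means more similar.
export function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error("vectors must have the same length");
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Inside the worker you could, for example, compare `await embeddings.embedQuery("hello")` against `await embeddings.embedQuery("hi")` to see how close the model considers the two strings.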
https://js.langchain.com/v0.1/docs/integrations/platforms/google/#chatgooglegenerativeai
Google
======

Functionality related to [Google Cloud Platform](https://cloud.google.com/)

Chat models
-----------

### Gemini Models

Access Gemini models such as `gemini-pro` and `gemini-pro-vision` through the [`ChatGoogleGenerativeAI`](/v0.1/docs/integrations/chat/google_generativeai/) class, or if using VertexAI, via the [`ChatVertexAI`](/v0.1/docs/integrations/chat/google_vertex_ai/) class.

tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

* npm: `npm install @langchain/google-genai`
* Yarn: `yarn add @langchain/google-genai`
* pnpm: `pnpm add @langchain/google-genai`

Configure your API key.
```bash
export GOOGLE_API_KEY=your-api-key
```

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({
  model: "gemini-pro",
  maxOutputTokens: 2048,
});

// Batch and stream are also supported
const res = await model.invoke([
  [
    "human",
    "What would be a good company name for a company that makes colorful socks?",
  ],
]);
```

Gemini vision models support image inputs when providing a single human message. For example:

```typescript
import * as fs from "fs";
import { HumanMessage } from "@langchain/core/messages";

const visionModel = new ChatGoogleGenerativeAI({
  model: "gemini-pro-vision",
  maxOutputTokens: 2048,
});
// Note: the MIME type in the data URL should match the file format (.jpg -> image/jpeg).
const image = fs.readFileSync("./hotdog.jpg").toString("base64");
const input2 = [
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "Describe the following image.",
      },
      {
        type: "image_url",
        image_url: `data:image/jpeg;base64,${image}`,
      },
    ],
  }),
];
const res = await visionModel.invoke(input2);
```

tip

Click [here](/v0.1/docs/integrations/chat/google_generativeai/) for the `@langchain/google-genai` specific integration docs

tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/google-vertexai`
* Yarn: `yarn add @langchain/google-vertexai`
* pnpm: `pnpm add @langchain/google-vertexai`

Then, you'll need to add your service account credentials, either directly as a `GOOGLE_VERTEX_AI_WEB_CREDENTIALS` environment variable:

```bash
GOOGLE_VERTEX_AI_WEB_CREDENTIALS={"type":"service_account","project_id":"YOUR_PROJECT-12345",...}
```

or as a file path:

```bash
GOOGLE_VERTEX_AI_WEB_CREDENTIALS_FILE=/path/to/your/credentials.json
```

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";
// Or, if using the web entrypoint:
// import { ChatVertexAI } from "@langchain/google-vertexai-web";

const model = new ChatVertexAI({
  model: "gemini-1.0-pro",
  maxOutputTokens: 2048,
});

// Batch and stream are also supported
const res = await model.invoke([
  [
    "human",
    "What would be a good company name for a company that makes colorful socks?",
  ],
]);
```

Gemini vision models support image inputs when providing a single human message. For example:

```typescript
import * as fs from "fs";
import { HumanMessage } from "@langchain/core/messages";

const visionModel = new ChatVertexAI({
  model: "gemini-pro-vision",
  maxOutputTokens: 2048,
});
const image = fs.readFileSync("./hotdog.png").toString("base64");
const input2 = [
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "Describe the following image.",
      },
      {
        type: "image_url",
        image_url: `data:image/png;base64,${image}`,
      },
    ],
  }),
];
const res = await visionModel.invoke(input2);
```

tip

Click [here](/v0.1/docs/integrations/chat/google_vertex_ai/) for the `@langchain/google-vertexai` specific integration docs

The value of `image_url` must be a base64 encoded image (e.g., `data:image/png;base64,abcd124`).

### Vertex AI (Legacy)

tip

See the legacy Google PaLM and VertexAI documentation [here](/v0.1/docs/integrations/chat/google_palm/) for chat, and [here](/v0.1/docs/integrations/llms/google_palm/) for LLMs.
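The multimodal message payload in the GenAI and VertexAI vision snippets has the same shape, so a small helper that builds the content array from a base64 string can cut down duplication. This is a sketch; the helper name is ours, not part of the library, and it simply produces plain objects matching the `HumanMessage` content format shown above:

```typescript
// Build the multimodal content array for a "describe this image" request.
// `mimeType` should match the actual image format (e.g. "image/jpeg" for .jpg,
// "image/png" for .png).
export function imageDescriptionContent(
  base64Image: string,
  mimeType: string,
  prompt = "Describe the following image."
): Array<{ type: string; text?: string; image_url?: string }> {
  return [
    { type: "text", text: prompt },
    { type: "image_url", image_url: `data:${mimeType};base64,${base64Image}` },
  ];
}
```

With this in place, either snippet's message becomes `new HumanMessage({ content: imageDescriptionContent(image, "image/png") })`.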
Vector Store
------------

### Vertex AI Vector Search

> [Vertex AI Vector Search](https://cloud.google.com/vertex-ai/docs/matching-engine/overview), formerly known as Vertex AI Matching Engine, provides the industry's leading high-scale, low-latency vector database. These vector databases are commonly referred to as vector similarity-matching or approximate nearest neighbor (ANN) services.

```typescript
import { MatchingEngine } from "langchain/vectorstores/googlevertexai";
```

Tools
-----

### Google Search

* Set up a Custom Search Engine, following [these instructions](https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search)
* Get an API key and Custom Search Engine ID from the previous step, and set them as the environment variables `GOOGLE_API_KEY` and `GOOGLE_CSE_ID`, respectively

The `GoogleCustomSearch` utility wraps this API. To import it:

```typescript
import { GoogleCustomSearch } from "langchain/tools";
```

You can load this wrapper as a tool to use with an agent:

```typescript
const tools = [new GoogleCustomSearch({})];
// Pass this variable into your agent.
```
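Under the hood, a Custom Search request combines the API key (`key`), engine ID (`cx`), and query (`q`) into a single GET request against the Custom Search JSON API. A minimal sketch of that URL construction (the `buildSearchUrl` helper is hypothetical, not part of LangChain):

```typescript
// Illustrative only: assemble the Custom Search JSON API request URL from the
// same values the GOOGLE_API_KEY and GOOGLE_CSE_ID env vars carry.
export function buildSearchUrl(
  apiKey: string,
  cseId: string,
  query: string
): string {
  const params = new URLSearchParams({ key: apiKey, cx: cseId, q: query });
  return `https://www.googleapis.com/customsearch/v1?${params}`;
}
```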
* * *

Community

* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)

GitHub

* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)

More

* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)

Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.2/docs/integrations/chat/alibaba_tongyi
ChatAlibabaTongyi
=================

LangChain.js supports the Alibaba Qwen family of models.

Setup
-----

You'll need to sign up for an Alibaba API key and set it as an environment variable named `ALIBABA_API_KEY`.

Then, you'll need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm
* Yarn
* pnpm

npm install @langchain/community

yarn add @langchain/community

pnpm add @langchain/community

Usage
-----

Here's an example:

```typescript
import { ChatAlibabaTongyi } from "@langchain/community/chat_models/alibaba_tongyi";
import { HumanMessage } from "@langchain/core/messages";

// Default model is qwen-turbo
const qwenTurbo = new ChatAlibabaTongyi({
  alibabaApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.ALIBABA_API_KEY
});

// Use qwen-plus
const qwenPlus = new ChatAlibabaTongyi({
  model: "qwen-plus", // Available models: qwen-turbo, qwen-plus, qwen-max
  temperature: 1,
  alibabaApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.ALIBABA_API_KEY
});

const messages = [new HumanMessage("Hello")];

const res = await qwenTurbo.invoke(messages);
/*
AIMessage {
  content: "Hello! How can I help you today? Is there something you would like to talk about or ask about? I'm here to assist you with any questions you may have.",
}
*/

const res2 = await qwenPlus.invoke(messages);
/*
AIMessage {
  content: "Hello! How can I help you today? Is there something you would like to talk about or ask about? I'm here to assist you with any questions you may have.",
}
*/
```

#### API Reference:

* [ChatAlibabaTongyi](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_alibaba_tongyi.ChatAlibabaTongyi.html) from `@langchain/community/chat_models/alibaba_tongyi`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
https://js.langchain.com/v0.2/docs/integrations/chat/azure
Azure ChatOpenAI
================

[Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) is a cloud service that helps you quickly develop generative AI experiences with a diverse set of prebuilt and curated models from OpenAI, Meta, and beyond.

LangChain.js supports integration with [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) using either the dedicated [Azure OpenAI SDK](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai) or the [OpenAI SDK](https://github.com/openai/openai-node).

You can learn more about Azure OpenAI and its differences from the OpenAI API on [this page](https://learn.microsoft.com/azure/ai-services/openai/overview). If you don't have an Azure account, you can [create a free account](https://azure.microsoft.com/free/) to get started.

Using the OpenAI SDK
--------------------

You can use the `ChatOpenAI` class to access OpenAI instances hosted on Azure. For example, if your Azure instance is hosted under `https://{MY_INSTANCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}`, you could initialize your instance like this:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm

npm install @langchain/openai

yarn add @langchain/openai

pnpm add @langchain/openai

tip

We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  temperature: 0.9,
  azureOpenAIApiKey: "SOME_SECRET_VALUE", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
  azureOpenAIApiVersion: "YOUR-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
  azureOpenAIApiInstanceName: "{MY_INSTANCE_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
  azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
});
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`

If your instance is hosted under a domain other than the default `openai.azure.com`, you'll need to use the alternate `AZURE_OPENAI_BASE_PATH` environment variable.
For example, here's how you would connect to the domain `https://westeurope.api.microsoft.com/openai/deployments/{DEPLOYMENT_NAME}`:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  temperature: 0.9,
  azureOpenAIApiKey: "SOME_SECRET_VALUE", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
  azureOpenAIApiVersion: "YOUR-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
  azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
  azureOpenAIBasePath:
    "https://westeurope.api.microsoft.com/openai/deployments", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH
});
```

#### API Reference:

* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`

Using the Azure OpenAI SDK
--------------------------

You'll first need to install the [`@langchain/azure-openai`](https://www.npmjs.com/package/@langchain/azure-openai) package:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm
* Yarn
* pnpm

npm install -S @langchain/azure-openai

yarn add @langchain/azure-openai

pnpm add @langchain/azure-openai

You'll also need to have an Azure OpenAI instance deployed. You can deploy a version on the Azure Portal following [this guide](https://learn.microsoft.com/azure/ai-services/openai/how-to/create-resource?pivots=web-portal).

Once you have your instance running, make sure you have the endpoint and key. You can find them in the Azure Portal, under the "Keys and Endpoint" section of your instance.
You can then define the following environment variables to use the service:

```
AZURE_OPENAI_API_ENDPOINT=<YOUR_ENDPOINT>
AZURE_OPENAI_API_KEY=<YOUR_KEY>
AZURE_OPENAI_API_EMBEDDING_DEPLOYMENT_NAME=<YOUR_EMBEDDING_DEPLOYMENT_NAME>
```

Alternatively, you can pass the values directly to the `AzureChatOpenAI` constructor:

```typescript
import { AzureChatOpenAI } from "@langchain/azure-openai";

const model = new AzureChatOpenAI({
  azureOpenAIEndpoint: "<your_endpoint>",
  apiKey: "<your_key>",
  azureOpenAIApiDeploymentName: "<your_embedding_deployment_name>",
  model: "<your_model>",
});
```

If you're using Azure Managed Identity, you can also pass the credentials directly to the constructor:

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { AzureChatOpenAI } from "@langchain/azure-openai";

const credentials = new DefaultAzureCredential();

const model = new AzureChatOpenAI({
  credentials,
  azureOpenAIEndpoint: "<your_endpoint>",
  azureOpenAIApiDeploymentName: "<your_embedding_deployment_name>",
  model: "<your_model>",
});
```

### Usage example

```typescript
import { AzureChatOpenAI } from "@langchain/azure-openai";

const model = new AzureChatOpenAI({
  model: "gpt-4",
  prefixMessages: [
    {
      role: "system",
      content: "You are a helpful assistant that answers in pirate language",
    },
  ],
  maxTokens: 50,
});

const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```

#### API Reference:

* [AzureChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_azure_openai.AzureChatOpenAI.html) from `@langchain/azure-openai`
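Each of the constructor fields above falls back to an environment variable in Node.js when not passed explicitly. A sketch of that resolution pattern (the `fromEnv` helper is illustrative, not a LangChain export):

```typescript
// Illustrative only: an explicit constructor value wins, then the environment
// variable; a missing value fails fast with the variable's name in the message.
export function fromEnv(
  explicit: string | undefined,
  envName: string,
  env: Record<string, string | undefined> = process.env
): string {
  const value = explicit ?? env[envName];
  if (!value) {
    throw new Error(`Missing configuration: set ${envName} or pass it explicitly`);
  }
  return value;
}
```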
https://js.langchain.com/v0.2/docs/integrations/chat/baidu_wenxin
ChatBaiduWenxin
===============

LangChain.js supports Baidu's ERNIE-Bot family of models. Here's an example:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm
* Yarn
* pnpm

npm install @langchain/community

yarn add @langchain/community

pnpm add @langchain/community

Available models: `ERNIE-Bot`, `ERNIE-Bot-turbo`, `ERNIE-Bot-4`, `ERNIE-Speed-8K`, `ERNIE-Speed-128K`, `ERNIE-4.0-8K`, `ERNIE-4.0-8K-Preview`, `ERNIE-3.5-8K`, `ERNIE-3.5-8K-Preview`, `ERNIE-Lite-8K`, `ERNIE-Tiny-8K`, `ERNIE-Character-8K`, `ERNIE Speed-AppBuilder`

```typescript
import { ChatBaiduWenxin } from "@langchain/community/chat_models/baiduwenxin";
import { HumanMessage } from "@langchain/core/messages";

// Default model is ERNIE-Bot-turbo
const ernieTurbo = new ChatBaiduWenxin({
  baiduApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.BAIDU_API_KEY
  baiduSecretKey: "YOUR-SECRET-KEY", // In Node.js defaults to process.env.BAIDU_SECRET_KEY
});

// Use ERNIE-Bot
const ernie = new ChatBaiduWenxin({
  model: "ERNIE-Bot", // Available models are shown above
  temperature: 1,
  baiduApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.BAIDU_API_KEY
  baiduSecretKey: "YOUR-SECRET-KEY", // In Node.js defaults to process.env.BAIDU_SECRET_KEY
});

const messages = [new HumanMessage("Hello")];

let res = await ernieTurbo.invoke(messages);
/*
AIChatMessage {
  text: 'Hello! How may I assist you today?',
  name: undefined,
  additional_kwargs: {}
}
*/

res = await ernie.invoke(messages);
/*
AIChatMessage {
  text: 'Hello! How may I assist you today?',
  name: undefined,
  additional_kwargs: {}
}
*/
```

#### API Reference:

* [ChatBaiduWenxin](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_baiduwenxin.ChatBaiduWenxin.html) from `@langchain/community/chat_models/baiduwenxin`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
https://js.langchain.com/v0.2/docs/integrations/chat/anthropic_tools
Anthropic Tools
===============

danger

This API is deprecated, as Anthropic now officially supports tools. [Click here to read the documentation](/v0.2/docs/integrations/chat/anthropic#tools).

LangChain offers an experimental wrapper around Anthropic that gives it the same API as OpenAI Functions.

Setup
-----

To start, install the `@langchain/anthropic` integration package.

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm
* Yarn
* pnpm

npm install @langchain/anthropic

yarn add @langchain/anthropic

pnpm add @langchain/anthropic

Initialize model
----------------

You can initialize this wrapper the same way you'd initialize a standard `ChatAnthropic` instance:

tip

We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
```typescript
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";

const model = new ChatAnthropicTools({
  temperature: 0.1,
  model: "claude-3-sonnet-20240229",
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.ANTHROPIC_API_KEY
});
```

Passing in tools[​](#passing-in-tools "Direct link to Passing in tools")
------------------------------------------------------------------------

You can now pass in tools the same way as OpenAI:

```typescript
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatAnthropicTools({
  temperature: 0.1,
  model: "claude-3-sonnet-20240229",
}).bind({
  tools: [
    {
      type: "function",
      function: {
        name: "get_current_weather",
        description: "Get the current weather in a given location",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The city and state, e.g. San Francisco, CA",
            },
            unit: { type: "string", enum: ["celsius", "fahrenheit"] },
          },
          required: ["location"],
        },
      },
    },
  ],
  // You can set the `tool_choice` arg to force the model to use a function
  tool_choice: {
    type: "function",
    function: {
      name: "get_current_weather",
    },
  },
});

const response = await model.invoke([
  new HumanMessage({
    content: "What's the weather in Boston?",
  }),
]);

console.log(response);
/*
  AIMessage {
    lc_serializable: true,
    lc_kwargs: { content: '', additional_kwargs: { tool_calls: [Array] } },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: '',
    name: undefined,
    additional_kwargs: { tool_calls: [ [Object] ] }
  }
*/

console.log(response.additional_kwargs.tool_calls);
/*
  [
    {
      id: '0',
      type: 'function',
      function: {
        name: 'get_current_weather',
        arguments: '{"location":"Boston, MA","unit":"fahrenheit"}'
      }
    }
  ]
*/
```

#### API Reference:

* [ChatAnthropicTools](https://v02.api.js.langchain.com/classes/langchain_anthropic_experimental.ChatAnthropicTools.html) from `@langchain/anthropic/experimental`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

Parallel tool calling[​](#parallel-tool-calling "Direct link to Parallel tool calling")
---------------------------------------------------------------------------------------

The model may choose to call multiple tools. Here is an example using an extraction use-case:

```typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { PromptTemplate } from "@langchain/core/prompts";
import { JsonOutputToolsParser } from "@langchain/core/output_parsers/openai_tools";

const EXTRACTION_TEMPLATE = `Extract and save the relevant entities mentioned in the following passage together with their properties.

Passage:
{input}`;

const prompt = PromptTemplate.fromTemplate(EXTRACTION_TEMPLATE);

// Use Zod for easier schema declaration
const schema = z.object({
  name: z.string().describe("The name of a person"),
  height: z.number().describe("The person's height"),
  hairColor: z.optional(z.string()).describe("The person's hair color"),
});

const model = new ChatAnthropicTools({
  temperature: 0.1,
  model: "claude-3-sonnet-20240229",
}).bind({
  tools: [
    {
      type: "function",
      function: {
        name: "person",
        description: "Extracts the relevant people from the passage.",
        parameters: zodToJsonSchema(schema),
      },
    },
  ],
  // Can also set to "auto" to let the model choose a tool
  tool_choice: {
    type: "function",
    function: {
      name: "person",
    },
  },
});

// Use a JsonOutputToolsParser to get the parsed JSON response directly.
const chain = prompt.pipe(model).pipe(new JsonOutputToolsParser());

const response = await chain.invoke({
  input:
    "Alex is 5 feet tall. Claudia is 1 foot taller than Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.",
});

console.log(JSON.stringify(response, null, 2));
/*
  [
    {
      "type": "person",
      "args": { "name": "Alex", "height": 5, "hairColor": "blonde" }
    },
    {
      "type": "person",
      "args": { "name": "Claudia", "height": 6, "hairColor": "brunette" }
    }
  ]
*/
```

#### API Reference:

* [ChatAnthropicTools](https://v02.api.js.langchain.com/classes/langchain_anthropic_experimental.ChatAnthropicTools.html) from `@langchain/anthropic/experimental`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [JsonOutputToolsParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers_openai_tools.JsonOutputToolsParser.html) from `@langchain/core/output_parsers/openai_tools`

`.withStructuredOutput({ ... })`[​](#withstructuredoutput-- "Direct link to withstructuredoutput--")
----------------------------------------------------------------------------------------------------

info: The `.withStructuredOutput` method is in beta. It is actively being worked on, so the API may change.
Using the `.withStructuredOutput` method, you can make the LLM return structured output, given only a Zod or JSON schema:

```typescript
import { z } from "zod";
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute"),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

const model = new ChatAnthropicTools({
  model: "claude-3-sonnet-20240229",
  temperature: 0.1,
});

// Pass the schema and tool name to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorSchema);

// You can also set force: false to allow the model scratchpad space.
// This may improve reasoning capabilities.
// const modelWithTool = model.withStructuredOutput(calculatorSchema, {
//   force: false,
// });

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);

const response = await chain.invoke({
  input: "What is 2 + 2?",
});
console.log(response);
/*
  { operation: 'add', number1: 2, number2: 2 }
*/
```

#### API Reference:

* [ChatAnthropicTools](https://v02.api.js.langchain.com/classes/langchain_anthropic_experimental.ChatAnthropicTools.html) from `@langchain/anthropic/experimental`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`

### Using JSON schema:[​](#using-json-schema "Direct link to Using JSON schema:")

```typescript
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const calculatorJsonSchema = {
  type: "object",
  properties: {
    operation: {
      type: "string",
      enum: ["add", "subtract", "multiply", "divide"],
      description: "The type of operation to execute.",
    },
    number1: { type: "number", description: "The first number to operate on." },
    number2: {
      type: "number",
      description: "The second number to operate on.",
    },
  },
  required: ["operation", "number1", "number2"],
  description: "A simple calculator tool",
};

const model = new ChatAnthropicTools({
  model: "claude-3-sonnet-20240229",
  temperature: 0.1,
});

// Pass the schema and, optionally, the tool name to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorJsonSchema, {
  name: "calculator",
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);

const response = await chain.invoke({
  input: "What is 2 + 2?",
});
console.log(response);
/*
  { operation: 'add', number1: 2, number2: 2 }
*/
```

#### API Reference:

* [ChatAnthropicTools](https://v02.api.js.langchain.com/classes/langchain_anthropic_experimental.ChatAnthropicTools.html) from `@langchain/anthropic/experimental`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
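Throughout this page, the model's tool invocations surface in `additional_kwargs.tool_calls` with their `arguments` as JSON strings, and `JsonOutputToolsParser` turns them into `{ type, args }` objects. A rough sketch of what that decoding step amounts to (plain `JSON.parse`, no LangChain imports — the `parseToolCall` helper is made up for illustration, not LangChain's actual implementation):

```typescript
// A raw tool call, shaped like the entries shown in
// additional_kwargs.tool_calls earlier on this page.
const toolCall = {
  id: "0",
  type: "function",
  function: {
    name: "get_current_weather",
    arguments: '{"location":"Boston, MA","unit":"fahrenheit"}',
  },
};

// Hypothetical helper: decode one tool call into the { type, args }
// shape that JsonOutputToolsParser produces.
function parseToolCall(call: typeof toolCall) {
  return {
    type: call.function.name,
    args: JSON.parse(call.function.arguments),
  };
}

const parsed = parseToolCall(toolCall);
console.log(parsed.type); // get_current_weather
console.log(parsed.args.location); // Boston, MA
```

Note that `arguments` arrives as a string, not an object — forgetting to `JSON.parse` it is a common mistake when wiring tool calls to real functions.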
Community

* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)

GitHub

* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)

More

* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)

Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.2/docs/integrations/chat/bedrock
BedrockChat
===========

> [Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that makes Foundation Models (FMs) from leading AI startups and Amazon available via an API. You can choose from a wide range of FMs to find the model that is best suited for your use case.

Setup[​](#setup "Direct link to Setup")
---------------------------------------

You'll need to install the `@langchain/community` package:

tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```

Then, you'll need to install a few official AWS packages as peer dependencies:

```bash
npm install @aws-crypto/sha256-js @aws-sdk/credential-provider-node @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types
# or
yarn add @aws-crypto/sha256-js @aws-sdk/credential-provider-node @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types
# or
pnpm add @aws-crypto/sha256-js @aws-sdk/credential-provider-node @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types
```

You can also use BedrockChat in web environments such as Edge functions or Cloudflare Workers by omitting the `@aws-sdk/credential-provider-node` dependency and using the `web` entrypoint:

```bash
npm install @aws-crypto/sha256-js @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types
# or
yarn add @aws-crypto/sha256-js @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types
# or
pnpm add @aws-crypto/sha256-js @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types
```

Usage[​](#usage "Direct link to Usage")
---------------------------------------

tip: We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.

Currently, only Anthropic, Cohere, and Mistral models are supported by the chat model integration. For foundation models from AI21 or Amazon, see [the text generation Bedrock variant](/v0.2/docs/integrations/llms/bedrock).
```typescript
import { BedrockChat } from "@langchain/community/chat_models/bedrock";
// Or, from web environments:
// import { BedrockChat } from "@langchain/community/chat_models/bedrock/web";
import { HumanMessage } from "@langchain/core/messages";

// If no credentials are provided, the default credentials from
// @aws-sdk/credential-provider-node will be used.
// modelKwargs are additional parameters passed to the model when it
// is invoked.
const model = new BedrockChat({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
  // endpointUrl: "custom.amazonaws.com",
  // credentials: {
  //   accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
  //   secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  // },
  // modelKwargs: {
  //   anthropic_version: "bedrock-2023-05-31",
  // },
});

// Other model names include:
// "mistral.mistral-7b-instruct-v0:2"
// "mistral.mixtral-8x7b-instruct-v0:1"
//
// For a full list, see the Bedrock page in AWS.
const res = await model.invoke([
  new HumanMessage({ content: "Tell me a joke" }),
]);
console.log(res);
/*
  AIMessage {
    content: "Here's a silly joke for you:\n" +
      '\n' +
      "Why can't a bicycle stand up by itself?\n" +
      "Because it's two-tired!",
    name: undefined,
    additional_kwargs: { id: 'msg_01NYN7Rf39k4cgurqpZWYyDh' }
  }
*/

const stream = await model.stream([
  new HumanMessage({ content: "Tell me a joke" }),
]);
for await (const chunk of stream) {
  console.log(chunk.content);
}
/*
  Here
  's
   a
   silly
   joke
   for
   you
  :
  Why
   can
  't
   a
   bicycle
   stand
   up
   by
   itself
  ?
  Because
   it
  's
   two
  -
  tired
  !
*/
```

#### API Reference:

* [BedrockChat](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_bedrock.BedrockChat.html) from `@langchain/community/chat_models/bedrock`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

Multimodal inputs[​](#multimodal-inputs "Direct link to Multimodal inputs")
---------------------------------------------------------------------------

tip: Multimodal inputs are currently only supported by Anthropic Claude-3 models.

Anthropic Claude-3 models hosted on Bedrock have multimodal capabilities and can reason about images. Here's an example:

```typescript
import * as fs from "node:fs/promises";
import { BedrockChat } from "@langchain/community/chat_models/bedrock";
// Or, from web environments:
// import { BedrockChat } from "@langchain/community/chat_models/bedrock/web";
import { HumanMessage } from "@langchain/core/messages";

// If no credentials are provided, the default credentials from
// @aws-sdk/credential-provider-node will be used.
const model = new BedrockChat({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
  // endpointUrl: "custom.amazonaws.com",
  // credentials: {
  //   accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
  //   secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  // },
});

const imageData = await fs.readFile("./hotdog.jpg");

const res = await model.invoke([
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "What's in this image?",
      },
      {
        type: "image_url",
        image_url: {
          url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
        },
      },
    ],
  }),
]);
console.log(res);
/*
  AIMessage {
    content: 'The image shows a hot dog or frankfurter. It has a reddish-pink sausage filling encased in a light brown bread-like bun. The hot dog bun is split open, revealing the sausage inside. This classic fast food item is a popular snack or meal, often served at events like baseball games or cookouts. The hot dog appears to be against a plain white background, allowing the details and textures of the food item to be clearly visible.',
    name: undefined,
    additional_kwargs: { id: 'msg_01XrLPL9vCb82U3Wrrpza18p' }
  }
*/
```

#### API Reference:

* [BedrockChat](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_bedrock.BedrockChat.html) from `@langchain/community/chat_models/bedrock`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
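The multimodal example above inlines the image as a base64 data URL of the form `data:<mime type>;base64,<encoded bytes>`. A small sketch of assembling that string (the `toDataUrl` helper is made up for illustration, and an in-memory buffer stands in for the `fs.readFile` call):

```typescript
import { Buffer } from "node:buffer";

// Build a data URL of the shape expected by the image_url content block.
function toDataUrl(imageData: Buffer, mimeType: string): string {
  return `data:${mimeType};base64,${imageData.toString("base64")}`;
}

// A few bytes stand in for real image data (these are the JPEG magic bytes).
const fakeJpegBytes = Buffer.from([0xff, 0xd8, 0xff, 0xe0]);
console.log(toDataUrl(fakeJpegBytes, "image/jpeg"));
// data:image/jpeg;base64,/9j/4A==
```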
https://js.langchain.com/v0.2/docs/integrations/chat/cloudflare_workersai
ChatCloudflareWorkersAI
=======================

Workers AI allows you to run machine learning models, on the Cloudflare network, from your own code.

Usage[​](#usage "Direct link to Usage")
---------------------------------------

You'll first need to install the LangChain Cloudflare integration package:

tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/cloudflare
# or
yarn add @langchain/cloudflare
# or
pnpm add @langchain/cloudflare
```

tip: We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.

```typescript
import { ChatCloudflareWorkersAI } from "@langchain/cloudflare";

const model = new ChatCloudflareWorkersAI({
  model: "@cf/meta/llama-2-7b-chat-int8", // Default value
  cloudflareAccountId: process.env.CLOUDFLARE_ACCOUNT_ID,
  cloudflareApiToken: process.env.CLOUDFLARE_API_TOKEN,
  // Pass a custom base URL to use Cloudflare AI Gateway
  // baseUrl: `https://gateway.ai.cloudflare.com/v1/{YOUR_ACCOUNT_ID}/{GATEWAY_NAME}/workers-ai/`,
});

const response = await model.invoke([
  ["system", "You are a helpful assistant that translates English to German."],
  ["human", `Translate "I love programming".`],
]);
console.log(response);
/*
  AIMessage {
    content: `Sure! Here's the translation of "I love programming" into German:\n` +
      '\n' +
      '"Ich liebe Programmieren."\n' +
      '\n' +
      'In this sentence, "Ich" means "I," "liebe" means "love," and "Programmieren" means "programming."',
    additional_kwargs: {}
  }
*/

const stream = await model.stream([
  ["system", "You are a helpful assistant that translates English to German."],
  ["human", `Translate "I love programming".`],
]);
for await (const chunk of stream) {
  console.log(chunk);
}
/*
  AIMessageChunk { content: 'S', additional_kwargs: {} }
  AIMessageChunk { content: 'ure', additional_kwargs: {} }
  AIMessageChunk { content: '!', additional_kwargs: {} }
  AIMessageChunk { content: ' Here', additional_kwargs: {} }
  ...
*/
```

#### API Reference:

* [ChatCloudflareWorkersAI](https://v02.api.js.langchain.com/classes/langchain_cloudflare.ChatCloudflareWorkersAI.html) from `@langchain/cloudflare`
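In the streaming loop above, each `AIMessageChunk` carries a fragment of the reply in `content`, and concatenating the fragments reproduces the full message. A sketch of that accumulation with a stand-in generator (no Cloudflare account needed; `fakeStream` and `collect` are invented for illustration):

```typescript
// Stand-in for model.stream(): yields objects shaped like AIMessageChunk.
async function* fakeStream(): AsyncGenerator<{ content: string }> {
  for (const piece of ["S", "ure", "!", " Here"]) {
    yield { content: piece };
  }
}

// Accumulate streamed chunks into the complete response text.
async function collect(
  stream: AsyncIterable<{ content: string }>
): Promise<string> {
  let full = "";
  for await (const chunk of stream) {
    full += chunk.content;
  }
  return full;
}

collect(fakeStream()).then((full) => {
  console.log(full); // Sure! Here
});
```

The same loop works against the real `model.stream(...)` return value, since it is also an async iterable of chunks.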
https://js.langchain.com/v0.2/docs/integrations/chat/cohere
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) You are viewing the **preview** v0.2 docs. View the **stable** v0.1 docs [here](/v0.1/docs/get_started/introduction/). Leave feedback on the v0.2 docs [here](https://github.com/langchain-ai/langchainjs/discussions/5386). [ ![🦜️🔗 Langchain](/v0.2/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.2/img/brand/wordmark-dark.png) ](/v0.2/)[Integrations](/v0.2/docs/integrations/platforms/)[API Reference](https://v02.api.js.langchain.com) [More](#) * [People](/v0.2/docs/people/) * [Community](/v0.2/docs/community) * [Tutorials](/v0.2/docs/additional_resources/tutorials) * [Contributing](/v0.2/docs/contributing) [v0.2](#) * [v0.2](/v0.2/docs/introduction) * [v0.1](https://js.langchain.com/v0.1/docs/get_started/introduction) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.2/docs/integrations/platforms/) * [Providers](/v0.2/docs/integrations/platforms/) * [Anthropic](/v0.2/docs/integrations/platforms/anthropic) * [AWS](/v0.2/docs/integrations/platforms/aws) * [Google](/v0.2/docs/integrations/platforms/google) * [Microsoft](/v0.2/docs/integrations/platforms/microsoft) * 
[OpenAI](/v0.2/docs/integrations/platforms/openai) * [Components](/v0.2/docs/integrations/components) * [Chat models](/v0.2/docs/integrations/chat/) * [Chat models](/v0.2/docs/integrations/chat/) * [Alibaba Tongyi](/v0.2/docs/integrations/chat/alibaba_tongyi) * [Anthropic](/v0.2/docs/integrations/chat/anthropic) * [Anthropic Tools](/v0.2/docs/integrations/chat/anthropic_tools) * [Azure OpenAI](/v0.2/docs/integrations/chat/azure) * [Baidu Wenxin](/v0.2/docs/integrations/chat/baidu_wenxin) * [Bedrock](/v0.2/docs/integrations/chat/bedrock) * [Cloudflare Workers AI](/v0.2/docs/integrations/chat/cloudflare_workersai) * [Cohere](/v0.2/docs/integrations/chat/cohere) * [Fake LLM](/v0.2/docs/integrations/chat/fake) * [Fireworks](/v0.2/docs/integrations/chat/fireworks) * [Friendli](/v0.2/docs/integrations/chat/friendli) * [Google GenAI](/v0.2/docs/integrations/chat/google_generativeai) * [(Legacy) Google PaLM/VertexAI](/v0.2/docs/integrations/chat/google_palm) * [Google Vertex AI](/v0.2/docs/integrations/chat/google_vertex_ai) * [Groq](/v0.2/docs/integrations/chat/groq) * [Llama CPP](/v0.2/docs/integrations/chat/llama_cpp) * [Minimax](/v0.2/docs/integrations/chat/minimax) * [Mistral AI](/v0.2/docs/integrations/chat/mistral) * [NIBittensorChatModel](/v0.2/docs/integrations/chat/ni_bittensor) * [Ollama](/v0.2/docs/integrations/chat/ollama) * [Ollama Functions](/v0.2/docs/integrations/chat/ollama_functions) * [OpenAI](/v0.2/docs/integrations/chat/openai) * [PremAI](/v0.2/docs/integrations/chat/premai) * [PromptLayer OpenAI](/v0.2/docs/integrations/chat/prompt_layer_openai) * [TogetherAI](/v0.2/docs/integrations/chat/togetherai) * [WebLLM](/v0.2/docs/integrations/chat/web_llm) * [YandexGPT](/v0.2/docs/integrations/chat/yandex) * [ZhipuAI](/v0.2/docs/integrations/chat/zhipuai) * [LLMs](/v0.2/docs/integrations/llms/) * [Embedding models](/v0.2/docs/integrations/text_embedding) * [Document loaders](/v0.2/docs/integrations/document_loaders) * [Document 
ChatCohere
==========

info

The Cohere Chat API is still in beta. This means Cohere may make breaking changes at any time.

Setup
-----

In order to use the LangChain.js Cohere integration you'll need an API key. You can sign up for a Cohere account and create an API key [here](https://dashboard.cohere.com/welcome/register).

You'll first need to install the [`@langchain/cohere`](https://www.npmjs.com/package/@langchain/cohere) package:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/cohere
# or
yarn add @langchain/cohere
# or
pnpm add @langchain/cohere
```

Usage
-----

```typescript
import { ChatCohere } from "@langchain/cohere";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatCohere({
  apiKey: process.env.COHERE_API_KEY, // Default
  model: "command", // Default
});
const prompt = ChatPromptTemplate.fromMessages([
  ["ai", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const chain = prompt.pipe(model);
const response = await chain.invoke({
  input: "Hello there friend!",
});
console.log("response", response);
/**
response AIMessage {
  lc_serializable: true,
  lc_namespace: [ 'langchain_core', 'messages' ],
  content: "Hi there! I'm not your friend, but I'm happy to help you in whatever way I can today. How are you doing? Is there anything I can assist you with? I am an AI chatbot capable of generating thorough responses, and I'm designed to have helpful, inclusive conversations with users. \n" +
    '\n' +
    "If you have any questions, feel free to ask away, and I'll do my best to provide you with helpful responses. \n" +
    '\n' +
    'Would you like me to help you with anything in particular right now?',
  additional_kwargs: {
    response_id: 'c6baa057-ef94-4bb0-9c25-3a424963a074',
    generationId: 'd824fcdc-b922-4ae6-8d45-7b65a21cdd6a',
    token_count: {
      prompt_tokens: 66,
      response_tokens: 104,
      total_tokens: 170,
      billed_tokens: 159
    },
    meta: { api_version: [Object], billed_units: [Object] },
    tool_inputs: null
  }
}
*/
```

#### API Reference:

* [ChatCohere](https://v02.api.js.langchain.com/classes/langchain_cohere.ChatCohere.html) from `@langchain/cohere`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`

info

You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/69ccd2aa-b651-4f07-9223-ecc0b77e645e/r)

### Streaming

Cohere's API also supports streaming token responses. The example below demonstrates how to use this feature.

```typescript
import { ChatCohere } from "@langchain/cohere";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatCohere({
  apiKey: process.env.COHERE_API_KEY, // Default
  model: "command", // Default
});
const prompt = ChatPromptTemplate.fromMessages([
  ["ai", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const outputParser = new StringOutputParser();
const chain = prompt.pipe(model).pipe(outputParser);
const response = await chain.stream({
  input: "Why is the sky blue? Be concise with your answer.",
});
let streamTokens = "";
let streamIters = 0;
for await (const item of response) {
  streamTokens += item;
  streamIters += 1;
}
console.log("stream tokens:", streamTokens);
console.log("stream iters:", streamIters);
/**
stream item:
stream item:  Hello! I'm here to help answer any questions you
stream item:  might have or assist you with any task you'd like to
stream item:  accomplish. I can provide information
stream item:  on a wide range of topics
stream item: , from math and science to history and literature. I can
stream item:  also help you manage your schedule, set reminders, and
stream item:  much more. Is there something specific you need help with? Let
stream item:  me know!
stream item:
*/
```

#### API Reference:

* [ChatCohere](https://v02.api.js.langchain.com/classes/langchain_cohere.ChatCohere.html) from `@langchain/cohere`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`

info

You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/36ae0564-b096-4ec1-9318-1f82fe705fe8/r)

### Stateful conversation API

Cohere's chat API supports stateful conversations. This means the API stores previous chat messages which can be accessed by passing in a `conversation_id` field. The example below demonstrates how to use this feature.
```typescript
import { ChatCohere } from "@langchain/cohere";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatCohere({
  apiKey: process.env.COHERE_API_KEY, // Default
  model: "command", // Default
});
const conversationId = `demo_test_id-${Math.random()}`;
const response = await model.invoke(
  [new HumanMessage("Tell me a joke about bears.")],
  {
    conversationId,
  }
);
console.log("response: ", response.content);
/**
response:  Why did the bear go to the dentist?

Because she had bear teeth!

Hope you found that joke about bears to be a little bit tooth-arious!

Would you like me to tell you another one? I could also provide you with a list of jokes about bears if you prefer.

Just let me know if you have any other jokes or topics you'd like to hear about!
*/
const response2 = await model.invoke(
  [new HumanMessage("What was the subject of my last question?")],
  {
    conversationId,
  }
);
console.log("response2: ", response2.content);
/**
response2:  Your last question was about bears. You asked me to tell you a joke about bears, which I am programmed to assist with.

Would you like me to assist you with anything else bear-related? I can provide you with facts about bears, stories about bears, or even list other topics that might be of interest to you.

Please let me know if you have any other questions and I will do my best to provide you with a response.
*/
```

#### API Reference:

* [ChatCohere](https://v02.api.js.langchain.com/classes/langchain_cohere.ChatCohere.html) from `@langchain/cohere`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

info

You can see the LangSmith traces from this example [here](https://smith.langchain.com/public/8e67b05a-4e63-414e-ac91-a91acf21b262/r) and [here](https://smith.langchain.com/public/50fabc25-46fe-4727-a59c-7e4eb0de8e70/r)

### RAG

Cohere also comes out of the box with RAG support. You can pass in documents as context to the API request and Cohere's models will use them when generating responses.

```typescript
import { ChatCohere } from "@langchain/cohere";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatCohere({
  apiKey: process.env.COHERE_API_KEY, // Default
  model: "command", // Default
});
const documents = [
  {
    title: "Harrison's work",
    snippet: "Harrison worked at Kensho as an engineer.",
  },
  {
    title: "Harrison's work duration",
    snippet: "Harrison worked at Kensho for 3 years.",
  },
  {
    title: "Polar bears in the Appalachian Mountains",
    snippet:
      "Polar bears have surprisingly adapted to the Appalachian Mountains, thriving in the diverse, forested terrain despite their traditional arctic habitat. This unique situation has sparked significant interest and study in climate adaptability and wildlife behavior.",
  },
];
const response = await model.invoke(
  [new HumanMessage("Where did Harrison work and for how long?")],
  {
    documents,
  }
);
console.log("response: ", response.content);
/**
response:  Harrison worked as an engineer at Kensho for about 3 years.
*/
```

#### API Reference:

* [ChatCohere](https://v02.api.js.langchain.com/classes/langchain_cohere.ChatCohere.html) from `@langchain/cohere`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

info

You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/de71fffe-6f01-4c36-9b49-40d1bc87dea3/r)

### Connectors

The API also allows for other connections which are not static documents. An example of this is their `web-search` connector, which allows you to pass in a query and the API will search the web for relevant documents. The example below demonstrates how to use this feature.
```typescript
import { ChatCohere } from "@langchain/cohere";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatCohere({
  apiKey: process.env.COHERE_API_KEY, // Default
  model: "command", // Default
});
const response = await model.invoke(
  [new HumanMessage("How tall are the largest penguins?")],
  {
    connectors: [{ id: "web-search" }],
  }
);
console.log("response: ", JSON.stringify(response, null, 2));
/**
response:  {
  "lc": 1,
  "type": "constructor",
  "id": [ "langchain_core", "messages", "AIMessage" ],
  "kwargs": {
    "content": "The tallest penguin species currently in existence is the Emperor Penguin, with a height of 110cm to the top of their head or 115cm to the tip of their beak. This is equivalent to being approximately 3 feet and 7 inches tall.\n\nA fossil of an Anthropornis penguin was found in New Zealand and is suspected to have been even taller at 1.7 metres, though this is uncertain as the fossil is only known from preserved arm and leg bones. The height of a closely related species, Kumimanu biceae, has been estimated at 1.77 metres.\n\nDid you know that because larger-bodied penguins can hold their breath for longer, the colossus penguin could have stayed underwater for 40 minutes or more?",
    "additional_kwargs": {
      "response_id": "a3567a59-2377-439d-894f-0309f7fea1de",
      "generationId": "65dc5b1b-6099-44c4-8338-50eed0d427c5",
      "token_count": {
        "prompt_tokens": 1394,
        "response_tokens": 149,
        "total_tokens": 1543,
        "billed_tokens": 159
      },
      "meta": {
        "api_version": { "version": "1" },
        "billed_units": { "input_tokens": 10, "output_tokens": 149 }
      },
      "citations": [
        { "start": 58, "end": 73, "text": "Emperor Penguin", "documentIds": ["web-search_3:2", "web-search_4:10"] },
        { "start": 92, "end": 157, "text": "110cm to the top of their head or 115cm to the tip of their beak.", "documentIds": ["web-search_4:10"] },
        { "start": 200, "end": 225, "text": "3 feet and 7 inches tall.", "documentIds": ["web-search_3:2", "web-search_4:10"] },
        { "start": 242, "end": 262, "text": "Anthropornis penguin", "documentIds": ["web-search_9:4"] },
        { "start": 276, "end": 287, "text": "New Zealand", "documentIds": ["web-search_9:4"] },
        { "start": 333, "end": 343, "text": "1.7 metres", "documentIds": ["web-search_9:4"] },
        { "start": 403, "end": 431, "text": "preserved arm and leg bones.", "documentIds": ["web-search_9:4"] },
        { "start": 473, "end": 488, "text": "Kumimanu biceae", "documentIds": ["web-search_9:4"] },
        { "start": 512, "end": 524, "text": "1.77 metres.", "documentIds": ["web-search_9:4"] },
        { "start": 613, "end": 629, "text": "colossus penguin", "documentIds": ["web-search_3:2"] },
        { "start": 663, "end": 681, "text": "40 minutes or more", "documentIds": ["web-search_3:2"] }
      ],
      "documents": [
        {
          "id": "web-search_3:2",
          "snippet": " By comparison, the largest species of penguin alive today, the emperor penguin, is \"only\" about 4 feet tall and can weigh as much as 100 pounds.\n\nInterestingly, because larger bodied penguins can hold their breath for longer, the colossus penguin probably could have stayed underwater for 40 minutes or more. It boggles the mind to imagine the kinds of huge, deep sea fish this mammoth bird might have been capable of hunting.\n\nThe fossil was found at the La Meseta formation on Seymour Island, an island in a chain of 16 major islands around the tip of the Graham Land on the Antarctic Peninsula.",
          "title": "Giant 6-Foot-8 Penguin Discovered in Antarctica",
          "url": "https://www.treehugger.com/giant-foot-penguin-discovered-in-antarctica-4864169"
        },
        {
          "id": "web-search_4:10",
          "snippet": "\n\nWhat is the Tallest Penguin?\n\nThe tallest penguin is the Emperor Penguin which is 110cm to the top of their head or 115cm to the tip of their beak.\n\nHow Tall Are Emperor Penguins in Feet?\n\nAn Emperor Penguin is about 3 feet and 7 inches to the top of its head. They are the largest penguin species currently in existence.\n\nHow Much Do Penguins Weigh in Pounds?\n\nPenguins weigh between 2.5lbs for the smallest species, the Little Penguin, up to 82lbs for the largest species, the Emperor Penguin.\n\nDr. Jackie Symmons is a professional ecologist with a Ph.D. in Ecology and Wildlife Management from Bangor University and over 25 years of experience delivering conservation projects.",
          "title": "How Big Are Penguins? [Height & Weight of Every Species] - Polar Guidebook",
          "url": "https://polarguidebook.com/how-big-are-penguins/"
        },
        {
          "id": "web-search_9:4",
          "snippet": "\n\nA fossil of an Anthropornis penguin found on the island may have been even taller, but this is likely to be an exception. The majority of these penguins were only 1.7 metres tall and weighed around 80 kilogrammes.\n\nWhile Palaeeudyptes klekowskii remains the tallest ever penguin, it is no longer the heaviest. At an estimated 150 kilogrammes, Kumimanu fordycei would have been around three times heavier than any living penguin.\n\nWhile it's uncertain how tall the species was, the height of a closely related species, Kumimanu biceae, has been estimated at 1.77 metres.\n\nThese measurements, however, are all open for debate. Many fossil penguins are only known from preserved arm and leg bones, rather than complete skeletons.",
          "title": "The largest ever penguin species has been discovered in New Zealand | Natural History Museum",
          "url": "https://www.nhm.ac.uk/discover/news/2023/february/largest-ever-penguin-species-discovered-new-zealand.html"
        }
      ],
      "searchResults": [
        {
          "searchQuery": {
            "text": "largest penguin species height",
            "generationId": "908fe321-5d27-48c4-bdb6-493be5687344"
          },
          "documentIds": [ "web-search_3:2", "web-search_4:10", "web-search_9:4" ],
          "connector": { "id": "web-search" }
        }
      ],
      "tool_inputs": null,
      "searchQueries": [
        {
          "text": "largest penguin species height",
          "generationId": "908fe321-5d27-48c4-bdb6-493be5687344"
        }
      ]
    }
  }
}
*/
```

#### API Reference:

* [ChatCohere](https://v02.api.js.langchain.com/classes/langchain_cohere.ChatCohere.html) from `@langchain/cohere`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

info

You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/9a6f996b-cff2-4f3f-916a-640469a5a963/r)

We can see in the `kwargs` object that the API request did a few things:

* Performed a search query, storing the result data in the `searchQueries` and `searchResults` fields. In the `searchQueries` field we see they rephrased our query to `largest penguin species height` for better results.
* Generated three documents from the search query.
* Generated a list of citations.
* Generated a final response based on the above actions & content.

* * *
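Each entry in the `citations` array carries `start`/`end` character offsets into the response `content`, which makes it straightforward to render inline attributions. The helper below is not part of `@langchain/cohere`; it is a minimal sketch that assumes the citations are sorted by offset and non-overlapping, as they are in the response above.

```typescript
// Hypothetical helper: turn Cohere-style citations into markdown-like
// annotations by slicing the response text at each citation's offsets.
interface Citation {
  start: number;
  end: number;
  text: string;
  documentIds: string[];
}

function annotateCitations(content: string, citations: Citation[]): string {
  let result = "";
  let cursor = 0;
  // Assumes citations are sorted and non-overlapping, as in the API response.
  for (const c of citations) {
    result += content.slice(cursor, c.start);
    result += `[${content.slice(c.start, c.end)}](${c.documentIds.join(",")})`;
    cursor = c.end;
  }
  return result + content.slice(cursor);
}

const demo = annotateCitations("The Emperor Penguin is the tallest.", [
  { start: 4, end: 19, text: "Emperor Penguin", documentIds: ["web-search_4:10"] },
]);
console.log(demo);
// → "The [Emperor Penguin](web-search_4:10) is the tallest."
```

Applied to the penguin response above, this would link each cited span back to the `documents` entries by id.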
[ Previous Cloudflare Workers AI ](/v0.2/docs/integrations/chat/cloudflare_workersai)[ Next Fake LLM ](/v0.2/docs/integrations/chat/fake) * [Setup](#setup) * [Usage](#usage) * [Streaming](#streaming) * [Stateful conversation API](#stateful-conversation-api) * [RAG](#rag) * [Connectors](#connectors) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.2/docs/integrations/chat/fake
Fake LLM
========

LangChain provides a fake LLM chat model for testing purposes. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.

Usage
-----

```typescript
import { FakeListChatModel } from "@langchain/core/utils/testing";
import { HumanMessage } from "@langchain/core/messages";
import { StringOutputParser } from "@langchain/core/output_parsers";

/**
 * The FakeListChatModel can be used to simulate ordered predefined responses.
 */
const chat = new FakeListChatModel({
  responses: ["I'll callback later.", "You 'console' them!"],
});
const firstMessage = new HumanMessage("You want to hear a JavaScript joke?");
const secondMessage = new HumanMessage(
  "How do you cheer up a JavaScript developer?"
);
const firstResponse = await chat.invoke([firstMessage]);
const secondResponse = await chat.invoke([secondMessage]);
console.log({ firstResponse });
console.log({ secondResponse });

/**
 * The FakeListChatModel can also be used to simulate streamed responses.
 */
const stream = await chat
  .pipe(new StringOutputParser())
  .stream(`You want to hear a JavaScript joke?`);
const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
}
console.log(chunks.join(""));

/**
 * The FakeListChatModel can also be used to simulate delays in either
 * synchronous or streamed responses.
 */
const slowChat = new FakeListChatModel({
  responses: ["Because Oct 31 equals Dec 25", "You 'console' them!"],
  sleep: 1000,
});
const thirdMessage = new HumanMessage(
  "Why do programmers always mix up Halloween and Christmas?"
);
const slowResponse = await slowChat.invoke([thirdMessage]);
console.log({ slowResponse });
const slowStream = await slowChat
  .pipe(new StringOutputParser())
  .stream("How do you cheer up a JavaScript developer?");
const slowChunks = [];
for await (const chunk of slowStream) {
  slowChunks.push(chunk);
}
console.log(slowChunks.join(""));
```

#### API Reference:

* [FakeListChatModel](https://v02.api.js.langchain.com/classes/langchain_core_utils_testing.FakeListChatModel.html) from `@langchain/core/utils/testing`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
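To make the ordered-responses idea above concrete, here is a dependency-free sketch of the behavior that `FakeListChatModel` simulates: canned responses returned in order, with an optional delay. This stand-in class is purely illustrative (it is not the real implementation); in actual tests, use `FakeListChatModel` from `@langchain/core/utils/testing`.

```typescript
// Illustrative stand-in for FakeListChatModel: returns its canned responses
// in order, cycling back to the start, with an optional per-call delay.
class MiniFakeChatModel {
  private i = 0;

  constructor(
    private responses: string[],
    private sleepMs = 0
  ) {}

  async invoke(_messages: unknown[]): Promise<string> {
    if (this.sleepMs > 0) {
      // Simulate model latency.
      await new Promise((resolve) => setTimeout(resolve, this.sleepMs));
    }
    const response = this.responses[this.i % this.responses.length];
    this.i += 1;
    return response;
  }
}

(async () => {
  const fake = new MiniFakeChatModel(["first", "second"]);
  console.log(await fake.invoke([])); // "first"
  console.log(await fake.invoke([])); // "second"
  console.log(await fake.invoke([])); // "first" again: responses cycle
})();
```

The input messages are ignored entirely, which is exactly what makes this style of fake useful for deterministic unit tests of chains and parsers.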
https://js.langchain.com/v0.2/docs/integrations/chat/fireworks
ChatFireworks
=============

You can use models provided by Fireworks AI as follows:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```

tip

We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  temperature: 0.9,
  // In Node.js defaults to process.env.FIREWORKS_API_KEY
  apiKey: "YOUR-API-KEY",
});
```

#### API Reference:

* [ChatFireworks](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_fireworks.ChatFireworks.html) from `@langchain/community/chat_models/fireworks`

Behind the scenes, Fireworks AI uses the OpenAI SDK and an OpenAI-compatible API, with some caveats:

* Certain properties are not supported by the Fireworks API, see [here](https://readme.fireworks.ai/docs/openai-compatibility#api-compatibility).
* Generation using multiple prompts is not supported.
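Because the endpoint is OpenAI-compatible, the request `ChatFireworks` ultimately sends is an ordinary chat-completions body. The sketch below only builds such a payload (no network call); the model id shown is illustrative, not an endorsement of a particular Fireworks model.

```typescript
// Sketch: the shape of an OpenAI-compatible chat-completions request body,
// which is what an OpenAI-compatible provider like Fireworks accepts.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatCompletionBody(
  model: string,
  messages: ChatMessage[],
  temperature = 0.9
) {
  return { model, messages, temperature };
}

const body = buildChatCompletionBody(
  "accounts/fireworks/models/llama-v3-8b-instruct", // illustrative model id
  [{ role: "user", content: "Hello!" }]
);
console.log(JSON.stringify(body, null, 2));
```

The caveats above mean some OpenAI fields (and multi-prompt generation) won't round-trip; consult the Fireworks compatibility docs linked above before relying on a given parameter.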
https://js.langchain.com/v0.2/docs/integrations/chat/google_generativeai
transformers](/v0.2/docs/integrations/document_transformers) * [Vector stores](/v0.2/docs/integrations/vectorstores) * [Retrievers](/v0.2/docs/integrations/retrievers) * [Tools](/v0.2/docs/integrations/tools) * [Toolkits](/v0.2/docs/integrations/toolkits) * [Stores](/v0.2/docs/integrations/stores/) * [](/v0.2/) * [Components](/v0.2/docs/integrations/components) * [Chat models](/v0.2/docs/integrations/chat/) * Google GenAI On this page ChatGoogleGenerativeAI ====================== You can access Google's `gemini` and `gemini-vision` models, as well as other generative models in LangChain through `ChatGoogleGenerativeAI` class in the `@langchain/google-genai` integration package. tip You can also access Google's `gemini` family of models via the LangChain VertexAI and VertexAI-web integrations. Click [here](/v0.2/docs/integrations/chat/google_vertex_ai) to read the docs. Get an API key here: [https://ai.google.dev/tutorials/setup](https://ai.google.dev/tutorials/setup) You'll first need to install the `@langchain/google-genai` package: tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/google-genai yarn add @langchain/google-genai pnpm add @langchain/google-genai Usage[​](#usage "Direct link to Usage") --------------------------------------- tip We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys. import { ChatGoogleGenerativeAI } from "@langchain/google-genai";import { HarmBlockThreshold, HarmCategory } from "@google/generative-ai";/* * Before running this, you should make sure you have created a * Google Cloud Project that has `generativelanguage` API enabled. 
* * You will also need to generate an API key and set * an environment variable GOOGLE_API_KEY * */// Textconst model = new ChatGoogleGenerativeAI({ model: "gemini-pro", maxOutputTokens: 2048, safetySettings: [ { category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE, }, ],});// Batch and stream are also supportedconst res = await model.invoke([ [ "human", "What would be a good company name for a company that makes colorful socks?", ],]);console.log(res);/* AIMessage { content: '1. Rainbow Soles\n' + '2. Toe-tally Colorful\n' + '3. Bright Sock Creations\n' + '4. Hue Knew Socks\n' + '5. The Happy Sock Factory\n' + '6. Color Pop Hosiery\n' + '7. Sock It to Me!\n' + '8. Mismatched Masterpieces\n' + '9. Threads of Joy\n' + '10. Funky Feet Emporium\n' + '11. Colorful Threads\n' + '12. Sole Mates\n' + '13. Colorful Soles\n' + '14. Sock Appeal\n' + '15. Happy Feet Unlimited\n' + '16. The Sock Stop\n' + '17. The Sock Drawer\n' + '18. Sole-diers\n' + '19. Footloose Footwear\n' + '20. Step into Color', name: 'model', additional_kwargs: {} }*/ #### API Reference: * [ChatGoogleGenerativeAI](https://v02.api.js.langchain.com/classes/langchain_google_genai.ChatGoogleGenerativeAI.html) from `@langchain/google-genai` Multimodal support[​](#multimodal-support "Direct link to Multimodal support") ------------------------------------------------------------------------------ To provide an image, pass a human message with a `content` field set to an array of content objects. Each content object where each dict contains either an image value (type of image\_url) or a text (type of text) value. 
The value of `image_url` must be a base64-encoded image (e.g., `data:image/png;base64,abcd124`):

```typescript
import fs from "fs";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HumanMessage } from "@langchain/core/messages";

// Multi-modal
const vision = new ChatGoogleGenerativeAI({
  model: "gemini-pro-vision",
  maxOutputTokens: 2048,
});
const image = fs.readFileSync("./hotdog.jpg").toString("base64");
const input2 = [
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "Describe the following image.",
      },
      {
        type: "image_url",
        image_url: `data:image/png;base64,${image}`,
      },
    ],
  }),
];
const res2 = await vision.invoke(input2);
console.log(res2);
/*
  AIMessage {
    content: ' The image shows a hot dog in a bun. The hot dog is grilled and has a dark brown color. The bun is toasted and has a light brown color. The hot dog is in the center of the bun.',
    name: 'model',
    additional_kwargs: {}
  }
*/

// Multi-modal streaming
const res3 = await vision.stream(input2);
for await (const chunk of res3) {
  console.log(chunk);
}
/*
  AIMessageChunk {
    content: ' The image shows a hot dog in a bun. The hot dog is grilled and has grill marks on it. The bun is toasted and has a light golden',
    name: 'model',
    additional_kwargs: {}
  }
  AIMessageChunk {
    content: ' brown color. The hot dog is in the center of the bun.',
    name: 'model',
    additional_kwargs: {}
  }
*/
```

#### API Reference:

* [ChatGoogleGenerativeAI](https://v02.api.js.langchain.com/classes/langchain_google_genai.ChatGoogleGenerativeAI.html) from `@langchain/google-genai`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

Gemini Prompting FAQs
---------------------

As of the time this doc was written (2023/12/12), Gemini has some restrictions on the types and structure of prompts it accepts. Specifically:

1. When providing multimodal (image) inputs, you are restricted to at most 1 message of "human" (user) type. You cannot pass multiple messages (though the single human message may have multiple content entries).
2. System messages are not natively supported, and will be merged with the first human message if present.
3. For regular chat conversations, messages must follow the human/ai/human/ai alternating pattern. You may not provide 2 AI or human messages in sequence.
4. Messages may be blocked if they violate the safety checks of the LLM. In this case, the model will return an empty response.

* * *

#### Was this page helpful?

You can leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).

Community: [Discord](https://discord.gg/cU2adEyC7w) · [Twitter](https://twitter.com/LangChainAI) · GitHub: [Python](https://github.com/langchain-ai/langchain) · [JS/TS](https://github.com/langchain-ai/langchainjs) · [Homepage](https://langchain.com) · [Blog](https://blog.langchain.dev)

Copyright © 2024 LangChain, Inc.
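The merging and alternation rules in the Gemini Prompting FAQs above can be made concrete with a small validator. The helper below is hypothetical (it is not part of `@langchain/google-genai`, which handles this coercion internally); it only mirrors rules 2 and 3:

```typescript
// Hypothetical sketch of the Gemini prompting rules described above.
// Not part of @langchain/google-genai; for illustration only.
type Role = "system" | "human" | "ai";
type Msg = { role: Role; content: string };

function coerceGeminiMessages(messages: Msg[]): Msg[] {
  const out: Msg[] = [];
  let pendingSystem: string | null = null;
  for (const msg of messages) {
    if (msg.role === "system") {
      // Rule 2: system messages are not natively supported;
      // a leading one is merged into the first human message.
      if (out.length > 0) throw new Error("System message must come first");
      pendingSystem = msg.content;
      continue;
    }
    if (out.length > 0 && out[out.length - 1].role === msg.role) {
      // Rule 3: human and AI messages must alternate.
      throw new Error(`Two consecutive "${msg.role}" messages`);
    }
    if (msg.role === "human" && pendingSystem !== null) {
      out.push({ role: "human", content: `${pendingSystem}\n${msg.content}` });
      pendingSystem = null;
    } else {
      out.push(msg);
    }
  }
  return out;
}

const merged = coerceGeminiMessages([
  { role: "system", content: "Answer tersely." },
  { role: "human", content: "Name a color." },
]);
console.log(merged); // a single human message with the system text prepended
```

Calling the real model with a system message behaves analogously: the system content is folded into the first human turn before the request is sent.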
https://js.langchain.com/v0.2/docs/integrations/chat/google_palm
ChatGooglePaLM
==============

> **note** This integration does not support `gemini-*` models. Check Google [GenAI](/v0.2/docs/integrations/chat/google_generativeai) or [VertexAI](/v0.2/docs/integrations/chat/google_vertex_ai).

The [Google PaLM API](https://developers.generativeai.google/products/palm) can be integrated by first installing the required packages:

> **tip** See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install google-auth-library @google-ai/generativelanguage @langchain/community
# or
yarn add google-auth-library @google-ai/generativelanguage @langchain/community
# or
pnpm add google-auth-library @google-ai/generativelanguage @langchain/community
```

> **tip** We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.

Create an **API key** from [Google MakerSuite](https://makersuite.google.com/app/apikey). You can then set the key as the `GOOGLE_PALM_API_KEY` environment variable or pass it as the `apiKey` parameter when instantiating the model.
```typescript
import { ChatGooglePaLM } from "@langchain/community/chat_models/googlepalm";
import {
  AIMessage,
  HumanMessage,
  SystemMessage,
} from "@langchain/core/messages";

export const run = async () => {
  const model = new ChatGooglePaLM({
    apiKey: "<YOUR API KEY>", // or set it in environment variable as `GOOGLE_PALM_API_KEY`
    temperature: 0.7, // OPTIONAL
    model: "models/chat-bison-001", // OPTIONAL
    topK: 40, // OPTIONAL
    topP: 1, // OPTIONAL
    examples: [
      // OPTIONAL
      {
        input: new HumanMessage("What is your favorite sock color?"),
        output: new AIMessage("My favorite sock color be arrrr-ange!"),
      },
    ],
  });

  // ask questions
  const questions = [
    new SystemMessage(
      "You are a funny assistant that answers in pirate language."
    ),
    new HumanMessage("What is your favorite food?"),
  ];

  // You can also use the model as part of a chain
  const res = await model.invoke(questions);
  console.log({ res });
};
```

#### API Reference:

* [ChatGooglePaLM](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_googlepalm.ChatGooglePaLM.html) from `@langchain/community/chat_models/googlepalm`
* [AIMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.AIMessage.html) from `@langchain/core/messages`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
* [SystemMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.SystemMessage.html) from `@langchain/core/messages`

ChatGoogleVertexAI (Legacy)
===========================

LangChain.js supports Google Vertex AI chat models as an integration. It supports two different methods of authentication based on whether you're running in a Node environment or a web environment.

Setup
-----

### Node

To call Vertex AI models in Node, you'll need to install [Google's official auth client](https://www.npmjs.com/package/google-auth-library) as a peer dependency.
You should make sure the Vertex AI API is enabled for the relevant project and that you've authenticated to Google Cloud using one of these methods:

* You are logged into an account (using `gcloud auth application-default login`) permitted to that project.
* You are running on a machine using a service account that is permitted to the project.
* You have downloaded the credentials for a service account that is permitted to the project and set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of this file.

> **tip** See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install google-auth-library @langchain/community
# or
yarn add google-auth-library @langchain/community
# or
pnpm add google-auth-library @langchain/community
```

### Web

To call Vertex AI models in web environments (like Edge functions), you'll need to install the [`web-auth-library`](https://github.com/kriasoft/web-auth-library) package as a peer dependency:

```bash
npm install web-auth-library
# or
yarn add web-auth-library
# or
pnpm add web-auth-library
```

Then, you'll need to add your service account credentials directly as a `GOOGLE_VERTEX_AI_WEB_CREDENTIALS` environment variable:

```
GOOGLE_VERTEX_AI_WEB_CREDENTIALS={"type":"service_account","project_id":"YOUR_PROJECT-12345",...}
```

You can also pass your credentials directly in code like this:

```typescript
import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai";

const model = new ChatGoogleVertexAI({
  authOptions: {
    credentials: {"type":"service_account","project_id":"YOUR_PROJECT-12345",...},
  },
});
```

Usage
-----

Several models are available and can be specified by the `model` attribute in the constructor.
These include:

* code-bison (default)
* code-bison-32k

The ChatGoogleVertexAI class works just like other chat-based LLMs, with a few exceptions:

1. The first `SystemMessage` passed in is mapped to the "context" parameter that the PaLM model expects. No other `SystemMessages` are allowed.
2. After the first `SystemMessage`, there must be an odd number of messages, representing a conversation between a human and the model.
3. Human messages must alternate with AI messages.

```typescript
import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai";
// Or, if using the web entrypoint:
// import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai/web";

const model = new ChatGoogleVertexAI({
  temperature: 0.7,
});
```

#### API Reference:

* [ChatGoogleVertexAI](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_googlevertexai.ChatGoogleVertexAI.html) from `@langchain/community/chat_models/googlevertexai`

### Streaming

ChatGoogleVertexAI also supports streaming in multiple chunks for faster responses:

```typescript
import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai";
// Or, if using the web entrypoint:
// import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai/web";

const model = new ChatGoogleVertexAI({
  temperature: 0.7,
});

const stream = await model.stream([
  ["system", "You are a funny assistant that answers in pirate language."],
  ["human", "What is your favorite food?"],
]);

for await (const chunk of stream) {
  console.log(chunk);
}

/*
AIMessageChunk {
  content: ' Ahoy there, matey! My favorite food be fish, cooked any way ye ',
  additional_kwargs: {}
}
AIMessageChunk {
  content: 'like!',
  additional_kwargs: {}
}
AIMessageChunk {
  content: '',
  name: undefined,
  additional_kwargs: {}
}
*/
```

#### API Reference:

* [ChatGoogleVertexAI](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_googlevertexai.ChatGoogleVertexAI.html) from `@langchain/community/chat_models/googlevertexai`

### Examples

There is also an optional `examples` constructor parameter that can help the model understand what an appropriate response looks like.

```typescript
import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai";
import {
  AIMessage,
  HumanMessage,
  SystemMessage,
} from "@langchain/core/messages";
// Or, if using the web entrypoint:
// import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai/web";

const examples = [
  {
    input: new HumanMessage("What is your favorite sock color?"),
    output: new AIMessage("My favorite sock color be arrrr-ange!"),
  },
];

const model = new ChatGoogleVertexAI({
  temperature: 0.7,
  examples,
});

const questions = [
  new SystemMessage(
    "You are a funny assistant that answers in pirate language."
  ),
  new HumanMessage("What is your favorite food?"),
];

// You can also use the model as part of a chain
const res = await model.invoke(questions);
console.log({ res });
```

#### API Reference:

* [ChatGoogleVertexAI](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_googlevertexai.ChatGoogleVertexAI.html) from `@langchain/community/chat_models/googlevertexai`
* [AIMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.AIMessage.html) from `@langchain/core/messages`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
* [SystemMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.SystemMessage.html) from `@langchain/core/messages`
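The three message-structure exceptions listed above can be sketched as a request builder. The helper below is hypothetical (the real mapping lives inside `@langchain/community`), and the `author` field names are illustrative assumptions:

```typescript
// Hypothetical sketch of how a LangChain message list maps onto a
// PaLM-style chat request ("context" plus alternating turns).
// Not the actual implementation; field names are illustrative.
type Role = "system" | "human" | "ai";
type Msg = { role: Role; content: string };
interface PalmStyleRequest {
  context?: string;
  messages: { author: "user" | "bot"; content: string }[];
}

function toPalmStyleRequest(msgs: Msg[]): PalmStyleRequest {
  const req: PalmStyleRequest = { messages: [] };
  let rest = msgs;
  if (rest[0]?.role === "system") {
    // Exception 1: the first SystemMessage becomes the "context" parameter.
    req.context = rest[0].content;
    rest = rest.slice(1);
  }
  if (rest.some((m) => m.role === "system")) {
    throw new Error("No other SystemMessages are allowed");
  }
  if (rest.length % 2 === 0) {
    // Exception 2: an odd number of messages, ending on a human turn.
    throw new Error("Expected an odd number of human/AI messages");
  }
  rest.forEach((m, i) => {
    // Exception 3: human messages must alternate with AI messages.
    const expected: Role = i % 2 === 0 ? "human" : "ai";
    if (m.role !== expected) {
      throw new Error(`Message ${i} should be "${expected}"`);
    }
    req.messages.push({
      author: m.role === "human" ? "user" : "bot",
      content: m.content,
    });
  });
  return req;
}
```

For example, a system message followed by one human message yields a request with `context` set and a single `user` turn, while two human messages in a row are rejected.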
https://js.langchain.com/v0.2/docs/integrations/chat/friendli
Friendli
========

> [Friendli](https://friendli.ai/) enhances AI application performance and optimizes cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads.

This tutorial guides you through integrating `ChatFriendli` for chat applications using LangChain. `ChatFriendli` offers a flexible approach to generating conversational AI responses, supporting both synchronous and asynchronous calls.

Setup
-----

Ensure the `@langchain/community` package is installed.

> **tip** See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```

Sign in to [Friendli Suite](https://suite.friendli.ai/) to create a Personal Access Token, and set it as the `FRIENDLI_TOKEN` environment variable. You can also set your team ID as the `FRIENDLI_TEAM` environment variable.

You can initialize a Friendli chat model by selecting the model you want to use. The default model is `llama-2-13b-chat`. You can check the available models at [docs.friendli.ai](https://docs.friendli.ai/guides/serverless_endpoints/pricing#text-generation-models).
Usage
-----

```typescript
import { ChatFriendli } from "@langchain/community/chat_models/friendli";

const model = new ChatFriendli({
  model: "llama-2-13b-chat", // Default value
  friendliToken: process.env.FRIENDLI_TOKEN,
  friendliTeam: process.env.FRIENDLI_TEAM,
  maxTokens: 800,
  temperature: 0.9,
  topP: 0.9,
  frequencyPenalty: 0,
  stop: [],
});

const response = await model.invoke(
  "Draft a cover letter for a role in software engineering."
);
console.log(response.content);
/*
Dear [Hiring Manager],

I am excited to apply for the role of Software Engineer at [Company Name]. With my passion for innovation, creativity, and problem-solving, I am confident that I would be a valuable asset to your team.

As a highly motivated and detail-oriented individual, ...
*/

const stream = await model.stream(
  "Draft a cover letter for a role in software engineering."
);
for await (const chunk of stream) {
  console.log(chunk.content);
}
/*
Dear [Hiring...
[Your Name]
*/
```

#### API Reference:

* [ChatFriendli](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_friendli.ChatFriendli.html) from `@langchain/community/chat_models/friendli`
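Since `.stream()` yields chunks whose `content` holds a slice of the reply, the full response is recovered by concatenation. A minimal sketch, using a mock async generator in place of a real `ChatFriendli` stream so that no API call is made:

```typescript
// Accumulate streamed chunk content into one string.
// mockStream stands in for `await model.stream(...)` above.
async function* mockStream(): AsyncGenerator<{ content: string }> {
  yield { content: "Dear [Hiring" };
  yield { content: " Manager]," };
}

async function collect(
  stream: AsyncIterable<{ content: string }>
): Promise<string> {
  let full = "";
  for await (const chunk of stream) {
    full += chunk.content; // each chunk carries a slice of the reply
  }
  return full;
}

const full = await collect(mockStream());
console.log(full); // "Dear [Hiring Manager],"
```

The same `collect` shape works for any LangChain chat model stream, since every chunk exposes a `content` field.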
https://js.langchain.com/v0.2/docs/integrations/chat/google_vertex_ai
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) You are viewing the **preview** v0.2 docs. View the **stable** v0.1 docs [here](/v0.1/docs/get_started/introduction/). Leave feedback on the v0.2 docs [here](https://github.com/langchain-ai/langchainjs/discussions/5386). [ ![🦜️🔗 Langchain](/v0.2/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.2/img/brand/wordmark-dark.png) ](/v0.2/)[Integrations](/v0.2/docs/integrations/platforms/)[API Reference](https://v02.api.js.langchain.com) [More](#) * [People](/v0.2/docs/people/) * [Community](/v0.2/docs/community) * [Tutorials](/v0.2/docs/additional_resources/tutorials) * [Contributing](/v0.2/docs/contributing) [v0.2](#) * [v0.2](/v0.2/docs/introduction) * [v0.1](https://js.langchain.com/v0.1/docs/get_started/introduction) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.2/docs/integrations/platforms/) * [Providers](/v0.2/docs/integrations/platforms/) * [Anthropic](/v0.2/docs/integrations/platforms/anthropic) * [AWS](/v0.2/docs/integrations/platforms/aws) * [Google](/v0.2/docs/integrations/platforms/google) * [Microsoft](/v0.2/docs/integrations/platforms/microsoft) * 
ChatVertexAI
============

LangChain.js supports Google Vertex AI chat models as an integration. It supports two different methods of authentication depending on whether you're running in a Node environment or a web environment.

Setup
-----

### Node

To call Vertex AI models in Node, you'll need to install the `@langchain/google-vertexai` package:

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/google-vertexai
# or
yarn add @langchain/google-vertexai
# or
pnpm add @langchain/google-vertexai
```

tip We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.

Make sure the Vertex AI API is enabled for the relevant project and that you've authenticated to Google Cloud using one of these methods:

* You are logged into an account (using `gcloud auth application-default login`) permitted to that project.
* You are running on a machine using a service account that is permitted to the project.
* You have downloaded the credentials for a service account that is permitted to the project and set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of this file.
### Web

To call Vertex AI models in web environments (like Edge functions), you'll need to install the `@langchain/google-vertexai-web` package:

```bash
npm install @langchain/google-vertexai-web
# or
yarn add @langchain/google-vertexai-web
# or
pnpm add @langchain/google-vertexai-web
```

Then, you'll need to add your service account credentials directly as a `GOOGLE_VERTEX_AI_WEB_CREDENTIALS` environment variable:

```bash
GOOGLE_VERTEX_AI_WEB_CREDENTIALS={"type":"service_account","project_id":"YOUR_PROJECT-12345",...}
```

Lastly, you may also pass your credentials directly in code like this:

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai-web";

const model = new ChatVertexAI({
  authOptions: {
    credentials: {"type":"service_account","project_id":"YOUR_PROJECT-12345",...},
  },
});
```

Usage
-----

The entire family of `gemini` models is available by specifying the `model` parameter. For example:

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";
// Or, if using the web entrypoint:
// import { ChatVertexAI } from "@langchain/google-vertexai-web";

const model = new ChatVertexAI({
  temperature: 0.7,
  model: "gemini-1.0-pro",
});

const response = await model.invoke("Why is the ocean blue?");
console.log(response);
/*
AIMessageChunk {
  content: [{ type: 'text', text: 'The ocean appears blue due to a phenomenon called Rayleigh scattering. This occurs when sunlight' }],
  additional_kwargs: {},
  response_metadata: {}
}
*/
```

#### API Reference:

* [ChatVertexAI](https://v02.api.js.langchain.com/classes/langchain_google_vertexai.ChatVertexAI.html) from `@langchain/google-vertexai`

tip See the LangSmith trace for the example above [here](https://smith.langchain.com/public/9fb579d8-4987-4302-beca-29a684ae2f4c/r).
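Since the `GOOGLE_VERTEX_AI_WEB_CREDENTIALS` variable shown in the Web setup above holds raw JSON, parsing and sanity-checking it before constructing the model can catch configuration mistakes early. A minimal sketch (the `parseWebCredentials` helper and its field checks are our own illustration, not a LangChain API):

```typescript
// Illustrative helper: parse service-account JSON (e.g. from
// process.env.GOOGLE_VERTEX_AI_WEB_CREDENTIALS) and check the fields a
// service account is generally expected to carry. Not a LangChain API.
type ServiceAccountLike = {
  type?: string;
  project_id?: string;
  private_key?: string;
  client_email?: string;
};

function parseWebCredentials(raw: string | undefined): ServiceAccountLike {
  if (!raw) {
    throw new Error("GOOGLE_VERTEX_AI_WEB_CREDENTIALS is not set");
  }
  const parsed = JSON.parse(raw) as ServiceAccountLike;
  if (parsed.type !== "service_account" || !parsed.project_id) {
    throw new Error("Credentials JSON does not look like a service account");
  }
  return parsed;
}

// Demo with inline JSON; in production this would come from the environment.
const creds = parseWebCredentials(
  '{"type":"service_account","project_id":"YOUR_PROJECT-12345"}'
);
```

The resulting object could then be handed to the constructor as `authOptions: { credentials: creds }`, mirroring the in-code example above.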
### Streaming

`ChatVertexAI` also supports streaming in multiple chunks for faster responses:

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";
// Or, if using the web entrypoint:
// import { ChatVertexAI } from "@langchain/google-vertexai-web";

const model = new ChatVertexAI({
  temperature: 0.7,
});

const stream = await model.stream([
  ["system", "You are a funny assistant that answers in pirate language."],
  ["human", "What is your favorite food?"],]);

for await (const chunk of stream) {
  console.log(chunk);
}
/*
AIMessageChunk {
  content: [{ type: 'text', text: 'Ahoy there, matey! Me favorite grub be fish and chips, with' }],
  additional_kwargs: {},
  response_metadata: { data: { candidates: [Array], promptFeedback: [Object] } }
}
AIMessageChunk {
  content: [{ type: 'text', text: " a hearty pint o' grog to wash it down. What be yer fancy, landlubber?" }],
  additional_kwargs: {},
  response_metadata: { data: { candidates: [Array] } }
}
AIMessageChunk {
  content: '',
  additional_kwargs: {},
  response_metadata: { finishReason: 'stop' }
}
*/
```

#### API Reference:

* [ChatVertexAI](https://v02.api.js.langchain.com/classes/langchain_google_vertexai.ChatVertexAI.html) from `@langchain/google-vertexai`

tip See the LangSmith trace for the example above [here](https://smith.langchain.com/public/ba4cb190-3f60-49aa-a6f8-7d31316d94cf/r).
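As the output above shows, the response arrives as a sequence of text chunks, and callers typically concatenate them back into one string. A minimal sketch of that aggregation, reusing the chunk texts from the example output (plain string handling, no model required):

```typescript
// Minimal sketch: aggregate streamed text chunks into one response string,
// mirroring how AIMessageChunk contents arrive piecewise in the example above.
const chunks = [
  "Ahoy there, matey! Me favorite grub be fish and chips, with",
  " a hearty pint o' grog to wash it down.",
];

function aggregate(parts: string[]): string {
  return parts.reduce((acc, part) => acc + part, "");
}

const fullText = aggregate(chunks);
```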
### Tool calling

`ChatVertexAI` also supports calling the model with a tool:

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";
import { type GeminiTool } from "@langchain/google-vertexai/types";
import { zodToGeminiParameters } from "@langchain/google-vertexai/utils";
import { z } from "zod";
// Or, if using the web entrypoint:
// import { ChatVertexAI } from "@langchain/google-vertexai-web";

const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute"),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

const geminiCalculatorTool: GeminiTool = {
  functionDeclarations: [
    {
      name: "calculator",
      description: "A simple calculator tool",
      parameters: zodToGeminiParameters(calculatorSchema),
    },
  ],
};

const model = new ChatVertexAI({
  temperature: 0.7,
  model: "gemini-1.0-pro",
}).bind({
  tools: [geminiCalculatorTool],
});

const response = await model.invoke("What is 1628253239 times 81623836?");
console.log(JSON.stringify(response.additional_kwargs, null, 2));
/*
{
  "tool_calls": [
    {
      "id": "calculator",
      "type": "function",
      "function": {
        "name": "calculator",
        "arguments": "{\"number2\":81623836,\"number1\":1628253239,\"operation\":\"multiply\"}"
      }
    }
  ]
}
*/
```

#### API Reference:

* [ChatVertexAI](https://v02.api.js.langchain.com/classes/langchain_google_vertexai.ChatVertexAI.html) from `@langchain/google-vertexai`
* [GeminiTool](https://v02.api.js.langchain.com/interfaces/langchain_google_common_types.GeminiTool.html) from `@langchain/google-vertexai/types`
* [zodToGeminiParameters](https://v02.api.js.langchain.com/functions/langchain_google_common.zodToGeminiParameters.html) from `@langchain/google-vertexai/utils`

tip See the LangSmith trace for the example above [here](https://smith.langchain.com/public/49e1c32c-395a-45e2-afba-913aa3389137/r).
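`zodToGeminiParameters(calculatorSchema)` converts the zod schema into a JSON-Schema-style parameters object for the function declaration. Hand-written here for illustration, the result is expected to look roughly like this (the exact output of `zodToGeminiParameters` may differ in detail; consult the API reference linked above):

```typescript
// Hand-written illustration of a JSON-Schema-style parameters object for
// the calculator schema above. This is our own sketch of the expected
// shape, not the literal output of zodToGeminiParameters.
const calculatorParameters = {
  type: "object",
  properties: {
    operation: {
      type: "string",
      enum: ["add", "subtract", "multiply", "divide"],
      description: "The type of operation to execute",
    },
    number1: { type: "number", description: "The first number to operate on." },
    number2: { type: "number", description: "The second number to operate on." },
  },
  required: ["operation", "number1", "number2"],
};
```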
### `withStructuredOutput`

Alternatively, you can use the `withStructuredOutput` method:

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";
import { z } from "zod";
// Or, if using the web entrypoint:
// import { ChatVertexAI } from "@langchain/google-vertexai-web";

const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute"),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

const model = new ChatVertexAI({
  temperature: 0.7,
  model: "gemini-1.0-pro",
}).withStructuredOutput(calculatorSchema);

const response = await model.invoke("What is 1628253239 times 81623836?");
console.log(response);
/*
{ operation: 'multiply', number1: 1628253239, number2: 81623836 }
*/
```

#### API Reference:

* [ChatVertexAI](https://v02.api.js.langchain.com/classes/langchain_google_vertexai.ChatVertexAI.html) from `@langchain/google-vertexai`

tip See the LangSmith trace for the example above [here](https://smith.langchain.com/public/41bbbddb-f357-4bfa-a111-def8294a4514/r).

### VertexAI tools agent

The Gemini family of models not only supports tool calling, but can also be used in the tool calling agent.
Here's an example:

```typescript
import { z } from "zod";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatVertexAI } from "@langchain/google-vertexai";
// Uncomment this if you're running inside a web/edge environment.
// import { ChatVertexAI } from "@langchain/google-vertexai-web";

const llm: any = new ChatVertexAI({
  temperature: 0,
});

// Prompt template must have "input" and "agent_scratchpad" input variables
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const currentWeatherTool = new DynamicStructuredTool({
  name: "get_current_weather",
  description: "Get the current weather in a given location",
  schema: z.object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
  }),
  func: async () => Promise.resolve("28 °C"),
});

const agent = await createToolCallingAgent({
  llm,
  tools: [currentWeatherTool],
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools: [currentWeatherTool],
});

const input = "What's the weather like in Paris?";
const { output } = await agentExecutor.invoke({ input });
console.log(output);
/*
It's 28 degrees Celsius in Paris.
*/
```

#### API Reference:

* [DynamicStructuredTool](https://v02.api.js.langchain.com/classes/langchain_core_tools.DynamicStructuredTool.html) from `@langchain/core/tools`
* [AgentExecutor](https://v02.api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createToolCallingAgent](https://v02.api.js.langchain.com/functions/langchain_agents.createToolCallingAgent.html) from `langchain/agents`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [ChatVertexAI](https://v02.api.js.langchain.com/classes/langchain_google_vertexai.ChatVertexAI.html) from `@langchain/google-vertexai`

tip See the LangSmith trace for the agent example above [here](https://smith.langchain.com/public/5615ee35-ba76-433b-8639-9b321cb6d4bf/r).

Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.2/docs/integrations/chat/llama_cpp
Llama CPP
=========

Compatibility: Only available on Node.js.

This module is based on the [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) Node.js bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp), allowing you to work with a locally running LLM. This lets you use a much smaller quantized model capable of running on a laptop, which is ideal for testing and sketching out ideas without running up a bill!

Setup
-----

You'll need to install the [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) module to communicate with your local model.

tip See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install -S node-llama-cpp @langchain/community
# or
yarn add node-llama-cpp @langchain/community
# or
pnpm add node-llama-cpp @langchain/community
```

You will also need a local Llama 2 model (or a model supported by [node-llama-cpp](https://github.com/withcatai/node-llama-cpp)). You will need to pass the path to this model to the LlamaCpp module as part of the parameters (see example).

Out of the box, `node-llama-cpp` is tuned for running on macOS with support for the Metal GPU in Apple M-series processors. If you need to turn this off, or need support for the CUDA architecture, refer to the documentation at [node-llama-cpp](https://withcatai.github.io/node-llama-cpp/).
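Because the model path is passed in explicitly, a quick pre-flight existence check can turn a confusing native load failure into a clear error. A minimal sketch (the `resolveModelPath` helper is our own illustration, not part of LangChain or node-llama-cpp; the path is the same placeholder used throughout this page):

```typescript
import { existsSync } from "node:fs";

// Illustrative pre-flight check before handing a path to ChatLlamaCpp.
// Not a LangChain API; the path below is a placeholder.
function resolveModelPath(path: string): string {
  if (!existsSync(path)) {
    throw new Error(`Model file not found: ${path}`);
  }
  return path;
}

let error = "";
try {
  resolveModelPath("/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin");
} catch (e) {
  error = (e as Error).message;
}
```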
For advice on getting and preparing `llama2`, see the documentation for the LLM version of this module.

A note to LangChain.js contributors: if you want to run the tests associated with this module, you will need to put the path to your local model in the environment variable `LLAMA_PATH`.

Usage
-----

### Basic use

In this case we pass in a prompt wrapped as a message and expect a response.

```typescript
import { ChatLlamaCpp } from "@langchain/community/chat_models/llama_cpp";
import { HumanMessage } from "@langchain/core/messages";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const model = new ChatLlamaCpp({ modelPath: llamaPath });

const response = await model.invoke([
  new HumanMessage({ content: "My name is John." }),
]);
console.log({ response });
/*
  AIMessage {
    lc_serializable: true,
    lc_kwargs: { content: 'Hello John.', additional_kwargs: {} },
    lc_namespace: [ 'langchain', 'schema' ],
    content: 'Hello John.',
    name: undefined,
    additional_kwargs: {}
  }
*/
```

#### API Reference:

* [ChatLlamaCpp](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_llama_cpp.ChatLlamaCpp.html) from `@langchain/community/chat_models/llama_cpp`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

### System messages

We can also provide a system message. Note that with the `llama_cpp` module, a system message will cause the creation of a new session.
```typescript
import { ChatLlamaCpp } from "@langchain/community/chat_models/llama_cpp";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const model = new ChatLlamaCpp({ modelPath: llamaPath });

const response = await model.invoke([
  new SystemMessage(
    "You are a pirate, responses must be very verbose and in pirate dialect, add 'Arr, m'hearty!' to each sentence."
  ),
  new HumanMessage("Tell me where Llamas come from?"),
]);
console.log({ response });
/*
  AIMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "Arr, m'hearty! Llamas come from the land of Peru.",
      additional_kwargs: {}
    },
    lc_namespace: [ 'langchain', 'schema' ],
    content: "Arr, m'hearty! Llamas come from the land of Peru.",
    name: undefined,
    additional_kwargs: {}
  }
*/
```

#### API Reference:

* [ChatLlamaCpp](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_llama_cpp.ChatLlamaCpp.html) from `@langchain/community/chat_models/llama_cpp`
* [SystemMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.SystemMessage.html) from `@langchain/core/messages`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

### Chains

This module can also be used with chains. Note that using more complex chains will require a suitably powerful version of `llama2`, such as the 70B version.
```typescript
import { ChatLlamaCpp } from "@langchain/community/chat_models/llama_cpp";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const model = new ChatLlamaCpp({ modelPath: llamaPath, temperature: 0.5 });

const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
const chain = new LLMChain({ llm: model, prompt });

const response = await chain.invoke({ product: "colorful socks" });
console.log({ response });
/*
  {
    text: `I'm not sure what you mean by "colorful socks" but here are some ideas:\n` +
      '\n' +
      '- Sock-it to me!\n' +
      '- Socks Away\n' +
      '- Fancy Footwear'
  }
*/
```

#### API Reference:

* [ChatLlamaCpp](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_llama_cpp.ChatLlamaCpp.html) from `@langchain/community/chat_models/llama_cpp`
* [LLMChain](https://v02.api.js.langchain.com/classes/langchain_chains.LLMChain.html) from `langchain/chains`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`

### Streaming

We can also stream with Llama CPP. This can be done using a raw "single prompt" string:

```typescript
import { ChatLlamaCpp } from "@langchain/community/chat_models/llama_cpp";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const model = new ChatLlamaCpp({ modelPath: llamaPath, temperature: 0.7 });

const stream = await model.stream("Tell me a short story about a happy Llama.");

for await (const chunk of stream) {
  console.log(chunk.content);
}
/*
  Once
  upon
  a
  time
  ,
  in
  a
  green
  and
  sunny
  field
  ...
*/
```

#### API Reference:

* [ChatLlamaCpp](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_llama_cpp.ChatLlamaCpp.html) from `@langchain/community/chat_models/llama_cpp`

Or you can provide multiple messages; note that this takes the input and then submits a Llama2-formatted prompt to the model.

```typescript
import { ChatLlamaCpp } from "@langchain/community/chat_models/llama_cpp";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const llamaCpp = new ChatLlamaCpp({ modelPath: llamaPath, temperature: 0.7 });

const stream = await llamaCpp.stream([
  new SystemMessage(
    "You are a pirate, responses must be very verbose and in pirate dialect."
  ),
  new HumanMessage("Tell me about Llamas?"),
]);

for await (const chunk of stream) {
  console.log(chunk.content);
}
/*
  Ar
  rr
  r
  ,
  me
  heart
  y
  !
  Ye
  be
  ask
  in
  '
  about
  llam
  as
  ,
  e
  h
  ?
  ...
*/
```

#### API Reference:

* [ChatLlamaCpp](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_llama_cpp.ChatLlamaCpp.html) from `@langchain/community/chat_models/llama_cpp`
* [SystemMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.SystemMessage.html) from `@langchain/core/messages`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

Using the `invoke` method, we can also achieve stream generation, and use `signal` to abort the generation.

```typescript
import { ChatLlamaCpp } from "@langchain/community/chat_models/llama_cpp";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin";

const model = new ChatLlamaCpp({ modelPath: llamaPath, temperature: 0.7 });

const controller = new AbortController();

setTimeout(() => {
  controller.abort();
  console.log("Aborted");
}, 5000);

await model.invoke(
  [
    new SystemMessage(
      "You are a pirate, responses must be very verbose and in pirate dialect."
    ),
    new HumanMessage("Tell me about Llamas?"),
  ],
  {
    signal: controller.signal,
    callbacks: [
      {
        handleLLMNewToken(token) {
          console.log(token);
        },
      },
    ],
  }
);
/*
  Once
  upon
  a
  time
  ,
  in
  a
  green
  and
  sunny
  field
  ...
  Aborted
  AbortError
*/
```

#### API Reference:

* [ChatLlamaCpp](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_llama_cpp.ChatLlamaCpp.html) from `@langchain/community/chat_models/llama_cpp`
* [SystemMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.SystemMessage.html) from `@langchain/core/messages`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
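The cancellation above is standard `AbortController` usage and can be exercised without a model. A minimal self-contained sketch of checking an `AbortSignal` inside a token loop (no LangChain involved; the `generateTokens` function is our own illustration):

```typescript
// Minimal sketch: stop producing tokens once the signal is aborted,
// mirroring how the `signal` option cancels generation in the example above.
function generateTokens(signal: AbortSignal, input: string[]): string[] {
  const tokens: string[] = [];
  for (const token of input) {
    if (signal.aborted) break; // stop as soon as the caller aborts
    tokens.push(token);
  }
  return tokens;
}

// Simulate the timeout having fired before generation started.
const controller = new AbortController();
controller.abort();
const aborted = generateTokens(controller.signal, ["Once", "upon", "a", "time"]);

// A fresh, un-aborted signal lets the full output through.
const fresh = new AbortController();
const full = generateTokens(fresh.signal, ["Once", "upon", "a", "time"]);
```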
https://js.langchain.com/v0.2/docs/integrations/chat/groq
ChatGroq
========

Setup
-----

In order to use the Groq API you'll need an API key. You can sign up for a Groq account and create an API key [here](https://wow.groq.com/).

You'll first need to install the [`@langchain/groq`](https://www.npmjs.com/package/@langchain/groq) package:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/groq
# or
yarn add @langchain/groq
# or
pnpm add @langchain/groq
```

tip

We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.

Usage
-----

```typescript
import { ChatGroq } from "@langchain/groq";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatGroq({
  apiKey: process.env.GROQ_API_KEY,
});
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const chain = prompt.pipe(model);
const response = await chain.invoke({
  input: "Hello",
});
console.log("response", response);
/**
response AIMessage {
  content: "Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or a question you have?",
}
 */
```

#### API Reference:

* [ChatGroq](https://v02.api.js.langchain.com/classes/langchain_groq.ChatGroq.html) from `@langchain/groq`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`

info

You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/2ba59207-1383-4e42-b6a6-c1ddcfcd5710/r)

Tool calling
------------

Groq chat models support calling multiple functions to get all required data to answer a question. Here's an example:

```typescript
import { ChatGroq } from "@langchain/groq";

// Mocked out function, could be a database/API call in production
function getCurrentWeather(location: string, _unit?: string) {
  if (location.toLowerCase().includes("tokyo")) {
    return JSON.stringify({ location, temperature: "10", unit: "celsius" });
  } else if (location.toLowerCase().includes("san francisco")) {
    return JSON.stringify({
      location,
      temperature: "72",
      unit: "fahrenheit",
    });
  } else {
    return JSON.stringify({ location, temperature: "22", unit: "celsius" });
  }
}

// Bind function to the model as a tool
const chat = new ChatGroq({
  model: "mixtral-8x7b-32768",
  maxTokens: 128,
}).bind({
  tools: [
    {
      type: "function",
      function: {
        name: "get_current_weather",
        description: "Get the current weather in a given location",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The city and state, e.g. San Francisco, CA",
            },
            unit: { type: "string", enum: ["celsius", "fahrenheit"] },
          },
          required: ["location"],
        },
      },
    },
  ],
  tool_choice: "auto",
});

const res = await chat.invoke([
  ["human", "What's the weather like in San Francisco?"],
]);
console.log(res.additional_kwargs.tool_calls);
/*
  [
    {
      id: 'call_01htk055jpftwbb9tvphyf9bnf',
      type: 'function',
      function: {
        name: 'get_current_weather',
        arguments: '{"location":"San Francisco, CA"}'
      }
    }
  ]
*/
```

#### API Reference:

* [ChatGroq](https://v02.api.js.langchain.com/classes/langchain_groq.ChatGroq.html) from `@langchain/groq`

### `.withStructuredOutput({ ... })`

info

The `.withStructuredOutput` method is in beta. It is actively being worked on, so the API may change.

You can also use the `.withStructuredOutput({ ... })` method to coerce `ChatGroq` into returning a structured output. The method allows for passing in either a Zod object, or a valid JSON schema (like what is returned from [`zodToJsonSchema`](https://www.npmjs.com/package/zod-to-json-schema)).

Using the method is simple. Just define your LLM and call `.withStructuredOutput({ ... })` on it, passing the desired schema. Here is an example using a Zod schema and the `functionCalling` mode (default mode):

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatGroq } from "@langchain/groq";
import { z } from "zod";

const model = new ChatGroq({
  temperature: 0,
  model: "mixtral-8x7b-32768",
});
const calculatorSchema = z.object({
  operation: z.enum(["add", "subtract", "multiply", "divide"]),
  number1: z.number(),
  number2: z.number(),
});
const modelWithStructuredOutput = model.withStructuredOutput(calculatorSchema);

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are VERY bad at math and must always use a calculator."],
  ["human", "Please help me!! What is 2 + 2?"],
]);
const chain = prompt.pipe(modelWithStructuredOutput);
const result = await chain.invoke({});
console.log(result);
/*
  { operation: 'add', number1: 2, number2: 2 }
*/

/**
 * You can also specify 'includeRaw' to return the parsed
 * and raw output in the result.
 */
const includeRawModel = model.withStructuredOutput(calculatorSchema, {
  name: "calculator",
  includeRaw: true,
});
const includeRawChain = prompt.pipe(includeRawModel);
const includeRawResult = await includeRawChain.invoke({});
console.log(includeRawResult);
/*
  {
    raw: AIMessage {
      content: '',
      additional_kwargs: {
        tool_calls: [
          {
            "id": "call_01htk094ktfgxtkwj40n0ehg61",
            "type": "function",
            "function": {
              "name": "calculator",
              "arguments": "{\"operation\": \"add\", \"number1\": 2, \"number2\": 2}"
            }
          }
        ]
      },
      response_metadata: {
        "tokenUsage": {
          "completionTokens": 197,
          "promptTokens": 1214,
          "totalTokens": 1411
        },
        "finish_reason": "tool_calls"
      }
    },
    parsed: { operation: 'add', number1: 2, number2: 2 }
  }
*/
```

#### API Reference:

* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [ChatGroq](https://v02.api.js.langchain.com/classes/langchain_groq.ChatGroq.html) from `@langchain/groq`

Streaming
---------

Groq's API also supports streaming token responses. The example below demonstrates how to use this feature.

```typescript
import { ChatGroq } from "@langchain/groq";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatGroq({
  apiKey: process.env.GROQ_API_KEY,
});
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const outputParser = new StringOutputParser();
const chain = prompt.pipe(model).pipe(outputParser);
const response = await chain.stream({
  input: "Hello",
});
let res = "";
for await (const item of response) {
  res += item;
  console.log("stream:", res);
}
/**
stream: Hello
stream: Hello!
stream: Hello! I
stream: Hello! I'
stream: Hello! I'm
stream: Hello! I'm happy
stream: Hello! I'm happy to
stream: Hello! I'm happy to assist
stream: Hello! I'm happy to assist you
stream: Hello! I'm happy to assist you in
stream: Hello! I'm happy to assist you in any
stream: Hello! I'm happy to assist you in any way
stream: Hello! I'm happy to assist you in any way I
stream: Hello! I'm happy to assist you in any way I can
stream: Hello! I'm happy to assist you in any way I can.
stream: Hello! I'm happy to assist you in any way I can. Is
stream: Hello! I'm happy to assist you in any way I can. Is there
stream: Hello! I'm happy to assist you in any way I can. Is there something
stream: Hello! I'm happy to assist you in any way I can. Is there something specific
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or a
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or a question
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or a question you
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or a question you have
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or a question you have?
 */
```

#### API Reference:

* [ChatGroq](https://v02.api.js.langchain.com/classes/langchain_groq.ChatGroq.html) from `@langchain/groq`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`

info

You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/72832eb5-b9ae-4ce0-baa2-c2e95eca61a7/r)

* * *

Copyright © 2024 LangChain, Inc.
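The tool-calling example earlier on this page shows the model's requested calls in `additional_kwargs.tool_calls`, but executing them is left to you. A minimal sketch of dispatching those calls to local implementations, not a `@langchain/groq` API — the `ToolCall` shape and the registry here are illustrative, with the weather function mocked like the one in the docs example:

```typescript
// Illustrative shape of one entry in `additional_kwargs.tool_calls`.
interface ToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
}

// Local registry mapping tool names to implementations (names are illustrative).
const toolRegistry: Record<string, (args: Record<string, string>) => string> = {
  get_current_weather: ({ location }) =>
    JSON.stringify({ location, temperature: "72", unit: "fahrenheit" }),
};

// Dispatch each tool call: parse the JSON `arguments` string, then invoke the match.
function runToolCalls(toolCalls: ToolCall[]): string[] {
  return toolCalls.map((call) => {
    const impl = toolRegistry[call.function.name];
    if (!impl) throw new Error(`Unknown tool: ${call.function.name}`);
    return impl(JSON.parse(call.function.arguments));
  });
}

const results = runToolCalls([
  {
    id: "call_01htk055jpftwbb9tvphyf9bnf",
    type: "function",
    function: {
      name: "get_current_weather",
      arguments: '{"location":"San Francisco, CA"}',
    },
  },
]);
console.log(results[0]);
// → {"location":"San Francisco, CA","temperature":"72","unit":"fahrenheit"}
```

In a real application you would append each tool result to the message history and invoke the model again so it can compose a final answer.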
https://js.langchain.com/v0.2/docs/integrations/chat/minimax
Minimax
=======

[Minimax](https://api.minimax.chat) is a Chinese startup that provides natural language processing models for companies and individuals. This example demonstrates using LangChain.js to interact with Minimax.

Setup
-----

To use Minimax models, you'll need a [Minimax account](https://api.minimax.chat), an [API key](https://api.minimax.chat/user-center/basic-information/interface-key), and a [Group ID](https://api.minimax.chat/user-center/basic-information).

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```

tip

We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.

Basic usage
-----------

```typescript
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import { HumanMessage } from "@langchain/core/messages";

// Use abab5.5
const abab5_5 = new ChatMinimax({
  model: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
});
const messages = [
  new HumanMessage({
    content: "Hello",
  }),
];

const res = await abab5_5.invoke(messages);
console.log(res);
/*
AIChatMessage {
  text: 'Hello! How may I assist you today?',
  name: undefined,
  additional_kwargs: {}
}
*/

// use abab5
const abab5 = new ChatMinimax({
  proVersion: false,
  model: "abab5-chat",
  minimaxGroupId: process.env.MINIMAX_GROUP_ID, // In Node.js defaults to process.env.MINIMAX_GROUP_ID
  minimaxApiKey: process.env.MINIMAX_API_KEY, // In Node.js defaults to process.env.MINIMAX_API_KEY
});
const result = await abab5.invoke([
  new HumanMessage({
    content: "Hello",
    name: "XiaoMing",
  }),
]);
console.log(result);
/*
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: 'Hello! Can I help you with anything?',
    additional_kwargs: { function_call: undefined }
  },
  lc_namespace: [ 'langchain', 'schema' ],
  content: 'Hello! Can I help you with anything?',
  name: undefined,
  additional_kwargs: { function_call: undefined }
}
*/
```

#### API Reference:

* [ChatMinimax](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_minimax.ChatMinimax.html) from `@langchain/community/chat_models/minimax`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

Chain model calls
-----------------

```typescript
import { LLMChain } from "langchain/chains";
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
} from "@langchain/core/prompts";

// We can also construct an LLMChain from a ChatPromptTemplate and a chat model.
const chat = new ChatMinimax({ temperature: 0.01 });

const chatPrompt = ChatPromptTemplate.fromMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "You are a helpful assistant that translates {input_language} to {output_language}."
  ),
  HumanMessagePromptTemplate.fromTemplate("{text}"),
]);
const chainB = new LLMChain({
  prompt: chatPrompt,
  llm: chat,
});

const resB = await chainB.invoke({
  input_language: "English",
  output_language: "Chinese",
  text: "I love programming.",
});
console.log({ resB });
```

#### API Reference:

* [LLMChain](https://v02.api.js.langchain.com/classes/langchain_chains.LLMChain.html) from `langchain/chains`
* [ChatMinimax](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_minimax.ChatMinimax.html) from `@langchain/community/chat_models/minimax`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [HumanMessagePromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.HumanMessagePromptTemplate.html) from `@langchain/core/prompts`
* [SystemMessagePromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.SystemMessagePromptTemplate.html) from `@langchain/core/prompts`

With function calls
-------------------

```typescript
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import { HumanMessage } from "@langchain/core/messages";

const functionSchema = {
  name: "get_weather",
  description: "Get weather information.",
  parameters: {
    type: "object",
    properties: {
      location: {
        type: "string",
        description: "The location to get the weather",
      },
    },
    required: ["location"],
  },
};

// Bind function arguments to the model.
// All subsequent invoke calls will use the bound parameters.
// "functions.parameters" must be formatted as JSON Schema
const model = new ChatMinimax({
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
}).bind({
  functions: [functionSchema],
});

const result = await model.invoke([
  new HumanMessage({
    content: "What is the weather like in NewYork tomorrow?",
    name: "I",
  }),
]);
console.log(result);
/*
AIMessage {
  lc_serializable: true,
  lc_kwargs: { content: '', additional_kwargs: { function_call: [Object] } },
  lc_namespace: [ 'langchain', 'schema' ],
  content: '',
  name: undefined,
  additional_kwargs: {
    function_call: { name: 'get_weather', arguments: '{"location": "NewYork"}' }
  }
}
*/

// Alternatively, you can pass function call arguments as an additional argument as a one-off:
const minimax = new ChatMinimax({
  model: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
});

const result2 = await minimax.invoke(
  [new HumanMessage("What is the weather like in NewYork tomorrow?")],
  {
    functions: [functionSchema],
  }
);
console.log(result2);
/*
AIMessage {
  lc_serializable: true,
  lc_kwargs: { content: '', additional_kwargs: { function_call: [Object] } },
  lc_namespace: [ 'langchain', 'schema' ],
  content: '',
  name: undefined,
  additional_kwargs: {
    function_call: { name: 'get_weather', arguments: '{"location": "NewYork"}' }
  }
}
*/
```

#### API Reference:

* [ChatMinimax](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_minimax.ChatMinimax.html) from `@langchain/community/chat_models/minimax`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

Functions with Zod
------------------

```typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import { HumanMessage } from "@langchain/core/messages";

const extractionFunctionZodSchema = z.object({
  location: z.string().describe("The location to get the weather"),
});

// Bind function arguments to the model.
// "functions.parameters" must be formatted as JSON Schema.
// We translate the above Zod schema into JSON schema using the "zodToJsonSchema" package.
const model = new ChatMinimax({
  model: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
}).bind({
  functions: [
    {
      name: "get_weather",
      description: "Get weather information.",
      parameters: zodToJsonSchema(extractionFunctionZodSchema),
    },
  ],
});

const result = await model.invoke([
  new HumanMessage({
    content: "What is the weather like in Shanghai tomorrow?",
    name: "XiaoMing",
  }),
]);
console.log(result);
/*
AIMessage {
  lc_serializable: true,
  lc_kwargs: { content: '', additional_kwargs: { function_call: [Object] } },
  lc_namespace: [ 'langchain', 'schema' ],
  content: '',
  name: undefined,
  additional_kwargs: {
    function_call: { name: 'get_weather', arguments: '{"location": "Shanghai"}' }
  }
}
*/
```

#### API Reference:

* [ChatMinimax](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_minimax.ChatMinimax.html) from `@langchain/community/chat_models/minimax`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

With glyph
----------

This feature can help users force the model to return content in the requested format.

```typescript
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
} from "@langchain/core/prompts";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatMinimax({
  model: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
}).bind({
  replyConstraints: {
    sender_type: "BOT",
    sender_name: "MM Assistant",
    glyph: {
      type: "raw",
      raw_glyph: "The translated text:{{gen 'content'}}",
    },
  },
});

const messagesTemplate = ChatPromptTemplate.fromMessages([
  HumanMessagePromptTemplate.fromTemplate(
    "Please help me translate the following sentence in English: {text}"
  ),
]);

const messages = await messagesTemplate.formatMessages({ text: "我是谁" });
const result = await model.invoke(messages);
console.log(result);
/*
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: 'The translated text: Who am I\x02',
    additional_kwargs: { function_call: undefined }
  },
  lc_namespace: [ 'langchain', 'schema' ],
  content: 'The translated text: Who am I\x02',
  name: undefined,
  additional_kwargs: { function_call: undefined }
}
*/

// use json_value
const modelMinimax = new ChatMinimax({
  model: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
}).bind({
  replyConstraints: {
    sender_type: "BOT",
    sender_name: "MM Assistant",
    glyph: {
      type: "json_value",
      json_properties: {
        name: {
          type: "string",
        },
        age: {
          type: "number",
        },
        is_student: {
          type: "boolean",
        },
        is_boy: {
          type: "boolean",
        },
        courses: {
          type: "object",
          properties: {
            name: {
              type: "string",
            },
            score: {
              type: "number",
            },
          },
        },
      },
    },
  },
});

const result2 = await modelMinimax.invoke([
  new HumanMessage({
    content:
      "My name is Yue Wushuang, 18 years old this year, just finished the test with 99.99 points.",
    name: "XiaoMing",
  }),
]);
console.log(result2);
/*
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: '{\n' +
      '    "name": "Yue Wushuang",\n' +
      '    "is_student": true,\n' +
      '    "is_boy": false,\n' +
      '    "courses": {\n' +
      '      "name": "Mathematics",\n' +
      '      "score": 99.99\n' +
      '    },\n' +
      '    "age": 18\n' +
      '  }',
    additional_kwargs: { function_call: undefined }
  }
}
*/
```

#### API Reference:

* [ChatMinimax](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_minimax.ChatMinimax.html) from `@langchain/community/chat_models/minimax`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [HumanMessagePromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.HumanMessagePromptTemplate.html) from `@langchain/core/prompts`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

With sample messages
--------------------

This feature can help the model better understand the return information the user wants to get, including but not limited to the content, format, and response mode of the information.

```typescript
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import { AIMessage, HumanMessage } from "@langchain/core/messages";

const model = new ChatMinimax({
  model: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
}).bind({
  sampleMessages: [
    new HumanMessage({
      content: "Turn A5 into red and modify the content to minimax.",
    }),
    new AIMessage({
      content: "select A5 color red change minimax",
    }),
  ],
});

const result = await model.invoke([
  new HumanMessage({
    content:
      'Please reply to my content according to the following requirements: According to the following interface list, give the order and parameters of calling the interface for the content I gave. You just need to give the order and parameters of calling the interface, and do not give any other output. The following is the available interface list: select: select specific table position, input parameter use letters and numbers to determine, for example "B13"; color: dye the selected table position, input parameters use the English name of the color, for example "red"; change: modify the selected table position, input parameters use strings.',
  }),
  new HumanMessage({
    content: "Process B6 to gray and modify the content to question.",
  }),
]);
console.log(result);
```

#### API Reference:

* [ChatMinimax](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_minimax.ChatMinimax.html) from `@langchain/community/chat_models/minimax`
* [AIMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.AIMessage.html) from `@langchain/core/messages`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

With plugins
------------

This feature supports calling tools like a search engine to get additional data that can assist the model.

```typescript
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatMinimax({
  model: "abab5.5-chat",
  botSetting: [
    {
      bot_name: "MM Assistant",
      content: "MM Assistant is an AI Assistant developed by minimax.",
    },
  ],
}).bind({
  plugins: ["plugin_web_search"],
});

const result = await model.invoke([
  new HumanMessage({
    content: "What is the weather like in NewYork tomorrow?",
  }),
]);
console.log(result);
/*
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: 'The weather in Shanghai tomorrow is expected to be hot. Please note that this is just a forecast and the actual weather conditions may vary.',
    additional_kwargs: { function_call: undefined }
  },
  lc_namespace: [ 'langchain', 'schema' ],
  content: 'The weather in Shanghai tomorrow is expected to be hot. Please note that this is just a forecast and the actual weather conditions may vary.',
  name: undefined,
  additional_kwargs: { function_call: undefined }
}
*/
```

#### API Reference:

* [ChatMinimax](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_minimax.ChatMinimax.html) from `@langchain/community/chat_models/minimax`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
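The `json_value` glyph above declares the property types the model should emit, but nothing in the example verifies the reply actually conforms. A minimal client-side sketch of such a check — the `matchesSpec` helper is hypothetical, not a Minimax or LangChain API, and the `PropSpec` type mirrors only the subset of `json_properties` used in the example:

```typescript
// Hypothetical helper: check a parsed `json_value` reply against the declared
// property types (subset of the glyph's `json_properties` shape).
type PropSpec = {
  type: "string" | "number" | "boolean" | "object";
  properties?: Record<string, PropSpec>;
};

function matchesSpec(value: unknown, spec: PropSpec): boolean {
  if (spec.type === "object") {
    if (typeof value !== "object" || value === null) return false;
    // Every declared property must itself match its sub-spec.
    return Object.entries(spec.properties ?? {}).every(([key, sub]) =>
      matchesSpec((value as Record<string, unknown>)[key], sub)
    );
  }
  return typeof value === spec.type;
}

// The reply content from the json_value example above, parsed from JSON.
const reply = {
  name: "Yue Wushuang",
  age: 18,
  is_student: true,
  is_boy: false,
  courses: { name: "Mathematics", score: 99.99 },
};

const spec: PropSpec = {
  type: "object",
  properties: {
    name: { type: "string" },
    age: { type: "number" },
    is_student: { type: "boolean" },
    is_boy: { type: "boolean" },
    courses: {
      type: "object",
      properties: { name: { type: "string" }, score: { type: "number" } },
    },
  },
};

console.log(matchesSpec(reply, spec)); // → true
```

A check like this is worth running before trusting constrained output downstream, since glyph constraints guide generation but are not a hard schema guarantee.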
https://js.langchain.com/v0.2/docs/integrations/chat/ollama
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) You are viewing the **preview** v0.2 docs. View the **stable** v0.1 docs [here](/v0.1/docs/get_started/introduction/). Leave feedback on the v0.2 docs [here](https://github.com/langchain-ai/langchainjs/discussions/5386). [ ![🦜️🔗 Langchain](/v0.2/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.2/img/brand/wordmark-dark.png) ](/v0.2/)[Integrations](/v0.2/docs/integrations/platforms/)[API Reference](https://v02.api.js.langchain.com) [More](#) * [People](/v0.2/docs/people/) * [Community](/v0.2/docs/community) * [Tutorials](/v0.2/docs/additional_resources/tutorials) * [Contributing](/v0.2/docs/contributing) [v0.2](#) * [v0.2](/v0.2/docs/introduction) * [v0.1](https://js.langchain.com/v0.1/docs/get_started/introduction) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.2/docs/integrations/platforms/) * [Providers](/v0.2/docs/integrations/platforms/) * [Anthropic](/v0.2/docs/integrations/platforms/anthropic) * [AWS](/v0.2/docs/integrations/platforms/aws) * [Google](/v0.2/docs/integrations/platforms/google) * [Microsoft](/v0.2/docs/integrations/platforms/microsoft) * 
ChatOllama
==========

[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2, locally. Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage.

This example goes over how to use LangChain to interact with an Ollama-run Llama 2 7b instance as a chat model. For a complete list of supported models and model variants, see the [Ollama model library](https://github.com/jmorganca/ollama#model-library).

Setup[​](#setup "Direct link to Setup")
---------------------------------------

Follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance.

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm: `npm install @langchain/community`
* Yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`

Usage[​](#usage "Direct link to Usage")
---------------------------------------

```typescript
import { ChatOllama } from "@langchain/community/chat_models/ollama";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOllama({
  baseUrl: "http://localhost:11434", // Default value
  model: "llama2", // Default value
});

const stream = await model
  .pipe(new StringOutputParser())
  .stream(`Translate "I love programming" into German.`);

const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
}
console.log(chunks.join(""));

/*
  Thank you for your question! I'm happy to help. However, I must point out that the phrase "I love programming" is not grammatically correct in German. The word "love" does not have a direct translation in German, and it would be more appropriate to say "I enjoy programming" or "I am passionate about programming."

  In German, you can express your enthusiasm for something like this:

  * Ich möchte Programmieren (I want to program)
  * Ich mag Programmieren (I like to program)
  * Ich bin passioniert über Programmieren (I am passionate about programming)

  I hope this helps! Let me know if you have any other questions.
*/
```

#### API Reference:

* [ChatOllama](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_ollama.ChatOllama.html) from `@langchain/community/chat_models/ollama`
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`

JSON mode[​](#json-mode "Direct link to JSON mode")
---------------------------------------------------

Ollama also supports a JSON mode that coerces model outputs to only return JSON.
Here's an example of how this can be useful for extraction:

```typescript
import { ChatOllama } from "@langchain/community/chat_models/ollama";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are an expert translator. Format all responses as JSON objects with two keys: "original" and "translated".`,
  ],
  ["human", `Translate "{input}" into {language}.`],
]);

const model = new ChatOllama({
  baseUrl: "http://localhost:11434", // Default value
  model: "llama2", // Default value
  format: "json",
});

const chain = prompt.pipe(model);

const result = await chain.invoke({
  input: "I love programming",
  language: "German",
});

console.log(result);

/*
  AIMessage {
    content: '{"original": "I love programming", "translated": "Ich liebe das Programmieren"}',
    additional_kwargs: {}
  }
*/
```

#### API Reference:

* [ChatOllama](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_ollama.ChatOllama.html) from `@langchain/community/chat_models/ollama`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`

You can see a simple LangSmith trace of this here: [https://smith.langchain.com/public/92aebeca-d701-4de0-a845-f55df04eff04/r](https://smith.langchain.com/public/92aebeca-d701-4de0-a845-f55df04eff04/r)

Multimodal models[​](#multimodal-models "Direct link to Multimodal models")
---------------------------------------------------------------------------

Ollama supports open source multimodal models like [LLaVA](https://ollama.ai/library/llava) in versions 0.1.15 and up.
You can pass images as part of a message's `content` field to multimodal-capable models like this:

```typescript
import { ChatOllama } from "@langchain/community/chat_models/ollama";
import { HumanMessage } from "@langchain/core/messages";
import * as fs from "node:fs/promises";

const imageData = await fs.readFile("./hotdog.jpg");

const chat = new ChatOllama({
  model: "llava",
  baseUrl: "http://127.0.0.1:11434",
});

const res = await chat.invoke([
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "What is in this image?",
      },
      {
        type: "image_url",
        image_url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
      },
    ],
  }),
]);

console.log(res);

/*
  AIMessage {
    content: ' The image shows a hot dog with ketchup on it, placed on top of a bun. It appears to be a close-up view, possibly taken in a kitchen setting or at an outdoor event.',
    name: undefined,
    additional_kwargs: {}
  }
*/
```

#### API Reference:

* [ChatOllama](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_ollama.ChatOllama.html) from `@langchain/community/chat_models/ollama`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

This will currently not use the image's position within the prompt message as additional information, and will just pass the image along as context with the rest of the prompt messages.
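One practical note on the JSON mode shown earlier: even with `format: "json"`, it is worth validating the model's output before passing it downstream, since local models can occasionally emit malformed JSON. A minimal defensive-parsing sketch (`safeParseJson` is our own helper, not part of LangChain):

```typescript
// Defensive JSON parsing for model output: return the parsed value on
// success, or null if the string is not valid JSON.
function safeParseJson<T>(raw: string): T | null {
  try {
    return JSON.parse(raw.trim()) as T;
  } catch {
    return null;
  }
}

type Translation = { original: string; translated: string };

// A well-formed response parses cleanly...
const ok = safeParseJson<Translation>(
  '{"original": "I love programming", "translated": "Ich liebe das Programmieren"}'
);
// ...while stray text around or inside the JSON is caught instead of throwing.
const bad = safeParseJson<Translation>("Sorry, here is your answer: {broken");

console.log(ok?.translated); // Ich liebe das Programmieren
console.log(bad); // null
```

A `null` result can then trigger a retry or a fallback prompt rather than crashing the chain.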
* * *

Community

* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)

GitHub

* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)

More

* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)

Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.2/docs/integrations/chat/ni_bittensor
NIBittensorChatModel
====================

danger

This module has been deprecated and is no longer supported. The documentation below will not work in versions 0.2.0 or later.

LangChain.js offers experimental support for Neural Internet's Bittensor chat models.

Here's an example:

```typescript
import { NIBittensorChatModel } from "langchain/experimental/chat_models/bittensor";
import { HumanMessage } from "@langchain/core/messages";

const chat = new NIBittensorChatModel();
const message = new HumanMessage("What is bittensor?");
const res = await chat.invoke([message]);
console.log({ res });

/*
  { res: "\nBittensor is opensource protocol..." }
*/
```
https://js.langchain.com/v0.2/docs/integrations/chat/ollama_functions
Ollama Functions
================

LangChain offers an experimental wrapper around open source models run locally via [Ollama](https://github.com/jmorganca/ollama) that gives it the same API as OpenAI Functions.

Note that more powerful and capable models will perform better with complex schema and/or multiple functions. The examples below use [Mistral](https://ollama.ai/library/mistral).

Setup[​](#setup "Direct link to Setup")
---------------------------------------

Follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance.
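The function definitions used throughout this page follow the OpenAI function-calling format: a name, a description, and a JSON Schema object describing the parameters. As a plain data shape (the `FunctionDefinition` interface here is our own illustration, not a library type):

```typescript
// Our own illustrative type for the OpenAI function-calling format --
// not exported by LangChain, just a sketch of the expected shape.
interface FunctionDefinition {
  name: string;
  description: string;
  parameters: {
    type: "object";
    properties: Record<string, unknown>;
    required?: string[];
  };
}

// The weather tool used in the examples below, as a standalone object.
const getCurrentWeather: FunctionDefinition = {
  name: "get_current_weather",
  description: "Get the current weather in a given location",
  parameters: {
    type: "object",
    properties: {
      location: {
        type: "string",
        description: "The city and state, e.g. San Francisco, CA",
      },
      unit: { type: "string", enum: ["celsius", "fahrenheit"] },
    },
    required: ["location"],
  },
};

console.log(getCurrentWeather.parameters.required); // [ 'location' ]
```

The model decides which parameters to fill based on the `description` strings, so keeping them specific tends to improve results.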
Initialize model[​](#initialize-model "Direct link to Initialize model")
------------------------------------------------------------------------

You can initialize this wrapper the same way you'd initialize a standard `ChatOllama` instance:

```typescript
import { OllamaFunctions } from "@langchain/community/experimental/chat_models/ollama_functions";

const model = new OllamaFunctions({
  temperature: 0.1,
  model: "mistral",
});
```

Passing in functions[​](#passing-in-functions "Direct link to Passing in functions")
------------------------------------------------------------------------------------

You can now pass in functions the same way as OpenAI:

```typescript
import { OllamaFunctions } from "@langchain/community/experimental/chat_models/ollama_functions";
import { HumanMessage } from "@langchain/core/messages";

const model = new OllamaFunctions({
  temperature: 0.1,
  model: "mistral",
}).bind({
  functions: [
    {
      name: "get_current_weather",
      description: "Get the current weather in a given location",
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "The city and state, e.g. San Francisco, CA",
          },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
    },
  ],
  // You can set the `function_call` arg to force the model to use a function
  function_call: {
    name: "get_current_weather",
  },
});

const response = await model.invoke([
  new HumanMessage({
    content: "What's the weather in Boston?",
  }),
]);

console.log(response);

/*
  AIMessage {
    content: '',
    additional_kwargs: {
      function_call: {
        name: 'get_current_weather',
        arguments: '{"location":"Boston, MA","unit":"fahrenheit"}'
      }
    }
  }
*/
```

#### API Reference:

* [OllamaFunctions](https://v02.api.js.langchain.com/classes/langchain_community_experimental_chat_models_ollama_functions.OllamaFunctions.html) from `@langchain/community/experimental/chat_models/ollama_functions`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

Using for extraction[​](#using-for-extraction "Direct link to Using for extraction")
------------------------------------------------------------------------------------

```typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { OllamaFunctions } from "@langchain/community/experimental/chat_models/ollama_functions";
import { PromptTemplate } from "@langchain/core/prompts";
import { JsonOutputFunctionsParser } from "@langchain/core/output_parsers/openai_functions";

const EXTRACTION_TEMPLATE = `Extract and save the relevant entities mentioned in the following passage together with their properties.

Passage:
{input}`;

const prompt = PromptTemplate.fromTemplate(EXTRACTION_TEMPLATE);

// Use Zod for easier schema declaration
const schema = z.object({
  people: z.array(
    z.object({
      name: z.string().describe("The name of a person"),
      height: z.number().describe("The person's height"),
      hairColor: z.optional(z.string()).describe("The person's hair color"),
    })
  ),
});

const model = new OllamaFunctions({
  temperature: 0.1,
  model: "mistral",
}).bind({
  functions: [
    {
      name: "information_extraction",
      description: "Extracts the relevant information from the passage.",
      parameters: {
        type: "object",
        properties: zodToJsonSchema(schema),
      },
    },
  ],
  function_call: {
    name: "information_extraction",
  },
});

// Use a JsonOutputFunctionsParser to get the parsed JSON response directly.
const chain = await prompt.pipe(model).pipe(new JsonOutputFunctionsParser());

const response = await chain.invoke({
  input:
    "Alex is 5 feet tall. Claudia is 1 foot taller than Alex and jumps higher than him. Claudia has orange hair and Alex is blonde.",
});

console.log(response);

/*
  {
    people: [
      { name: 'Alex', height: 5, hairColor: 'blonde' },
      { name: 'Claudia', height: 6, hairColor: 'orange' }
    ]
  }
*/
```

#### API Reference:

* [OllamaFunctions](https://v02.api.js.langchain.com/classes/langchain_community_experimental_chat_models_ollama_functions.OllamaFunctions.html) from `@langchain/community/experimental/chat_models/ollama_functions`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [JsonOutputFunctionsParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers_openai_functions.JsonOutputFunctionsParser.html) from `@langchain/core/output_parsers/openai_functions`

You can see a LangSmith trace of what this looks like here: [https://smith.langchain.com/public/31457ea4-71ca-4e29-a1e0-aa80e6828883/r](https://smith.langchain.com/public/31457ea4-71ca-4e29-a1e0-aa80e6828883/r)

Customization[​](#customization "Direct link to Customization")
---------------------------------------------------------------

Behind the scenes, this uses Ollama's JSON mode to constrain output to JSON, then passes tool schemas as JSON schema into the prompt. Because different models have different strengths, it may be helpful to pass in your own system prompt.
Here's an example:

```typescript
import { OllamaFunctions } from "@langchain/community/experimental/chat_models/ollama_functions";
import { HumanMessage } from "@langchain/core/messages";

// Custom system prompt to format tools. You must encourage the model
// to wrap output in a JSON object with "tool" and "tool_input" properties.
const toolSystemPromptTemplate = `You have access to the following tools:

{tools}

To use a tool, respond with a JSON object with the following structure:
{{
  "tool": <name of the called tool>,
  "tool_input": <parameters for the tool matching the above JSON schema>
}}`;

const model = new OllamaFunctions({
  temperature: 0.1,
  model: "mistral",
  toolSystemPromptTemplate,
}).bind({
  functions: [
    {
      name: "get_current_weather",
      description: "Get the current weather in a given location",
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "The city and state, e.g. San Francisco, CA",
          },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
    },
  ],
  // You can set the `function_call` arg to force the model to use a function
  function_call: {
    name: "get_current_weather",
  },
});

const response = await model.invoke([
  new HumanMessage({
    content: "What's the weather in Boston?",
  }),
]);

console.log(response);

/*
  AIMessage {
    content: '',
    additional_kwargs: {
      function_call: {
        name: 'get_current_weather',
        arguments: '{"location":"Boston, MA","unit":"fahrenheit"}'
      }
    }
  }
*/
```

#### API Reference:

* [OllamaFunctions](https://v02.api.js.langchain.com/classes/langchain_community_experimental_chat_models_ollama_functions.OllamaFunctions.html) from `@langchain/community/experimental/chat_models/ollama_functions`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
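To make the customization above concrete: the wrapper substitutes the serialized tool schemas into the `{tools}` slot of the system prompt template. A rough, hypothetical sketch of that templating step (this mirrors the idea, not the library's internal code):

```typescript
// Hypothetical sketch: render a tool system prompt by substituting
// JSON-serialized tool schemas into a `{tools}` placeholder.
const promptTemplate = `You have access to the following tools:

{tools}

To use a tool, respond with a JSON object with "tool" and "tool_input" keys.`;

const tools = [
  {
    name: "get_current_weather",
    description: "Get the current weather in a given location",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string", description: "The city and state" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] },
      },
      required: ["location"],
    },
  },
];

// Pretty-print the schemas so the model sees readable JSON in its prompt.
const systemPrompt = promptTemplate.replace(
  "{tools}",
  JSON.stringify(tools, null, 2)
);

console.log(systemPrompt.includes("get_current_weather")); // true
```

Seeing the rendered prompt this way can help when debugging why a particular model ignores or misuses a tool.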
https://js.langchain.com/v0.2/docs/integrations/chat/premai
ChatPrem
========

Setup[​](#setup "Direct link to Setup")
---------------------------------------

1. Create a Prem AI account and get your API key [here](https://app.premai.io/accounts/signup/).
2. Export or set your API key inline. The `ChatPrem` class defaults to `process.env.PREM_API_KEY`.

```bash
export PREM_API_KEY=your-api-key
```

You can use models provided by Prem AI as follows:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm install @langchain/community`
* Yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`

```typescript
import { ChatPrem } from "@langchain/community/chat_models/premai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatPrem({
  // In Node.js defaults to process.env.PREM_API_KEY
  apiKey: "YOUR-API-KEY",
  // In Node.js defaults to process.env.PREM_PROJECT_ID
  project_id: "YOUR-PROJECT_ID",
});

console.log(await model.invoke([new HumanMessage("Hello there!")]));
```

#### API Reference:

* [ChatPrem](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_premai.ChatPrem.html) from `@langchain/community/chat_models/premai`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
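The `apiKey` option above falls back to `process.env.PREM_API_KEY` when omitted; this explicit-option-or-environment-variable pattern is common across LangChain integrations. A minimal sketch of the pattern (`resolveApiKey` is a hypothetical helper, not the actual `ChatPrem` implementation):

```typescript
// Hypothetical helper: resolve a credential from an explicit option,
// falling back to an environment variable, and fail fast if neither is set.
function resolveApiKey(explicit?: string, envVar = "PREM_API_KEY"): string {
  const key = explicit ?? process.env[envVar];
  if (!key) {
    throw new Error(`Missing API key: pass \`apiKey\` or set ${envVar}`);
  }
  return key;
}

process.env.PREM_API_KEY = "key-from-env";
console.log(resolveApiKey("explicit-key")); // explicit-key
console.log(resolveApiKey()); // key-from-env
```

Failing fast with a clear message here is preferable to letting a missing key surface later as an opaque HTTP 401 from the provider.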
#### You can leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E). [ Previous OpenAI ](/v0.2/docs/integrations/chat/openai)[ Next PromptLayer OpenAI ](/v0.2/docs/integrations/chat/prompt_layer_openai) * [Setup](#setup) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.2/docs/integrations/chat/prompt_layer_openai
PromptLayerChatOpenAI
=====================

You can pass in the optional `returnPromptLayerId` boolean to get a `promptLayerRequestId` like below. Here is an example of getting the PromptLayerChatOpenAI request ID:

import { PromptLayerChatOpenAI } from "langchain/chat_models/openai";
import { SystemMessage } from "@langchain/core/messages";

const chat = new PromptLayerChatOpenAI({
  returnPromptLayerId: true,
});

const respA = await chat.generate([
  [
    new SystemMessage(
      "You are a helpful assistant that translates English to French."
    ),
  ],
]);

console.log(JSON.stringify(respA, null, 3));

/*
  {
    "generations": [
      [
        {
          "text": "Bonjour! Je suis un assistant utile qui peut vous aider à traduire de l'anglais vers le français. Que puis-je faire pour vous aujourd'hui?",
          "message": {
            "type": "ai",
            "data": {
              "content": "Bonjour! Je suis un assistant utile qui peut vous aider à traduire de l'anglais vers le français. Que puis-je faire pour vous aujourd'hui?"
            }
          },
          "generationInfo": {
            "promptLayerRequestId": 2300682
          }
        }
      ]
    ],
    "llmOutput": {
      "tokenUsage": {
        "completionTokens": 35,
        "promptTokens": 19,
        "totalTokens": 54
      }
    }
  }
*/

* * *

#### Was this page helpful?

#### You can leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
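When `returnPromptLayerId` is enabled, each generation carries its request ID under `generationInfo`, so collecting the IDs is a small mapping over the `generate()` result. A sketch against the response shape shown above (`getRequestIds` is an illustrative helper, not a library export):

```typescript
interface GenerationLike {
  text: string;
  generationInfo?: { promptLayerRequestId?: number };
}

// Pull every promptLayerRequestId out of a generate() result.
// generations is a nested array (one inner array per input prompt).
function getRequestIds(result: { generations: GenerationLike[][] }): number[] {
  return result.generations
    .flat()
    .map((gen) => gen.generationInfo?.promptLayerRequestId)
    .filter((id): id is number => typeof id === "number");
}

// With the example response above:
const respA = {
  generations: [
    [{ text: "Bonjour!", generationInfo: { promptLayerRequestId: 2300682 } }],
  ],
};
console.log(getRequestIds(respA)); // [ 2300682 ]
```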
[ Previous PremAI ](/v0.2/docs/integrations/chat/premai)[ Next TogetherAI ](/v0.2/docs/integrations/chat/togetherai)
https://js.langchain.com/v0.2/docs/integrations/chat/web_llm
WebLLM
======

Compatibility

Only available in web environments.

You can run LLMs directly in your web browser using LangChain's [WebLLM](https://webllm.mlc.ai) integration.

Setup[​](#setup "Direct link to Setup")
---------------------------------------

You'll need to install the [WebLLM SDK](https://www.npmjs.com/package/@mlc-ai/web-llm) module to communicate with your local model.

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm
* Yarn
* pnpm

npm install -S @mlc-ai/web-llm @langchain/community

yarn add @mlc-ai/web-llm @langchain/community

pnpm add @mlc-ai/web-llm @langchain/community

Usage[​](#usage "Direct link to Usage")
---------------------------------------

Note that the first time a model is called, WebLLM will download the full weights for that model. This can be multiple gigabytes, and may not be possible for all end users of your application depending on their internet connection and computer specs. While the browser will cache subsequent invocations of that model, we recommend using the smallest model you can.

We also recommend using a [separate web worker](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers) when invoking and loading your models so as not to block execution.

// Must be run in a web environment, e.g. a web worker
import { ChatWebLLM } from "@langchain/community/chat_models/webllm";
import { HumanMessage } from "@langchain/core/messages";

// Initialize the ChatWebLLM model with the model record and chat options.
// Note that if the appConfig field is set, the list of model records
// must include the selected model record for the engine.
// You can import a list of models available by default here:
// https://github.com/mlc-ai/web-llm/blob/main/src/config.ts
//
// Or by importing it via:
// import { prebuiltAppConfig } from "@mlc-ai/web-llm";
const model = new ChatWebLLM({
  model: "Phi2-q4f32_1",
  chatOptions: {
    temperature: 0.5,
  },
});

// Call the model with a message and await the response.
const response = await model.invoke([
  new HumanMessage({ content: "What is 1 + 1?" }),
]);

console.log(response);

/*
AIMessage {
  content: ' 2\n',
}
*/

#### API Reference:

* [ChatWebLLM](https://v02.api.js.langchain.com/classes/langchain_community_chat_models_webllm.ChatWebLLM.html) from `@langchain/community/chat_models/webllm`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`

Streaming is also supported.

Example[​](#example "Direct link to Example")
---------------------------------------------

For a full end-to-end example, check out [this project](https://github.com/jacoblee93/fully-local-pdf-chatbot).

* * *

#### Was this page helpful?

#### You can leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
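The web-worker recommendation above can be sketched as a simple message protocol: the worker loads `ChatWebLLM`, streams chunks back to the page, and the page folds them into the final text. The worker and page wiring is illustrative and shown as comments, since it only runs in a browser; only the pure message-folding helper is concrete:

```typescript
// worker.ts — runs inside the web worker so model download and
// inference never block the page (illustrative, browser-only):
//
//   import { ChatWebLLM } from "@langchain/community/chat_models/webllm";
//   const model = new ChatWebLLM({ model: "Phi2-q4f32_1" });
//   self.onmessage = async (e: MessageEvent<{ prompt: string }>) => {
//     for await (const chunk of await model.stream(e.data.prompt)) {
//       self.postMessage({ type: "chunk", text: String(chunk.content) });
//     }
//     self.postMessage({ type: "done" });
//   };
//
// main.ts — page side (illustrative):
//
//   const worker = new Worker(new URL("./worker.ts", import.meta.url), { type: "module" });
//   const msgs: WorkerMsg[] = [];
//   worker.onmessage = (e) => {
//     msgs.push(e.data);
//     if (e.data.type === "done") console.log(foldChunks(msgs));
//   };
//   worker.postMessage({ prompt: "What is 1 + 1?" });

type WorkerMsg = { type: "chunk"; text: string } | { type: "done" };

// Fold streamed worker messages into the final response text.
function foldChunks(msgs: WorkerMsg[]): string {
  return msgs
    .filter((m): m is { type: "chunk"; text: string } => m.type === "chunk")
    .map((m) => m.text)
    .join("");
}
```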
[ Previous TogetherAI ](/v0.2/docs/integrations/chat/togetherai)[ Next YandexGPT ](/v0.2/docs/integrations/chat/yandex)