Vector Stores: Integrations

📄️ Memory: MemoryVectorStore is an in-memory, ephemeral vectorstore that stores embeddings in-memory and does an exact, linear search for the most similar embeddings. The default similarity metric is cosine similarity, but it can be changed to any of the similarity metrics supported by ml-distance.

📄️ AnalyticDB: AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service designed to analyze large volumes of data online.

📄️ Chroma: Chroma is an AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0.

📄️ Elasticsearch: Only available on Node.js.

📄️ Faiss: Only available on Node.js.

📄️ HNSWLib: Only available on Node.js.

📄️ LanceDB: LanceDB is an embedded vector database for AI applications. It is open source and distributed under an Apache-2.0 license.

📄️ Milvus: Milvus is a vector database built for embedding similarity search and AI applications.

📄️ MongoDB Atlas: Only available on Node.js.

📄️ MyScale: Only available on Node.js.

📄️ OpenSearch: Only available on Node.js.

📄️ Pinecone: Only available on Node.js.

📄️ Prisma: To augment existing models in a PostgreSQL database with vector search, LangChain supports using Prisma together with PostgreSQL and the pgvector Postgres extension.

📄️ Qdrant: Qdrant is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points (vectors with an additional payload).

📄️ Redis: Redis is a fast, open source, in-memory data store.

📄️ SingleStore: SingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premises. It provides vector storage, as well as vector functions like dot_product and euclidean_distance, thereby supporting AI applications that require text similarity matching.

📄️ Supabase: LangChain supports using a Supabase Postgres database as a vector store, using the pgvector Postgres extension. Refer to the Supabase blog post for more information.

📄️ Tigris: Tigris makes it easy to build AI applications with vector embeddings.

📄️ TypeORM: To enable vector search in a generic PostgreSQL database, LangChain.js supports using TypeORM with the pgvector Postgres extension.

📄️ Typesense: A vector store that uses the Typesense search engine.

📄️ USearch: Only available on Node.js.

📄️ Vectara: Vectara is a platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy.

📄️ Weaviate: Weaviate is an open source vector database that stores both objects and vectors, allowing vector search to be combined with structured filtering. LangChain connects to Weaviate via the weaviate-ts-client package, the official TypeScript client for Weaviate.

📄️ Xata: Xata is a serverless data platform based on PostgreSQL. It provides a type-safe TypeScript/JavaScript SDK for interacting with your database, and a UI for managing your data.

📄️ Zep: Zep is an open source long-term memory store for LLM applications. Zep makes it easy to add relevant documents…
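All of these integrations implement LangChain.js's shared VectorStore interface, so code written against one backend generally ports to another; only the construction and configuration step differs. As a rough sketch (not from the original pages, and assuming an OPENAI_API_KEY is set in the environment), the in-memory store can stand in for any backend listed here:

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// Construction is backend-specific; everything after this line is the
// shared VectorStore interface.
const vectorStore = await MemoryVectorStore.fromTexts(
  ["LangChain supports many vector stores", "Vector stores index embeddings"],
  [{ id: 1 }, { id: 2 }],
  new OpenAIEmbeddings()
);

// similaritySearch(query, k) is common to all of the integrations above.
const results = await vectorStore.similaritySearch("supported stores?", 1);
console.log(results);
```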
Page Title: MemoryVectorStore | 🦜️🔗 Langchain
MemoryVectorStore

MemoryVectorStore is an in-memory, ephemeral vectorstore that stores embeddings in-memory and does an exact, linear search for the most similar embeddings. The default similarity metric is cosine similarity, but it can be changed to any of the similarity metrics supported by ml-distance.

Usage

Create a new index from texts:

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const vectorStore = await MemoryVectorStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
/*
  [ Document { pageContent: "Hello world", metadata: { id: 2 } } ]
*/
```

API Reference: MemoryVectorStore from langchain/vectorstores/memory, OpenAIEmbeddings from langchain/embeddings/openai

Create a new index from a loader:

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
/*
  [ Document { pageContent: "Hello world", metadata: { id: 2 } } ]
*/
```

API Reference: MemoryVectorStore from langchain/vectorstores/memory, OpenAIEmbeddings from langchain/embeddings/openai, TextLoader from langchain/document_loaders/fs/text

Use a custom similarity metric:

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { similarity } from "ml-distance";

const vectorStore = await MemoryVectorStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings(),
  { similarity: similarity.pearson }
);

const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
```

API Reference: MemoryVectorStore from langchain/vectorstores/memory, OpenAIEmbeddings from langchain/embeddings/openai
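Beyond direct searches, a MemoryVectorStore can be wrapped as a retriever for use in chains, since it implements the shared VectorStore interface. This is not shown on the original page; the following is a minimal sketch, assuming an OPENAI_API_KEY is set in the environment, and the k value of 2 is illustrative:

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const vectorStore = await MemoryVectorStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// Expose the store as a retriever; k controls how many documents are returned.
const retriever = vectorStore.asRetriever(2);

// Retrievers return Documents rather than raw similarity scores.
const docs = await retriever.getRelevantDocuments("hello world");
console.log(docs.map((d) => d.pageContent));
```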
Page Title: AnalyticDB | 🦜️🔗 Langchain
AnalyticDB

AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.

AnalyticDB for PostgreSQL is developed based on the open source Greenplum Database project and is enhanced with in-depth extensions by Alibaba Cloud. It is compatible with the ANSI SQL 2003 syntax and with the PostgreSQL and Oracle database ecosystems, and it supports both row store and column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a high performance level and supports highly concurrent online queries.

This notebook shows how to use functionality related to the AnalyticDB vector database. To run it, you should have an AnalyticDB instance up and running (see Using AnalyticDB Cloud Vector Database).

Compatibility

Only available on Node.js.

Setup

LangChain.js uses node-postgres as the connection pool for the AnalyticDB vectorstore:

```bash
# npm
npm install -S pg
# Yarn
yarn add pg
# pnpm
pnpm add pg
```

We also need pg-copy-streams to add batches of vectors quickly:

```bash
# npm
npm install -S pg-copy-streams
# Yarn
yarn add pg-copy-streams
# pnpm
pnpm add pg-copy-streams
```

Usage

```typescript
import { AnalyticDBVectorStore } from "langchain/vectorstores/analyticdb";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const connectionOptions = {
  host: process.env.ANALYTICDB_HOST || "localhost",
  port: Number(process.env.ANALYTICDB_PORT) || 5432,
  database: process.env.ANALYTICDB_DATABASE || "your_database",
  user: process.env.ANALYTICDB_USERNAME || "username",
  password: process.env.ANALYTICDB_PASSWORD || "password",
};

const vectorStore = await AnalyticDBVectorStore.fromTexts(
  ["foo", "bar", "baz"],
  [{ page: 1 }, { page: 2 }, { page: 3 }],
  new OpenAIEmbeddings(),
  { connectionOptions }
);

const result = await vectorStore.similaritySearch("foo", 1);
console.log(JSON.stringify(result));
// [{"pageContent":"foo","metadata":{"page":1}}]

await vectorStore.addDocuments([{ pageContent: "foo", metadata: { page: 4 } }]);

const filterResult = await vectorStore.similaritySearch("foo", 1, {
  page: 4,
});
console.log(JSON.stringify(filterResult));
// [{"pageContent":"foo","metadata":{"page":4}}]

const filterWithScoreResult = await vectorStore.similaritySearchWithScore(
  "foo",
  1,
  { page: 3 }
);
console.log(JSON.stringify(filterWithScoreResult));
// [[{"pageContent":"baz","metadata":{"page":3}},0.26075905561447144]]

const filterNoMatchResult = await vectorStore.similaritySearchWithScore(
  "foo",
  1,
  { page: 5 }
);
console.log(JSON.stringify(filterNoMatchResult));
// []

// need to manually close the connection pool
await vectorStore.end();
```

API Reference: AnalyticDBVectorStore from langchain/vectorstores/analyticdb, OpenAIEmbeddings from langchain/embeddings/openai
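The store can also be built directly from loaded documents rather than raw texts. A minimal sketch, not from the original page, assuming AnalyticDBVectorStore.fromDocuments follows the same (docs, embeddings, config) shape as fromTexts above, and that the file path is hypothetical:

```typescript
import { AnalyticDBVectorStore } from "langchain/vectorstores/analyticdb";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Hypothetical example file; substitute any text file you have on disk.
const loader = new TextLoader("src/documents/example.txt");
const docs = await loader.load();

const vectorStore = await AnalyticDBVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings(),
  {
    connectionOptions: {
      host: process.env.ANALYTICDB_HOST || "localhost",
      port: Number(process.env.ANALYTICDB_PORT) || 5432,
      database: process.env.ANALYTICDB_DATABASE || "your_database",
      user: process.env.ANALYTICDB_USERNAME || "username",
      password: process.env.ANALYTICDB_PASSWORD || "password",
    },
  }
);

const results = await vectorStore.similaritySearch("example query", 2);
console.log(results);

// As above, the pool is not closed automatically.
await vectorStore.end();
```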
Page Title: Chroma | 🦜️🔗 Langchain
Chroma

Chroma is an AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0.

Website | Documentation | Twitter | Discord

Setup

Run Chroma with Docker on your computer:

```bash
git clone git@github.com:chroma-core/chroma.git
docker-compose up -d --build
```

Install the Chroma JS SDK:

```bash
# npm
npm install -S chromadb
# Yarn
yarn add chromadb
# pnpm
pnpm add chromadb
```

Chroma is fully-typed, fully-tested and fully-documented. Like any other database, you can .add, .get, .update, .upsert, .delete, and .peek; .query runs the similarity search. View the full docs at docs.

Usage, Index and query Documents

```typescript
import { Chroma } from "langchain/vectorstores/chroma";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Create vector store and index the docs
const vectorStore = await Chroma.fromDocuments(docs, new OpenAIEmbeddings(), {
  collectionName: "a-test-collection",
  url: "http://localhost:8000", // Optional, will default to this value
});

// Search for the most similar document
const response = await vectorStore.similaritySearch("hello", 1);
console.log(response);
/*
[
  Document {
    pageContent: 'Foo\nBar\nBaz\n\n',
    metadata: { source: 'src/document_loaders/example_data/example.txt' }
  }
]
*/
```

API Reference: Chroma from langchain/vectorstores/chroma, OpenAIEmbeddings from langchain/embeddings/openai, TextLoader from langchain/document_loaders/fs/text

Usage, Index and query texts

```typescript
import { Chroma } from "langchain/vectorstores/chroma";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// text sample from Godel, Escher, Bach
const vectorStore = await Chroma.fromTexts(
  [
    `Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious Little
 Harmonic Labyrinth of the dreaded Majotaur?`,
    "Achilles: Yiikes! What is that? ",
    `Tortoise: They say-although I person never believed it myself-that an I
 Majotaur has created a tiny labyrinth sits in a pit in the middle of it,
 waiting innocent victims to get lost in its fears complexity. Then, when
 they wander and dazed into the center, he laughs and laughs at them-so
 hard, that he laughs them to death!`,
    "Achilles: Oh, no! ",
    "Tortoise: But it's only a myth. Courage, Achilles. ",
  ],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings(),
  {
    collectionName: "godel-escher-bach",
  }
);

const response = await vectorStore.similaritySearch("scared", 2);
console.log(response);
/*
[
  Document { pageContent: 'Achilles: Oh, no! ', metadata: {} },
  Document { pageContent: 'Achilles: Yiikes! What is that? ', metadata: { id: 1 } }
]
*/

// You can also filter by metadata
const filteredResponse = await vectorStore.similaritySearch("scared", 2, {
  id: 1,
});
console.log(filteredResponse);
/*
[
  Document { pageContent: 'Achilles: Yiikes! What is that? ', metadata: { id: 1 } }
]
*/
```

API Reference: Chroma from langchain/vectorstores/chroma, OpenAIEmbeddings from langchain/embeddings/openai

Usage, Query docs from existing collection

```typescript
import { Chroma } from "langchain/vectorstores/chroma";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const vectorStore = await Chroma.fromExistingCollection(
  new OpenAIEmbeddings(),
  { collectionName: "godel-escher-bach" }
);

const response = await vectorStore.similaritySearch("scared", 2);
console.log(response);
/*
[
  Document { pageContent: 'Achilles: Oh, no! ', metadata: {} },
  Document { pageContent: 'Achilles: Yiikes! What is that? ', metadata: { id: 1 } }
]
*/
```

API Reference: Chroma from langchain/vectorstores/chroma, OpenAIEmbeddings from langchain/embeddings/openai

Usage, delete docs

```typescript
import { Chroma } from "langchain/vectorstores/chroma";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const embeddings = new OpenAIEmbeddings();
const vectorStore = new Chroma(embeddings, {
  collectionName: "test-deletion",
});

const documents = [
  {
    pageContent: `Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious Little
 Harmonic Labyrinth of the dreaded Majotaur?`,
    metadata: {
      speaker: "Tortoise",
    },
  },
  {
    pageContent: "Achilles: Yiikes! What is that? ",
    metadata: {
      speaker: "Achilles",
    },
  },
  {
    pageContent: `Tortoise: They say-although I person never believed it myself-that an I
 Majotaur has created a tiny labyrinth sits in a pit in the middle of it,
 waiting innocent victims to get lost in its fears complexity. Then, when
 they wander and dazed into the center, he laughs and laughs at them-so
 hard, that he laughs them to death!`,
    metadata: {
      speaker: "Tortoise",
    },
  },
  {
    pageContent: "Achilles: Oh, no! ",
    metadata: {
      speaker: "Achilles",
    },
  },
  {
    pageContent: "Tortoise: But it's only a myth. Courage, Achilles. ",
    metadata: {
      speaker: "Tortoise",
    },
  },
];

// Also supports an additional {ids: []} parameter for upsertion
const ids = await vectorStore.addDocuments(documents);

const response = await vectorStore.similaritySearch("scared", 2);
console.log(response);
/*
[
  Document { pageContent: 'Achilles: Oh, no! ', metadata: {} },
  Document { pageContent: 'Achilles: Yiikes! What is that? ', metadata: { id: 1 } }
]
*/

// You can also pass a "filter" parameter instead
await vectorStore.delete({ ids });

const response2 = await vectorStore.similaritySearch("scared", 2);
console.log(response2);
/*
  []
*/
```

API Reference: Chroma from langchain/vectorstores/chroma, OpenAIEmbeddings from langchain/embeddings/openai
Get startedIntroductionInstallationQuickstartModulesModel I/OData connectionDocument loadersDocument transformersText embedding modelsVector storesIntegrationsMemoryAnalyticDBChromaElasticsearchFaissHNSWLibLanceDBMilvusMongoDB AtlasMyScaleOpenSearchPineconePrismaQdrantRedisSingleStoreSupabaseTigrisTypeORMTypesenseUSearchVectaraWeaviateXataZepRetrieversExperimentalCaching embeddingsChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesData connectionVector storesIntegrationsChromaOn this pageChromaChroma is a AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0. |
4e9727215e95-1337 | WebsiteDocumentationTwitterDiscordSetupRun Chroma with Docker on your computergit clone git@github.com:chroma-core/chroma.gitdocker-compose up -d --buildInstall the Chroma JS SDK.npmYarnpnpmnpm install -S chromadbyarn add chromadbpnpm add chromadbChroma is fully-typed, fully-tested and fully-documented.Like any other database, you can:.add.get.update.upsert.delete.peekand .query runs the similarity search.View full docs at docs.Usage, Index and query Documentsimport { Chroma } from "langchain/vectorstores/chroma";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { TextLoader } from "langchain/document_loaders/fs/text";// Create docs with a loaderconst loader = new TextLoader("src/document_loaders/example_data/example.txt");const docs = await loader.load();// Create vector store and index the docsconst vectorStore |
4e9727215e95-1338 | = await Chroma.fromDocuments(docs, new OpenAIEmbeddings(), { collectionName: "a-test-collection", url: "http://localhost:8000", // Optional, will default to this value});// Search for the most similar documentconst response = await vectorStore.similaritySearch("hello", 1);console.log(response);/*[ Document { pageContent: 'Foo\nBar\nBaz\n\n', metadata: { source: 'src/document_loaders/example_data/example.txt' } }]*/API Reference:Chroma from langchain/vectorstores/chromaOpenAIEmbeddings from langchain/embeddings/openaiTextLoader from langchain/document_loaders/fs/textUsage, Index and query textsimport { Chroma } from "langchain/vectorstores/chroma";import { OpenAIEmbeddings } from "langchain/embeddings/openai";// text sample from Godel, Escher, Bachconst vectorStore = await Chroma.fromTexts( [ `Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious Little Harmonic Labyrinth of the dreaded Majotaur?`, "Achilles: Yiikes! What is that? ", `Tortoise: They say-although I person never believed it myself-that an I Majotaur has created a tiny labyrinth sits in a pit in the middle of it, waiting innocent victims to get lost in its fears complexity. Then, when they wander and dazed into the center, he laughs and laughs at them-so hard, that he laughs them to death!`, "Achilles: Oh, no! ", "Tortoise: But it's only a myth. Courage, Achilles. |
4e9727215e95-1339 | ", ], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings(), { collectionName: "godel-escher-bach", });const response = await vectorStore.similaritySearch("scared", 2);console.log(response);/*[ Document { pageContent: 'Achilles: Oh, no! ', metadata: {} }, Document { pageContent: 'Achilles: Yiikes! What is that? ', metadata: { id: 1 } }]*/// You can also filter by metadataconst filteredResponse = await vectorStore.similaritySearch("scared", 2, { id: 1,});console.log(filteredResponse);/*[ Document { pageContent: 'Achilles: Yiikes! What is that? ', metadata: { id: 1 } }]*/API Reference:Chroma from langchain/vectorstores/chromaOpenAIEmbeddings from langchain/embeddings/openaiUsage, Query docs from existing collectionimport { Chroma } from "langchain/vectorstores/chroma";import { OpenAIEmbeddings } from "langchain/embeddings/openai";const vectorStore = await Chroma.fromExistingCollection( new OpenAIEmbeddings(), { collectionName: "godel-escher-bach" });const response = await vectorStore.similaritySearch("scared", 2);console.log(response);/*[ Document { pageContent: 'Achilles: Oh, no! ', metadata: {} }, Document { pageContent: 'Achilles: Yiikes! What is that? |
4e9727215e95-1340 | ', metadata: { id: 1 } }]*/API Reference:Chroma from langchain/vectorstores/chromaOpenAIEmbeddings from langchain/embeddings/openaiUsage, delete docsimport { Chroma } from "langchain/vectorstores/chroma";import { OpenAIEmbeddings } from "langchain/embeddings/openai";const embeddings = new OpenAIEmbeddings();const vectorStore = new Chroma(embeddings, { collectionName: "test-deletion",});const documents = [ { pageContent: `Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious Little Harmonic Labyrinth of the dreaded Majotaur?`, metadata: { speaker: "Tortoise", }, }, { pageContent: "Achilles: Yiikes! What is that? ", metadata: { speaker: "Achilles", }, }, { pageContent: `Tortoise: They say-although I person never believed it myself-that an I Majotaur has created a tiny labyrinth sits in a pit in the middle of it, waiting innocent victims to get lost in its fears complexity. Then, when they wander and dazed into the center, he laughs and laughs at them-so hard, that he laughs them to death!`, metadata: { speaker: "Tortoise", }, }, { pageContent: "Achilles: Oh, no! ", metadata: { speaker: "Achilles", }, }, { pageContent: "Tortoise: But it's only a myth. Courage, Achilles. |
4e9727215e95-1341 | ", metadata: { speaker: "Tortoise", }, },];// Also supports an additional {ids: []} parameter for upsertionconst ids = await vectorStore.addDocuments(documents);const response = await vectorStore.similaritySearch("scared", 2);console.log(response);/*[ Document { pageContent: 'Achilles: Oh, no! ', metadata: {} }, Document { pageContent: 'Achilles: Yiikes! What is that? ', metadata: { id: 1 } }]*/// You can also pass a "filter" parameter insteadawait vectorStore.delete({ ids });const response2 = await vectorStore.similaritySearch("scared", 2);console.log(response2);/* []*/API Reference:Chroma from langchain/vectorstores/chromaOpenAIEmbeddings from langchain/embeddings/openaiPreviousAnalyticDBNextElasticsearchSetupUsage, Index and query DocumentsUsage, Index and query textsUsage, Query docs from existing collectionUsage, delete docs
ModulesData connectionVector storesIntegrationsChromaOn this pageChromaChroma is a AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0. |
4e9727215e95-1342 | WebsiteDocumentationTwitterDiscordSetupRun Chroma with Docker on your computergit clone git@github.com:chroma-core/chroma.gitdocker-compose up -d --buildInstall the Chroma JS SDK.npmYarnpnpmnpm install -S chromadbyarn add chromadbpnpm add chromadbChroma is fully-typed, fully-tested and fully-documented.Like any other database, you can:.add.get.update.upsert.delete.peekand .query runs the similarity search.View full docs at docs.Usage, Index and query Documentsimport { Chroma } from "langchain/vectorstores/chroma";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { TextLoader } from "langchain/document_loaders/fs/text";// Create docs with a loaderconst loader = new TextLoader("src/document_loaders/example_data/example.txt");const docs = await loader.load();// Create vector store and index the docsconst vectorStore |
4e9727215e95-1343 | = await Chroma.fromDocuments(docs, new OpenAIEmbeddings(), { collectionName: "a-test-collection", url: "http://localhost:8000", // Optional, will default to this value});// Search for the most similar documentconst response = await vectorStore.similaritySearch("hello", 1);console.log(response);/*[ Document { pageContent: 'Foo\nBar\nBaz\n\n', metadata: { source: 'src/document_loaders/example_data/example.txt' } }]*/API Reference:Chroma from langchain/vectorstores/chromaOpenAIEmbeddings from langchain/embeddings/openaiTextLoader from langchain/document_loaders/fs/textUsage, Index and query textsimport { Chroma } from "langchain/vectorstores/chroma";import { OpenAIEmbeddings } from "langchain/embeddings/openai";// text sample from Godel, Escher, Bachconst vectorStore = await Chroma.fromTexts( [ `Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious Little Harmonic Labyrinth of the dreaded Majotaur?`, "Achilles: Yiikes! What is that? ", `Tortoise: They say-although I person never believed it myself-that an I Majotaur has created a tiny labyrinth sits in a pit in the middle of it, waiting innocent victims to get lost in its fears complexity. Then, when they wander and dazed into the center, he laughs and laughs at them-so hard, that he laughs them to death!`, "Achilles: Oh, no! ", "Tortoise: But it's only a myth. Courage, Achilles. |
4e9727215e95-1344 | ", ], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings(), { collectionName: "godel-escher-bach", });const response = await vectorStore.similaritySearch("scared", 2);console.log(response);/*[ Document { pageContent: 'Achilles: Oh, no! ', metadata: {} }, Document { pageContent: 'Achilles: Yiikes! What is that? ', metadata: { id: 1 } }]*/// You can also filter by metadataconst filteredResponse = await vectorStore.similaritySearch("scared", 2, { id: 1,});console.log(filteredResponse);/*[ Document { pageContent: 'Achilles: Yiikes! What is that? ', metadata: { id: 1 } }]*/API Reference:Chroma from langchain/vectorstores/chromaOpenAIEmbeddings from langchain/embeddings/openaiUsage, Query docs from existing collectionimport { Chroma } from "langchain/vectorstores/chroma";import { OpenAIEmbeddings } from "langchain/embeddings/openai";const vectorStore = await Chroma.fromExistingCollection( new OpenAIEmbeddings(), { collectionName: "godel-escher-bach" });const response = await vectorStore.similaritySearch("scared", 2);console.log(response);/*[ Document { pageContent: 'Achilles: Oh, no! ', metadata: {} }, Document { pageContent: 'Achilles: Yiikes! What is that? |
Usage, delete docs

import { Chroma } from "langchain/vectorstores/chroma";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const embeddings = new OpenAIEmbeddings();
const vectorStore = new Chroma(embeddings, {
  collectionName: "test-deletion",
});

const documents = [
  {
    pageContent: `Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious Little
Harmonic Labyrinth of the dreaded Majotaur?`,
    metadata: { speaker: "Tortoise" },
  },
  {
    pageContent: "Achilles: Yiikes! What is that?",
    metadata: { speaker: "Achilles" },
  },
  {
    pageContent: `Tortoise: They say-although I person never believed it myself-that an I
Majotaur has created a tiny labyrinth sits in a pit in the middle of it,
waiting innocent victims to get lost in its fears complexity. Then, when they
wander and dazed into the center, he laughs and laughs at them-so hard, that
he laughs them to death!`,
    metadata: { speaker: "Tortoise" },
  },
  {
    pageContent: "Achilles: Oh, no!",
    metadata: { speaker: "Achilles" },
  },
  {
    pageContent: "Tortoise: But it's only a myth. Courage, Achilles.",
    metadata: { speaker: "Tortoise" },
  },
];

// Also supports an additional {ids: []} parameter for upsertion
const ids = await vectorStore.addDocuments(documents);

const response = await vectorStore.similaritySearch("scared", 2);
console.log(response);
/*
[
  Document { pageContent: 'Achilles: Oh, no!', metadata: { speaker: 'Achilles' } },
  Document { pageContent: 'Achilles: Yiikes! What is that?', metadata: { speaker: 'Achilles' } }
]
*/

// You can also pass a "filter" parameter instead
await vectorStore.delete({ ids });

const response2 = await vectorStore.similaritySearch("scared", 2);
console.log(response2);
/*
[]
*/

API Reference: Chroma from langchain/vectorstores/chroma, OpenAIEmbeddings from langchain/embeddings/openai
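As the comment in the example notes, delete can take a filter in place of explicit ids. A sketch under that assumption, reusing the store and the speaker metadata above:

// Remove every document whose metadata matches the filter,
// instead of enumerating ids explicitly
await vectorStore.delete({ filter: { speaker: "Tortoise" } });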
Elasticsearch

Compatibility: Only available on Node.js.

Elasticsearch is a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads. It also supports vector search using the k-nearest neighbor (kNN) algorithm, as well as custom models for Natural Language Processing (NLP). You can read more about the support for vector search in Elasticsearch here.

LangChain.js accepts @elastic/elasticsearch as the client for the Elasticsearch vectorstore.

Setup

npm install -S @elastic/elasticsearch
yarn add @elastic/elasticsearch
pnpm add @elastic/elasticsearch

You'll also need to have an Elasticsearch instance running. You can use the official Docker image to get started, or you can use Elastic Cloud, the official cloud service provided by Elastic.

For connecting to Elastic Cloud, you can read the documentation here for obtaining an API key.
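For Elastic Cloud specifically, the @elastic/elasticsearch client can also be configured with the deployment's Cloud ID plus an API key rather than a node URL. A minimal sketch; the environment variable names are placeholders:

import { Client } from "@elastic/elasticsearch";

// Connect to an Elastic Cloud deployment via its Cloud ID and an API key
const client = new Client({
  cloud: { id: process.env.ELASTIC_CLOUD_ID! },
  auth: { apiKey: process.env.ELASTIC_API_KEY! },
});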
Example: index docs, vector search and LLM integration

Below is an example that indexes 4 documents in Elasticsearch, runs a vector search query, and finally uses an LLM to answer a question in natural language based on the retrieved documents.

import { Client, ClientOptions } from "@elastic/elasticsearch";
import { Document } from "langchain/document";
import { OpenAI } from "langchain/llms/openai";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import {
  ElasticClientArgs,
  ElasticVectorSearch,
} from "langchain/vectorstores/elasticsearch";
import { VectorDBQAChain } from "langchain/chains";

// to run this first run Elastic's docker-container with `docker-compose up -d --build`
export async function run() {
  const config: ClientOptions = {
    node: process.env.ELASTIC_URL ?? "http://127.0.0.1:9200",
  };
  if (process.env.ELASTIC_API_KEY) {
    config.auth = {
      apiKey: process.env.ELASTIC_API_KEY,
    };
  } else if (process.env.ELASTIC_USERNAME && process.env.ELASTIC_PASSWORD) {
    config.auth = {
      username: process.env.ELASTIC_USERNAME,
      password: process.env.ELASTIC_PASSWORD,
    };
  }
  const clientArgs: ElasticClientArgs = {
    client: new Client(config),
    indexName: process.env.ELASTIC_INDEX ?? "test_vectorstore",
  };

  // Index documents
  const docs = [
    new Document({
      metadata: { foo: "bar" },
      pageContent: "Elasticsearch is a powerful vector db",
    }),
    new Document({
      metadata: { foo: "bar" },
      pageContent: "the quick brown fox jumped over the lazy dog",
    }),
    new Document({
      metadata: { baz: "qux" },
      pageContent: "lorem ipsum dolor sit amet",
    }),
    new Document({
      metadata: { baz: "qux" },
      pageContent:
        "Elasticsearch a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads.",
    }),
  ];

  const embeddings = new OpenAIEmbeddings(undefined, {
    baseOptions: { temperature: 0 },
  });
  // await ElasticVectorSearch.fromDocuments(docs, embeddings, clientArgs);
  const vectorStore = new ElasticVectorSearch(embeddings, clientArgs);

  // Also supports an additional {ids: []} parameter for upsertion
  const ids = await vectorStore.addDocuments(docs);

  /* Search the vector DB independently with meta filters */
  const results = await vectorStore.similaritySearch("fox jump", 1);
  console.log(JSON.stringify(results, null, 2));
  /*
  [
    {
      "pageContent": "the quick brown fox jumped over the lazy dog",
      "metadata": { "foo": "bar" }
    }
  ]
  */

  /* Use as part of a chain (currently no metadata filters) for LLM query */
  const model = new OpenAI();
  const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
    k: 1,
    returnSourceDocuments: true,
  });
  const response = await chain.call({ query: "What is Elasticsearch?" });
  console.log(JSON.stringify(response, null, 2));
  /*
  {
    "text": " Elasticsearch is a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads.",
    "sourceDocuments": [
      {
        "pageContent": "Elasticsearch a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads.",
        "metadata": { "baz": "qux" }
      }
    ]
  }
  */

  await vectorStore.delete({ ids });

  const response2 = await chain.call({ query: "What is Elasticsearch?" });
  console.log(JSON.stringify(response2, null, 2));
  /* [] */
}

API Reference: Document from langchain/document, OpenAI from langchain/llms/openai, OpenAIEmbeddings from langchain/embeddings/openai, ElasticClientArgs from langchain/vectorstores/elasticsearch, ElasticVectorSearch from langchain/vectorstores/elasticsearch, VectorDBQAChain from langchain/chains
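VectorDBQAChain takes the vector store directly; an equivalent pattern is to wrap the store as a retriever and use RetrievalQAChain. A sketch reusing model and vectorStore from the example above; asRetriever(k) comes from the base VectorStore class:

import { RetrievalQAChain } from "langchain/chains";

// Build the same QA flow on top of a retriever instead of the raw store
const qaChain = RetrievalQAChain.fromLLM(model, vectorStore.asRetriever(1), {
  returnSourceDocuments: true,
});
const answer = await qaChain.call({ query: "What is Elasticsearch?" });
console.log(answer.text);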
Faiss

Compatibility: Only available on Node.js.

Faiss is a library for efficient similarity search and clustering of dense vectors.

LangChain.js supports using Faiss as a vectorstore that can be saved to file. It also provides the ability to read the saved file from Python's implementation.

Setup

Install faiss-node, the Node.js bindings for Faiss:

npm install -S faiss-node
yarn add faiss-node
pnpm add faiss-node

To enable reading a file saved from Python's implementation, pickleparser also needs to be installed:

npm install -S pickleparser
yarn add pickleparser
pnpm add pickleparser

Usage

Create a new index from texts

import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

export const run = async () => {
  const vectorStore = await FaissStore.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings()
  );
  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
};

API Reference: FaissStore from langchain/vectorstores/faiss, OpenAIEmbeddings from langchain/embeddings/openai
Create a new index from a loader

import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await FaissStore.fromDocuments(docs, new OpenAIEmbeddings());

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);

API Reference: FaissStore from langchain/vectorstores/faiss, OpenAIEmbeddings from langchain/embeddings/openai, TextLoader from langchain/document_loaders/fs/text

Merging indexes and creating a new index from another instance

import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

export const run = async () => {
  // Create an initial vector store
  const vectorStore = await FaissStore.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings()
  );
  // Create another vector store from texts
  const vectorStore2 = await FaissStore.fromTexts(
    ["Some text"],
    [{ id: 1 }],
    new OpenAIEmbeddings()
  );

  // merge the first vector store into vectorStore2
  await vectorStore2.mergeFrom(vectorStore);
  const resultOne = await vectorStore2.similaritySearch("hello world", 1);
  console.log(resultOne);

  // You can also create a new vector store from another FaissStore index
  const vectorStore3 = await FaissStore.fromIndex(
    vectorStore2,
    new OpenAIEmbeddings()
  );
  const resultTwo = await vectorStore3.similaritySearch("Bye bye", 1);
  console.log(resultTwo);
};

API Reference: FaissStore from langchain/vectorstores/faiss, OpenAIEmbeddings from langchain/embeddings/openai
Save an index to file and load it again

import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// Create a vector store through any method, here from texts as an example
const vectorStore = await FaissStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// Save the vector store to a directory
const directory = "your/directory/here";
await vectorStore.save(directory);

// Load the vector store from the same directory
const loadedVectorStore = await FaissStore.load(directory, new OpenAIEmbeddings());

// vectorStore and loadedVectorStore are identical
const result = await loadedVectorStore.similaritySearch("hello world", 1);
console.log(result);

API Reference: FaissStore from langchain/vectorstores/faiss, OpenAIEmbeddings from langchain/embeddings/openai

Load the saved file from Python's implementation

import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// The directory of data saved from Python
const directory = "your/directory/here";

// Load the vector store from the directory
const loadedVectorStore = await FaissStore.loadFromPython(
  directory,
  new OpenAIEmbeddings()
);

// Search for the most similar document
const result = await loadedVectorStore.similaritySearch("test", 2);
console.log("result", result);

API Reference: FaissStore from langchain/vectorstores/faiss, OpenAIEmbeddings from langchain/embeddings/openai
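A loaded index behaves like any other LangChain vector store, so it can back a retriever directly. A minimal sketch; the directory path is a placeholder as in the examples above, and asRetriever(k) is inherited from the base VectorStore class:

import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// Reload the persisted index, then expose it as a retriever
const store = await FaissStore.load("your/directory/here", new OpenAIEmbeddings());
const retriever = store.asRetriever(2);

// Fetch the k most relevant documents for a query
const docs = await retriever.getRelevantDocuments("hello world");
console.log(docs);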
Get startedIntroductionInstallationQuickstartModulesModel I/OData connectionDocument loadersDocument transformersText embedding modelsVector storesIntegrationsMemoryAnalyticDBChromaElasticsearchFaissHNSWLibLanceDBMilvusMongoDB AtlasMyScaleOpenSearchPineconePrismaQdrantRedisSingleStoreSupabaseTigrisTypeORMTypesenseUSearchVectaraWeaviateXataZepRetrieversExperimentalCaching embeddingsChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesData connectionVector storesIntegrationsFaissOn this pageFaissCompatibilityOnly available on Node.js.Faiss is a library for efficient similarity search and clustering of dense vectors.Langchainjs supports using Faiss as a vectorstore that can be saved to file. |
4e9727215e95-1396 | Faiss

Compatibility: only available on Node.js.

Faiss is a library for efficient similarity search and clustering of dense vectors. LangChainJS supports using Faiss as a vector store that can be saved to file.
4e9727215e95-1397 | It can also read index files saved by Faiss's Python implementation.

Setup

Install faiss-node, the Node.js bindings for Faiss:

npm install -S faiss-node
yarn add faiss-node
pnpm add faiss-node

To read index files saved by the Python implementation, also install pickleparser:

npm install -S pickleparser
yarn add pickleparser
pnpm add pickleparser

Usage

Create a new index from texts:

import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

export const run = async () => {
  const vectorStore = await FaissStore.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings()
  );

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
};

API Reference: FaissStore from langchain/vectorstores/faiss, OpenAIEmbeddings from langchain/embeddings/openai

Create a new index from a loader:

import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await FaissStore.fromDocuments(docs, new OpenAIEmbeddings());

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);

API Reference: FaissStore from langchain/vectorstores/faiss, OpenAIEmbeddings from langchain/embeddings/openai, TextLoader from langchain/document_loaders/fs/text
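The examples above return matching documents only. If you also want the underlying scores, the LangChainJS vector store interface exposes similaritySearchWithScore; a minimal sketch, assuming the Faiss store reports raw distances, where smaller means more similar:

import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const vectorStore = await FaissStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// Each entry is a [Document, score] pair; with Faiss the score is assumed
// to be a distance, so smaller values indicate closer matches.
const resultsWithScore = await vectorStore.similaritySearchWithScore(
  "hello world",
  2
);
for (const [doc, score] of resultsWithScore) {
  console.log(doc.pageContent, score);
}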
4e9727215e95-1398 | Merging indexes and creating a new index from another instance:

import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

export const run = async () => {
  // Create an initial vector store
  const vectorStore = await FaissStore.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings()
  );

  // Create another vector store from texts
  const vectorStore2 = await FaissStore.fromTexts(
    ["Some text"],
    [{ id: 1 }],
    new OpenAIEmbeddings()
  );

  // Merge the first vector store into vectorStore2
  await vectorStore2.mergeFrom(vectorStore);

  const resultOne = await vectorStore2.similaritySearch("hello world", 1);
  console.log(resultOne);

  // You can also create a new vector store from another FaissStore index
  const vectorStore3 = await FaissStore.fromIndex(
    vectorStore2,
    new OpenAIEmbeddings()
  );

  const resultTwo = await vectorStore3.similaritySearch("Bye bye", 1);
  console.log(resultTwo);
};

API Reference: FaissStore from langchain/vectorstores/faiss, OpenAIEmbeddings from langchain/embeddings/openai
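Merging is one way to combine stores; to simply append new documents to an existing index, the base vector store interface also provides addDocuments. A minimal sketch, assuming addDocuments behaves here as it does for other LangChainJS vector stores:

import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { Document } from "langchain/document";

// Start from a small index...
const vectorStore = await FaissStore.fromTexts(
  ["Hello world"],
  [{ id: 1 }],
  new OpenAIEmbeddings()
);

// ...then grow it in place instead of building and merging a second store.
await vectorStore.addDocuments([
  new Document({ pageContent: "Bye bye", metadata: { id: 2 } }),
  new Document({ pageContent: "hello nice world", metadata: { id: 3 } }),
]);

const result = await vectorStore.similaritySearch("hello world", 1);
console.log(result);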
4e9727215e95-1399 | "langchain/vectorstores/faiss";import { OpenAIEmbeddings } from "langchain/embeddings/openai";// Create a vector store through any method, here from texts as an exampleconst vectorStore = await FaissStore.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings());// Save the vector store to a directoryconst directory = "your/directory/here";await vectorStore.save(directory);// Load the vector store from the same directoryconst loadedVectorStore = await FaissStore.load( directory, new OpenAIEmbeddings());// vectorStore and loadedVectorStore are identicalconst result = await loadedVectorStore.similaritySearch("hello world", 1);console.log(result);API Reference:FaissStore from langchain/vectorstores/faissOpenAIEmbeddings from langchain/embeddings/openaiLoad the saved file from Python's implementationimport { FaissStore } from "langchain/vectorstores/faiss";import { OpenAIEmbeddings } from "langchain/embeddings/openai";// The directory of data saved from Pythonconst directory = "your/directory/here";// Load the vector store from the directoryconst loadedVectorStore = await FaissStore.loadFromPython( directory, new OpenAIEmbeddings());// Search for the most similar documentconst result = await loadedVectorStore.similaritySearch("test", 2);console.log("result", result);API Reference:FaissStore from langchain/vectorstores/faissOpenAIEmbeddings from