Faiss

Compatibility: Only available on Node.js.
Faiss is a library for efficient similarity search and clustering of dense vectors.
Langchainjs supports using Faiss as a vector store that can be saved to a file. It also provides the ability to read index files saved from Python's implementation.
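For intuition, the similaritySearch calls used throughout this page boil down to a nearest-neighbour lookup over embedding vectors. Here is a minimal brute-force sketch in plain TypeScript with toy 2-d vectors; Faiss exists precisely to make this kind of scan fast at scale with optimized index structures:

```typescript
// Brute-force top-k similarity search over dense vectors.
// Illustrative only: Faiss avoids the O(n) scan with specialized indexes.

type Doc = { text: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function similaritySearch(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k);
}

// Toy 2-d "embeddings" (made up; real embeddings have hundreds of dimensions).
const docs: Doc[] = [
  { text: "Hello world", vector: [1, 0.1] },
  { text: "Bye bye", vector: [-1, 0.2] },
  { text: "hello nice world", vector: [0.9, 0.2] },
];

console.log(similaritySearch([1, 0], docs, 1)[0].text); // "Hello world"
```

With a real embedding model the vectors come from an Embeddings instance such as OpenAIEmbeddings, but the ranking step is the same idea.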
Setup

Install faiss-node, which provides Node.js bindings for Faiss:
```shell
npm install -S faiss-node
# or
yarn add faiss-node
# or
pnpm add faiss-node
```
To read index files saved from Python's implementation, you also need to install pickleparser:
```shell
npm install -S pickleparser
# or
yarn add pickleparser
# or
pnpm add pickleparser
```
Usage

Create a new index from texts

```typescript
import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

export const run = async () => {
  const vectorStore = await FaissStore.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings()
  );
  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
};
```
API Reference: FaissStore from langchain/vectorstores/faiss, OpenAIEmbeddings from langchain/embeddings/openai
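Note that the second argument to fromTexts is a parallel array of metadata objects: metadatas[i] is attached to the document built from texts[i]. A simplified sketch of that pairing (the Document shape here is reduced for illustration):

```typescript
// texts and metadatas are parallel arrays: metadatas[i] becomes the
// metadata of the document built from texts[i]. (Simplified Document shape.)
type Document = { pageContent: string; metadata: Record<string, unknown> };

function toDocuments(
  texts: string[],
  metadatas: Record<string, unknown>[]
): Document[] {
  return texts.map((t, i) => ({ pageContent: t, metadata: metadatas[i] }));
}

const docs = toDocuments(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }]
);
console.log(docs[0].pageContent); // "Hello world"
```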
Create a new index from a loader

```typescript
import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await FaissStore.fromDocuments(docs, new OpenAIEmbeddings());

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
```
API Reference: FaissStore from langchain/vectorstores/faiss, OpenAIEmbeddings from langchain/embeddings/openai, TextLoader from langchain/document_loaders/fs/text
Merging indexes and creating a new index from another instance

```typescript
import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

export const run = async () => {
  // Create an initial vector store
  const vectorStore = await FaissStore.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings()
  );

  // Create another vector store from texts
  const vectorStore2 = await FaissStore.fromTexts(
    ["Some text"],
    [{ id: 1 }],
    new OpenAIEmbeddings()
  );

  // Merge the first vector store into vectorStore2
  await vectorStore2.mergeFrom(vectorStore);
  const resultOne = await vectorStore2.similaritySearch("hello world", 1);
  console.log(resultOne);

  // You can also create a new vector store from another FaissStore index
  const vectorStore3 = await FaissStore.fromIndex(
    vectorStore2,
    new OpenAIEmbeddings()
  );
  const resultTwo = await vectorStore3.similaritySearch("Bye bye", 1);
  console.log(resultTwo);
};
```

API Reference: FaissStore from langchain/vectorstores/faiss, OpenAIEmbeddings from langchain/embeddings/openai
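Conceptually, merging appends the source store's vectors and documents onto the target. The following is a sketch of that idea, not FaissStore's actual implementation:

```typescript
// Conceptual sketch of merging two vector stores: concatenate the source's
// vectors and texts onto the target. (Not FaissStore's real internals.)
type Store = { vectors: number[][]; texts: string[] };

function mergeFrom(target: Store, source: Store): void {
  target.vectors.push(...source.vectors);
  target.texts.push(...source.texts);
}

const a: Store = { vectors: [[1, 0]], texts: ["Hello world"] };
const b: Store = { vectors: [[0, 1]], texts: ["Some text"] };

mergeFrom(b, a);
console.log(b.texts); // → ["Some text", "Hello world"]
```

After the merge, searches against the target can return documents that originated in either store.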
Save an index to file and load it again

```typescript
import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// Create a vector store through any method; here from texts as an example
const vectorStore = await FaissStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// Save the vector store to a directory
const directory = "your/directory/here";
await vectorStore.save(directory);

// Load the vector store from the same directory
const loadedVectorStore = await FaissStore.load(directory, new OpenAIEmbeddings());

// vectorStore and loadedVectorStore are identical
const result = await loadedVectorStore.similaritySearch("hello world", 1);
console.log(result);
```

API Reference: FaissStore from langchain/vectorstores/faiss, OpenAIEmbeddings from langchain/embeddings/openai
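Conceptually, saving a vector store means persisting both the raw vectors and a docstore that maps them back to documents. A minimal JSON round-trip sketch follows; the file name store.json and the Entry shape are illustrative only, not FaissStore's actual on-disk format:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Minimal persistence sketch: write vectors plus documents as JSON and read
// them back. (Illustrative; FaissStore persists its own index format.)
type Entry = { vector: number[]; pageContent: string };

function saveStore(dir: string, entries: Entry[]): void {
  fs.mkdirSync(dir, { recursive: true });
  fs.writeFileSync(path.join(dir, "store.json"), JSON.stringify(entries));
}

function loadStore(dir: string): Entry[] {
  return JSON.parse(fs.readFileSync(path.join(dir, "store.json"), "utf8"));
}

const dir = fs.mkdtempSync(path.join(os.tmpdir(), "store-"));
saveStore(dir, [{ vector: [1, 0], pageContent: "Hello world" }]);
console.log(loadStore(dir)[0].pageContent); // "Hello world"
```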
Load the saved file from Python's implementation

```typescript
import { FaissStore } from "langchain/vectorstores/faiss";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// The directory of data saved from Python
const directory = "your/directory/here";

// Load the vector store from the directory
const loadedVectorStore = await FaissStore.loadFromPython(
  directory,
  new OpenAIEmbeddings()
);

// Search for the most similar document
const result = await loadedVectorStore.similaritySearch("test", 2);
console.log("result", result);
```

API Reference: FaissStore from langchain/vectorstores/faiss, OpenAIEmbeddings from langchain/embeddings/openai
HNSWLib
Compatibility: Only available on Node.js.

HNSWLib is an in-memory vector store that can be saved to a file.
It uses the HNSWLib library under the hood.

Setup

Caution: On Windows, you might need to install Visual Studio first in order to properly build the hnswlib-node package.

You can install it with:

```shell
npm install hnswlib-node
# or
yarn add hnswlib-node
# or
pnpm add hnswlib-node
```

Usage

Create a new index from texts

```typescript
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const vectorStore = await HNSWLib.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
```

API Reference: HNSWLib from langchain/vectorstores/hnswlib, OpenAIEmbeddings from langchain/embeddings/openai

Create a new index from a loader

```typescript
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

// Search for the most similar document
const result = await vectorStore.similaritySearch("hello world", 1);
console.log(result);
```

API Reference: HNSWLib from langchain/vectorstores/hnswlib, OpenAIEmbeddings from langchain/embeddings/openai, TextLoader from langchain/document_loaders/fs/text

Save an index to a file and load it again

```typescript
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// Create a vector store through any method; here from texts as an example
const vectorStore = await HNSWLib.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// Save the vector store to a directory
const directory = "your/directory/here";
await vectorStore.save(directory);

// Load the vector store from the same directory
const loadedVectorStore = await HNSWLib.load(directory, new OpenAIEmbeddings());

// vectorStore and loadedVectorStore are identical
const result = await loadedVectorStore.similaritySearch("hello world", 1);
console.log(result);
```

API Reference: HNSWLib from langchain/vectorstores/hnswlib, OpenAIEmbeddings from langchain/embeddings/openai

Filter documents

```typescript
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const vectorStore = await HNSWLib.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const result = await vectorStore.similaritySearch(
  "hello world",
  10,
  (document) => document.metadata.id === 3
);

// Only "hello nice world" will be returned
console.log(result);
```

API Reference: HNSWLib from langchain/vectorstores/hnswlib, OpenAIEmbeddings from langchain/embeddings/openai

Delete index

```typescript
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// The directory the vector store was saved to
const directory = "your/directory/here";

// Load the vector store from the directory, then delete it
const loadedVectorStore = await HNSWLib.load(directory, new OpenAIEmbeddings());
await loadedVectorStore.delete({ directory });
```

API Reference: HNSWLib from langchain/vectorstores/hnswlib, OpenAIEmbeddings from langchain/embeddings/openai
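The filter argument shown in the filter-documents example is a plain predicate over each candidate document: it restricts which documents are eligible to be returned. A sketch of that behavior (not HNSWLib's internals):

```typescript
// A similarity-search filter is a predicate applied to candidate documents
// before results are returned. (Conceptual sketch, simplified Document shape.)
type Doc = { pageContent: string; metadata: { id: number } };

function filteredSearch(
  docs: Doc[],
  k: number,
  filter: (d: Doc) => boolean
): Doc[] {
  return docs.filter(filter).slice(0, k);
}

const docs: Doc[] = [
  { pageContent: "Hello world", metadata: { id: 2 } },
  { pageContent: "Bye bye", metadata: { id: 1 } },
  { pageContent: "hello nice world", metadata: { id: 3 } },
];

// Only the document with metadata.id === 3 survives the filter.
console.log(filteredSearch(docs, 10, (d) => d.metadata.id === 3));
```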
Get startedIntroductionInstallationQuickstartModulesModel I/OData connectionDocument loadersDocument transformersText embedding modelsVector storesIntegrationsMemoryAnalyticDBChromaElasticsearchFaissHNSWLibLanceDBMilvusMongoDB AtlasMyScaleOpenSearchPineconePrismaQdrantRedisSingleStoreSupabaseTigrisTypeORMTypesenseUSearchVectaraWeaviateXataZepRetrieversExperimentalCaching embeddingsChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesData connectionVector storesIntegrationsHNSWLibOn this pageHNSWLibCompatibilityOnly available on Node.js.HNSWLib is an in-memory vectorstore that can be saved to a file. |
4e9727215e95-1415 | It uses HNSWLib.SetupcautionOn Windows, you might need to install Visual Studio first in order to properly build the hnswlib-node package.You can install it withnpmYarnpnpmnpm install hnswlib-nodeyarn add hnswlib-nodepnpm add hnswlib-nodeUsageCreate a new index from textsimport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";const vectorStore = await HNSWLib.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings());const resultOne = await vectorStore.similaritySearch("hello world", 1);console.log(resultOne);API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiCreate a new index from a loaderimport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { TextLoader } from "langchain/document_loaders/fs/text";// Create docs with a loaderconst loader = new TextLoader("src/document_loaders/example_data/example.txt");const docs = await loader.load();// Load the docs into the vector storeconst vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());// Search for the most similar documentconst result = await vectorStore.similaritySearch("hello world", 1);console.log(result);API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiTextLoader from |
4e9727215e95-1416 | langchain/document_loaders/fs/textSave an index to a file and load it againimport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";// Create a vector store through any method, here from texts as an exampleconst vectorStore = await HNSWLib.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings());// Save the vector store to a directoryconst directory = "your/directory/here";await vectorStore.save(directory);// Load the vector store from the same directoryconst loadedVectorStore = await HNSWLib.load(directory, new OpenAIEmbeddings());// vectorStore and loadedVectorStore are identicalconst result = await loadedVectorStore.similaritySearch("hello world", 1);console.log(result);API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiFilter documentsimport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";const vectorStore = await HNSWLib.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings());const result = await vectorStore.similaritySearch( "hello world", 10, (document) => document.metadata.id === 3);// only "hello nice world" will be returnedconsole.log(result);API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiDelete |
4e9727215e95-1417 | indeximport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";// Save the vector store to a directoryconst directory = "your/directory/here";// Load the vector store from the same directoryconst loadedVectorStore = await HNSWLib.load(directory, new OpenAIEmbeddings());await loadedVectorStore.delete({ directory });API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiPreviousFaissNextLanceDBSetupUsageCreate a new index from textsCreate a new index from a loaderSave an index to a file and load it againFilter documentsDelete index
ModulesData connectionVector storesIntegrationsHNSWLibOn this pageHNSWLibCompatibilityOnly available on Node.js.HNSWLib is an in-memory vectorstore that can be saved to a file. |
4e9727215e95-1418 | It uses HNSWLib.SetupcautionOn Windows, you might need to install Visual Studio first in order to properly build the hnswlib-node package.You can install it withnpmYarnpnpmnpm install hnswlib-nodeyarn add hnswlib-nodepnpm add hnswlib-nodeUsageCreate a new index from textsimport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";const vectorStore = await HNSWLib.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings());const resultOne = await vectorStore.similaritySearch("hello world", 1);console.log(resultOne);API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiCreate a new index from a loaderimport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { TextLoader } from "langchain/document_loaders/fs/text";// Create docs with a loaderconst loader = new TextLoader("src/document_loaders/example_data/example.txt");const docs = await loader.load();// Load the docs into the vector storeconst vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());// Search for the most similar documentconst result = await vectorStore.similaritySearch("hello world", 1);console.log(result);API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiTextLoader from |
4e9727215e95-1419 | langchain/document_loaders/fs/textSave an index to a file and load it againimport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";// Create a vector store through any method, here from texts as an exampleconst vectorStore = await HNSWLib.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings());// Save the vector store to a directoryconst directory = "your/directory/here";await vectorStore.save(directory);// Load the vector store from the same directoryconst loadedVectorStore = await HNSWLib.load(directory, new OpenAIEmbeddings());// vectorStore and loadedVectorStore are identicalconst result = await loadedVectorStore.similaritySearch("hello world", 1);console.log(result);API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiFilter documentsimport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";const vectorStore = await HNSWLib.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings());const result = await vectorStore.similaritySearch( "hello world", 10, (document) => document.metadata.id === 3);// only "hello nice world" will be returnedconsole.log(result);API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiDelete |
4e9727215e95-1420 | indeximport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";// Save the vector store to a directoryconst directory = "your/directory/here";// Load the vector store from the same directoryconst loadedVectorStore = await HNSWLib.load(directory, new OpenAIEmbeddings());await loadedVectorStore.delete({ directory });API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiPreviousFaissNextLanceDBSetupUsageCreate a new index from textsCreate a new index from a loaderSave an index to a file and load it againFilter documentsDelete index
ModulesData connectionVector storesIntegrationsHNSWLibOn this pageHNSWLibCompatibilityOnly available on Node.js.HNSWLib is an in-memory vectorstore that can be saved to a file. |
4e9727215e95-1421 | It uses HNSWLib.SetupcautionOn Windows, you might need to install Visual Studio first in order to properly build the hnswlib-node package.You can install it withnpmYarnpnpmnpm install hnswlib-nodeyarn add hnswlib-nodepnpm add hnswlib-nodeUsageCreate a new index from textsimport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";const vectorStore = await HNSWLib.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings());const resultOne = await vectorStore.similaritySearch("hello world", 1);console.log(resultOne);API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiCreate a new index from a loaderimport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { TextLoader } from "langchain/document_loaders/fs/text";// Create docs with a loaderconst loader = new TextLoader("src/document_loaders/example_data/example.txt");const docs = await loader.load();// Load the docs into the vector storeconst vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());// Search for the most similar documentconst result = await vectorStore.similaritySearch("hello world", 1);console.log(result);API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiTextLoader from |
4e9727215e95-1422 | langchain/document_loaders/fs/textSave an index to a file and load it againimport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";// Create a vector store through any method, here from texts as an exampleconst vectorStore = await HNSWLib.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings());// Save the vector store to a directoryconst directory = "your/directory/here";await vectorStore.save(directory);// Load the vector store from the same directoryconst loadedVectorStore = await HNSWLib.load(directory, new OpenAIEmbeddings());// vectorStore and loadedVectorStore are identicalconst result = await loadedVectorStore.similaritySearch("hello world", 1);console.log(result);API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiFilter documentsimport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";const vectorStore = await HNSWLib.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings());const result = await vectorStore.similaritySearch( "hello world", 10, (document) => document.metadata.id === 3);// only "hello nice world" will be returnedconsole.log(result);API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiDelete |
4e9727215e95-1423 | indeximport { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";// Save the vector store to a directoryconst directory = "your/directory/here";// Load the vector store from the same directoryconst loadedVectorStore = await HNSWLib.load(directory, new OpenAIEmbeddings());await loadedVectorStore.delete({ directory });API Reference:HNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiPreviousFaissNextLanceDB
HNSWLib

Compatibility: Only available on Node.js.

HNSWLib is an in-memory vectorstore that can be saved to a file. It uses the HNSWLib library.

Setup

Caution: On Windows, you might need to install Visual Studio first in order to properly build the hnswlib-node package.

You can install it with one of:

npm install hnswlib-node
yarn add hnswlib-node
pnpm add hnswlib-node

Usage

Create a new index from texts:

import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const vectorStore = await HNSWLib.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);

API Reference: HNSWLib from langchain/vectorstores/hnswlib; OpenAIEmbeddings from langchain/embeddings/openai

Create a new index from a loader:

import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

// Search for the most similar document
const result = await vectorStore.similaritySearch("hello world", 1);
console.log(result);

API Reference: HNSWLib from langchain/vectorstores/hnswlib; OpenAIEmbeddings from langchain/embeddings/openai; TextLoader from langchain/document_loaders/fs/text

Save an index to a file and load it again:

import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// Create a vector store through any method, here from texts as an example
const vectorStore = await HNSWLib.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// Save the vector store to a directory
const directory = "your/directory/here";
await vectorStore.save(directory);

// Load the vector store from the same directory
const loadedVectorStore = await HNSWLib.load(directory, new OpenAIEmbeddings());

// vectorStore and loadedVectorStore are identical
const result = await loadedVectorStore.similaritySearch("hello world", 1);
console.log(result);

Filter documents:

import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const vectorStore = await HNSWLib.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const result = await vectorStore.similaritySearch(
  "hello world",
  10,
  (document) => document.metadata.id === 3
);

// only "hello nice world" will be returned
console.log(result);

Delete index:

import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// Load the vector store from the directory it was saved to
const directory = "your/directory/here";
const loadedVectorStore = await HNSWLib.load(directory, new OpenAIEmbeddings());

await loadedVectorStore.delete({ directory });
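The filter argument shown above is simply a predicate evaluated against each candidate document's metadata: only documents for which it returns true can appear in the results. The sketch below illustrates that idea in plain JavaScript with a simplified stand-in for LangChain's `Document` shape; it is a conceptual illustration, not HNSWLib's actual implementation (which applies the filter during nearest-neighbor search).

```javascript
// Simplified stand-in for LangChain's Document: { pageContent, metadata }
const docs = [
  { pageContent: "Hello world", metadata: { id: 2 } },
  { pageContent: "Bye bye", metadata: { id: 1 } },
  { pageContent: "hello nice world", metadata: { id: 3 } },
];

// A metadata filter is just a function from document to boolean.
// Documents failing the predicate are excluded before ranking.
function applyFilter(documents, predicate) {
  return documents.filter(predicate);
}

const matches = applyFilter(docs, (document) => document.metadata.id === 3);
console.log(matches.map((d) => d.pageContent)); // [ 'hello nice world' ]
```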
Page Title: LanceDB | 🦜️🔗 Langchain
LanceDB

LanceDB is an embedded vector database for AI applications.
It is open source and distributed with an Apache-2.0 license. LanceDB datasets are persisted to disk and can be shared between Node.js and Python.

Setup

Install the LanceDB Node.js bindings with one of:

npm install -S vectordb
yarn add vectordb
pnpm add vectordb

Usage

Create a new index from texts:

import { LanceDB } from "langchain/vectorstores/lancedb";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { connect } from "vectordb";
import * as fs from "node:fs/promises";
import * as path from "node:path";
import os from "node:os";

export const run = async () => {
  const dir = await fs.mkdtemp(path.join(os.tmpdir(), "lancedb-"));
  const db = await connect(dir);
  const table = await db.createTable("vectors", [
    { vector: Array(1536), text: "sample", id: 1 },
  ]);

  const vectorStore = await LanceDB.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings(),
    { table }
  );

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
  // [ Document { pageContent: 'hello nice world', metadata: { id: 3 } } ]
};

API Reference: LanceDB from langchain/vectorstores/lancedb; OpenAIEmbeddings from langchain/embeddings/openai

Create a new index from a loader:

import { LanceDB } from "langchain/vectorstores/lancedb";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";
import fs from "node:fs/promises";
import path from "node:path";
import os from "node:os";
import { connect } from "vectordb";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

export const run = async () => {
  const dir = await fs.mkdtemp(path.join(os.tmpdir(), "lancedb-"));
  const db = await connect(dir);
  const table = await db.createTable("vectors", [
    { vector: Array(1536), text: "sample", source: "a" },
  ]);

  const vectorStore = await LanceDB.fromDocuments(docs, new OpenAIEmbeddings(), {
    table,
  });

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
  // [
  //   Document {
  //     pageContent: 'Foo\nBar\nBaz\n\n',
  //     metadata: { source: 'src/document_loaders/example_data/example.txt' }
  //   }
  // ]
};

API Reference: LanceDB from langchain/vectorstores/lancedb; OpenAIEmbeddings from langchain/embeddings/openai; TextLoader from langchain/document_loaders/fs/text

Open an existing dataset:

import { LanceDB } from "langchain/vectorstores/lancedb";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { connect } from "vectordb";
import * as fs from "node:fs/promises";
import * as path from "node:path";
import os from "node:os";

// You can open a LanceDB dataset created elsewhere, such as LangChain Python,
// by opening an existing table
export const run = async () => {
  const uri = await createdTestDb();
  const db = await connect(uri);
  const table = await db.openTable("vectors");

  const vectorStore = new LanceDB(new OpenAIEmbeddings(), { table });

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
  // [ Document { pageContent: 'Hello world', metadata: { id: 1 } } ]
};

async function createdTestDb(): Promise<string> {
  const dir = await fs.mkdtemp(path.join(os.tmpdir(), "lancedb-"));
  const db = await connect(dir);
  await db.createTable("vectors", [
    { vector: Array(1536), text: "Hello world", id: 1 },
    { vector: Array(1536), text: "Bye bye", id: 2 },
    { vector: Array(1536), text: "hello nice world", id: 3 },
  ]);
  return dir;
}

API Reference: LanceDB from langchain/vectorstores/lancedb; OpenAIEmbeddings from langchain/embeddings/openai
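A similaritySearch call embeds the query string and returns the stored rows whose vectors are closest to the query vector. The following is a minimal, self-contained sketch of that ranking step using cosine similarity over plain arrays; it only illustrates the concept and is not how LanceDB actually stores or indexes vectors. Tiny 2-dimensional vectors stand in for real 1536-dimensional embeddings.

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored rows by similarity to the query vector and keep the top k.
function similaritySearchByVector(rows, queryVector, k) {
  return rows
    .map((row) => ({ ...row, score: cosineSimilarity(row.vector, queryVector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

// Tiny stand-ins for embedded documents.
const rows = [
  { text: "Hello world", vector: [1, 0] },
  { text: "Bye bye", vector: [0, 1] },
  { text: "hello nice world", vector: [0.9, 0.1] },
];

console.log(similaritySearchByVector(rows, [1, 0.05], 1)[0].text); // Hello world
```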
Page Title: Milvus | 🦜️🔗 Langchain
Milvus

Milvus is a vector database built for embeddings similarity search and AI applications.

Compatibility: Only available on Node.js.

Setup

1. Run a Milvus instance with Docker on your computer (see the Milvus docs).

2. Install the Milvus Node.js SDK with one of:

npm install -S @zilliz/milvus2-sdk-node
yarn add @zilliz/milvus2-sdk-node
pnpm add @zilliz/milvus2-sdk-node

3. Set environment variables for Milvus before running the
code:

3.1 OpenAI

export OPENAI_API_KEY=YOUR_OPENAI_API_KEY_HERE
export MILVUS_URL=YOUR_MILVUS_URL_HERE # for example http://localhost:19530

3.2 Azure OpenAI

export AZURE_OPENAI_API_KEY=YOUR_AZURE_OPENAI_API_KEY_HERE
export AZURE_OPENAI_API_INSTANCE_NAME=YOUR_AZURE_OPENAI_INSTANCE_NAME_HERE
export AZURE_OPENAI_API_DEPLOYMENT_NAME=YOUR_AZURE_OPENAI_DEPLOYMENT_NAME_HERE
export AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME=YOUR_AZURE_OPENAI_COMPLETIONS_DEPLOYMENT_NAME_HERE
export AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=YOUR_AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME_HERE
export AZURE_OPENAI_API_VERSION=YOUR_AZURE_OPENAI_API_VERSION_HERE
export AZURE_OPENAI_BASE_PATH=YOUR_AZURE_OPENAI_BASE_PATH_HERE
export MILVUS_URL=YOUR_MILVUS_URL_HERE # for example http://localhost:19530

Usage

Index and query docs:

import { Milvus } from "langchain/vectorstores/milvus";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// text sample from Godel, Escher, Bach
const vectorStore = await Milvus.fromTexts(
  [
    "Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious Little Harmonic Labyrinth of the dreaded Majotaur?",
    "Achilles: Yiikes! What is that?",
    "Tortoise: They say-although I person never believed it myself-that an I Majotaur has created a tiny labyrinth sits in a pit in the middle of it, waiting innocent victims to get lost in its fears complexity. Then, when they wander and dazed into the center, he laughs and laughs at them-so hard, that he laughs them to death!",
    "Achilles: Oh, no!",
    "Tortoise: But it's only a myth. Courage, Achilles.",
  ],
  [{ id: 2 }, { id: 1 }, { id: 3 }, { id: 4 }, { id: 5 }],
  new OpenAIEmbeddings(),
  { collectionName: "goldel_escher_bach" }
);

// or alternatively from docs
const vectorStore = await Milvus.fromDocuments(docs, new OpenAIEmbeddings(), {
  collectionName: "goldel_escher_bach",
});

const response = await vectorStore.similaritySearch("scared", 2);

Query docs from an existing collection:

import { Milvus } from "langchain/vectorstores/milvus";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const vectorStore = await Milvus.fromExistingCollection(new OpenAIEmbeddings(), {
  collectionName: "goldel_escher_bach",
});

const response = await vectorStore.similaritySearch("scared", 2);
Run a Milvus instance with Docker on your computer (see the Milvus docs).
Install the Milvus Node.js SDK. |
npm install -S @zilliz/milvus2-sdk-node
# or with Yarn
yarn add @zilliz/milvus2-sdk-node
# or with pnpm
pnpm add @zilliz/milvus2-sdk-node
Set up environment variables for Milvus before running the code.
3.1 OpenAI
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY_HERE
export MILVUS_URL=YOUR_MILVUS_URL_HERE # for example http://localhost:19530
3.2 Azure OpenAI
export AZURE_OPENAI_API_KEY=YOUR_AZURE_OPENAI_API_KEY_HERE
export AZURE_OPENAI_API_INSTANCE_NAME=YOUR_AZURE_OPENAI_INSTANCE_NAME_HERE
export AZURE_OPENAI_API_DEPLOYMENT_NAME=YOUR_AZURE_OPENAI_DEPLOYMENT_NAME_HERE
export AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME=YOUR_AZURE_OPENAI_COMPLETIONS_DEPLOYMENT_NAME_HERE
export AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=YOUR_AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME_HERE
export AZURE_OPENAI_API_VERSION=YOUR_AZURE_OPENAI_API_VERSION_HERE
export AZURE_OPENAI_BASE_PATH=YOUR_AZURE_OPENAI_BASE_PATH_HERE
export MILVUS_URL=YOUR_MILVUS_URL_HERE # for example http://localhost:19530
Index and query docs

import { Milvus } from "langchain/vectorstores/milvus";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// text sample from Godel, Escher, Bach
const vectorStore = await Milvus.fromTexts(
  [
    "Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious Little Harmonic Labyrinth of the dreaded Majotaur?",
    "Achilles: Yiikes! What is that?",
    "Tortoise: They say-although I person never believed it myself-that an I Majotaur has created a tiny labyrinth sits in a pit in the middle of it, waiting innocent victims to get lost in its fears complexity. Then, when they wander and dazed into the center, he laughs and laughs at them-so hard, that he laughs them to death!",
    "Achilles: Oh, no!",
    "Tortoise: But it's only a myth. Courage, Achilles.",
  ],
  [{ id: 2 }, { id: 1 }, { id: 3 }, { id: 4 }, { id: 5 }],
  new OpenAIEmbeddings(),
  { collectionName: "goldel_escher_bach" }
);

// or alternatively from docs (declare only one of the two vectorStore
// variables in the same scope):
// const vectorStore = await Milvus.fromDocuments(docs, new OpenAIEmbeddings(), {
//   collectionName: "goldel_escher_bach",
// });

const response = await vectorStore.similaritySearch("scared", 2);
Query docs from existing collection

import { Milvus } from "langchain/vectorstores/milvus";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const vectorStore = await Milvus.fromExistingCollection(
  new OpenAIEmbeddings(),
  { collectionName: "goldel_escher_bach" }
);
const response = await vectorStore.similaritySearch("scared", 2);
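Under the hood, similaritySearch embeds the query string and asks Milvus for the stored vectors nearest to it. As a rough mental model only (Milvus performs this search server-side over an index; the document shape and the Euclidean distance metric here are illustrative assumptions, not LangChain internals), the operation can be sketched in memory like this:

```typescript
// In-memory sketch of a top-k similarity search over stored embeddings.
interface StoredDoc {
  pageContent: string;
  embedding: number[];
}

// Euclidean (L2) distance between two equal-length vectors.
function l2Distance(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((s, x, i) => s + (x - b[i]) ** 2, 0));
}

// Return the k stored documents whose embeddings are closest to the query.
function similaritySearch(
  queryEmbedding: number[],
  store: StoredDoc[],
  k: number
): StoredDoc[] {
  return [...store]
    .sort(
      (a, b) =>
        l2Distance(queryEmbedding, a.embedding) -
        l2Distance(queryEmbedding, b.embedding)
    )
    .slice(0, k);
}
```

A real vector database avoids this linear scan by using an approximate nearest-neighbor index, but the contract — query vector in, k nearest documents out — is the same.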
MongoDB Atlas
Compatibility: Only available on Node.js.

LangChain.js supports MongoDB Atlas as a vector store, and supports both standard similarity search and maximal marginal relevance search, which retrieves a set of the documents most similar to the input, then reranks them to optimize for diversity.

Setup

Installation

First, add the Node MongoDB SDK to your project:

npm install -S mongodb
# or with Yarn
yarn add mongodb
# or with pnpm
pnpm add mongodb

Initial Cluster Configuration

Next, you'll need to create a MongoDB Atlas cluster. Navigate to the MongoDB Atlas website and create an account if you don't already have one.

Create and name a cluster when prompted, then find it under Database. Select Collections and create either a blank collection or one from the provided sample data.

Creating an Index

After configuring your cluster, you'll need to create an index on the collection field you want to search over.

Go to the Search tab within your cluster, then select Create Search Index. Using the JSON editor option, add an index to the collection you wish to use. The field name ("embedding" below is the default) should match the name of the field within your collection that contains the embeddings:

{
  "mappings": {
    "fields": {
      "embedding": [
        {
          "dimensions": 1024,
          "similarity": "euclidean",
          "type": "knnVector"
        }
      ]
    }
  }
}

The dimensions property should match the dimensionality of the embeddings you are using. For example, Cohere embeddings have 1024 dimensions, and OpenAI embeddings have 1536.

Note: By default the vector store expects an index name of default, an indexed collection field name of embedding, and a raw text field name of text.
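Because a mismatch between your embedding dimensionality and the index's dimensions setting only surfaces as failed or empty searches later, it can be worth checking vector lengths before ingesting. A small hypothetical guard (assertDimensions is not part of LangChain or the MongoDB driver; it is just an illustration):

```typescript
// Hypothetical pre-ingestion check: every embedding must have the
// dimensionality declared in the Atlas search index (e.g. 1024 for Cohere,
// 1536 for OpenAI embeddings).
function assertDimensions(embeddings: number[][], expected: number): void {
  for (const [i, v] of embeddings.entries()) {
    if (v.length !== expected) {
      throw new Error(
        `embedding ${i} has ${v.length} dimensions, index expects ${expected}`
      );
    }
  }
}
```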
You should initialize the vector store with field names matching your collection schema as shown below. Finally, proceed to build the index.

Usage

Ingestion

import { MongoDBAtlasVectorSearch } from "langchain/vectorstores/mongodb_atlas";
import { CohereEmbeddings } from "langchain/embeddings/cohere";
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");
const namespace = "langchain.test";
const [dbName, collectionName] = namespace.split(".");
const collection = client.db(dbName).collection(collectionName);

await MongoDBAtlasVectorSearch.fromTexts(
  ["Hello world", "Bye bye", "What's this?"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new CohereEmbeddings(),
  {
    collection,
    indexName: "default", // The name of the Atlas search index. Defaults to "default"
    textKey: "text", // The name of the collection field containing the raw content. Defaults to "text"
    embeddingKey: "embedding", // The name of the collection field containing the embedded text. Defaults to "embedding"
  }
);

await client.close();

Search

import { MongoDBAtlasVectorSearch } from "langchain/vectorstores/mongodb_atlas";
import { CohereEmbeddings } from "langchain/embeddings/cohere";
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");
const namespace = "langchain.test";
const [dbName, collectionName] = namespace.split(".");
const collection = client.db(dbName).collection(collectionName);

const vectorStore = new MongoDBAtlasVectorSearch(new CohereEmbeddings(), {
  collection,
  indexName: "default", // The name of the Atlas search index. Defaults to "default"
  textKey: "text", // The name of the collection field containing the raw content. Defaults to "text"
  embeddingKey: "embedding", // The name of the collection field containing the embedded text. Defaults to "embedding"
});

const resultOne = await vectorStore.similaritySearch("Hello world", 1);
console.log(resultOne);
await client.close();

Maximal marginal relevance

import { MongoDBAtlasVectorSearch } from "langchain/vectorstores/mongodb_atlas";
import { CohereEmbeddings } from "langchain/embeddings/cohere";
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");
const namespace = "langchain.test";
const [dbName, collectionName] = namespace.split(".");
const collection = client.db(dbName).collection(collectionName);

const vectorStore = new MongoDBAtlasVectorSearch(new CohereEmbeddings(), {
  collection,
  indexName: "default",
  textKey: "text",
  embeddingKey: "embedding",
});

const resultOne = await vectorStore.maxMarginalRelevanceSearch("Hello world", {
  k: 4,
  fetchK: 20, // The number of documents to return on initial fetch
});
console.log(resultOne);

// Using MMR in a vector store retriever
const retriever = await vectorStore.asRetriever({
  searchType: "mmr",
  searchKwargs: {
    fetchK: 20,
    lambda: 0.1,
  },
});
const retrieverOutput = await retriever.getRelevantDocuments("Hello world");
console.log(retrieverOutput);
await client.close();
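Conceptually, MMR first fetches fetchK candidates by plain similarity, then greedily picks the candidate that best balances relevance to the query against redundancy with the documents already selected, weighted by lambda. A self-contained sketch of that scoring loop (an illustration of the general algorithm, not LangChain's internal code; cosine similarity and the exact lambda weighting are assumptions of this sketch):

```typescript
// Sketch of maximal marginal relevance (MMR) reranking over embedding vectors.
// `lambda` near 1 favors pure relevance; near 0 favors diversity.
function dot(a: number[], b: number[]): number {
  return a.reduce((s, x, i) => s + x * b[i], 0);
}

function cosine(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

// Returns the indices of the k candidates chosen by the greedy MMR rule.
function mmr(
  query: number[],
  candidates: number[][],
  k: number,
  lambda = 0.5
): number[] {
  const selected: number[] = [];
  const remaining = candidates.map((_, i) => i);
  while (selected.length < k && remaining.length > 0) {
    let bestIdx = -1;
    let bestScore = -Infinity;
    for (const i of remaining) {
      const relevance = cosine(query, candidates[i]);
      // Redundancy: highest similarity to anything already selected.
      const redundancy =
        selected.length === 0
          ? 0
          : Math.max(...selected.map((j) => cosine(candidates[i], candidates[j])));
      const score = lambda * relevance - (1 - lambda) * redundancy;
      if (score > bestScore) {
        bestScore = score;
        bestIdx = i;
      }
    }
    selected.push(bestIdx);
    remaining.splice(remaining.indexOf(bestIdx), 1);
  }
  return selected;
}
```

With a low lambda (like the 0.1 in the retriever example above), a near-duplicate of an already-selected document scores poorly even if it is highly relevant, which is why MMR results are more varied than plain similarity search.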
Get startedIntroductionInstallationQuickstartModulesModel I/OData connectionDocument loadersDocument transformersText embedding modelsVector storesIntegrationsMemoryAnalyticDBChromaElasticsearchFaissHNSWLibLanceDBMilvusMongoDB AtlasMyScaleOpenSearchPineconePrismaQdrantRedisSingleStoreSupabaseTigrisTypeORMTypesenseUSearchVectaraWeaviateXataZepRetrieversExperimentalCaching embeddingsChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesData connectionVector storesIntegrationsMongoDB AtlasOn this pageMongoDB AtlasCompatibilityOnly available on Node.js.LangChain.js supports MongoDB Atlas as a vector store, and supports both standard similarity search and maximal marginal relevance search, |
4e9727215e95-1476 | which takes a combination of documents are most similar to the inputs, then reranks and optimizes for diversity.SetupInstallationFirst, add the Node MongoDB SDK to your project:npmYarnpnpmnpm install -S mongodbyarn add mongodbpnpm add mongodbInitial Cluster ConfigurationNext, you'll need create a MongoDB Atlas cluster. Navigate to the MongoDB Atlas website and create an account if you don't already have one.Create and name a cluster when prompted, then find it under Database. Select Collections and create either a blank collection or one from the provided sample data.Creating an IndexAfter configuring your cluster, you'll need to create an index on the collection field you want to search over.Go to the Search tab within your cluster, then select Create Search Index. Using the JSON editor option, add an index to the collection you wish to use. { "mappings": { "fields": { // Default value, should match the name of the field within your collection that contains embeddings "embedding": [ { "dimensions": 1024, "similarity": "euclidean", "type": "knnVector" } ] } }}The dimensions property should match the dimensionality of the embeddings you are using. For example, Cohere embeddings have 1024 dimensions, and OpenAI embeddings have 1536.Note: By default the vector store expects an index name of default, an indexed collection field name of embedding, and a raw text field name of text. |
4e9727215e95-1477 | You should initialize the vector store with field names matching your collection schema as shown below.Finally, proceed to build the index.UsageIngestionimport { MongoDBAtlasVectorSearch } from "langchain/vectorstores/mongodb_atlas";import { CohereEmbeddings } from "langchain/embeddings/cohere";import { MongoClient } from "mongodb";const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");const namespace = "langchain.test";const [dbName, collectionName] = namespace.split(". ");const collection = client.db(dbName).collection(collectionName);await MongoDBAtlasVectorSearch.fromTexts( ["Hello world", "Bye bye", "What's this? "], [{ id: 2 }, { id: 1 }, { id: 3 }], new CohereEmbeddings(), { collection, indexName: "default", // The name of the Atlas search index. Defaults to "default" textKey: "text", // The name of the collection field containing the raw content. Defaults to "text" embeddingKey: "embedding", // The name of the collection field containing the embedded text. |
4e9727215e95-1478 | Defaults to "embedding" });await client.close();API Reference:MongoDBAtlasVectorSearch from langchain/vectorstores/mongodb_atlasCohereEmbeddings from langchain/embeddings/cohereSearchimport { MongoDBAtlasVectorSearch } from "langchain/vectorstores/mongodb_atlas";import { CohereEmbeddings } from "langchain/embeddings/cohere";import { MongoClient } from "mongodb";const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");const namespace = "langchain.test";const [dbName, collectionName] = namespace.split(". ");const collection = client.db(dbName).collection(collectionName);const vectorStore = new MongoDBAtlasVectorSearch(new CohereEmbeddings(), { collection, indexName: "default", // The name of the Atlas search index. Defaults to "default" textKey: "text", // The name of the collection field containing the raw content. Defaults to "text" embeddingKey: "embedding", // The name of the collection field containing the embedded text. |
4e9727215e95-1479 | Defaults to "embedding"});const resultOne = await vectorStore.similaritySearch("Hello world", 1);console.log(resultOne);await client.close();API Reference:MongoDBAtlasVectorSearch from langchain/vectorstores/mongodb_atlasCohereEmbeddings from langchain/embeddings/cohereMaximal marginal relevanceimport { MongoDBAtlasVectorSearch } from "langchain/vectorstores/mongodb_atlas";import { CohereEmbeddings } from "langchain/embeddings/cohere";import { MongoClient } from "mongodb";const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");const namespace = "langchain.test";const [dbName, collectionName] = namespace.split(". ");const collection = client.db(dbName).collection(collectionName);const vectorStore = new MongoDBAtlasVectorSearch(new CohereEmbeddings(), { collection, indexName: "default", // The name of the Atlas search index. Defaults to "default" textKey: "text", // The name of the collection field containing the raw content. Defaults to "text" embeddingKey: "embedding", // The name of the collection field containing the embedded text. |
4e9727215e95-1480 | Defaults to "embedding"});const resultOne = await vectorStore.maxMarginalRelevanceSearch("Hello world", { k: 4, fetchK: 20, // The number of documents to return on initial fetch});console.log(resultOne);// Using MMR in a vector store retrieverconst retriever = await vectorStore.asRetriever({ searchType: "mmr", searchKwargs: { fetchK: 20, lambda: 0.1, },});const retrieverOutput = await retriever.getRelevantDocuments("Hello world");console.log(retrieverOutput);await client.close();API Reference:MongoDBAtlasVectorSearch from langchain/vectorstores/mongodb_atlasCohereEmbeddings from langchain/embeddings/coherePreviousMilvusNextMyScaleSetupInstallationInitial Cluster ConfigurationCreating an IndexUsageIngestionSearchMaximal marginal relevance
ModulesData connectionVector storesIntegrationsMongoDB AtlasOn this pageMongoDB AtlasCompatibilityOnly available on Node.js.LangChain.js supports MongoDB Atlas as a vector store, and supports both standard similarity search and maximal marginal relevance search, |
4e9727215e95-1481 | which takes a combination of documents are most similar to the inputs, then reranks and optimizes for diversity.SetupInstallationFirst, add the Node MongoDB SDK to your project:npmYarnpnpmnpm install -S mongodbyarn add mongodbpnpm add mongodbInitial Cluster ConfigurationNext, you'll need create a MongoDB Atlas cluster. Navigate to the MongoDB Atlas website and create an account if you don't already have one.Create and name a cluster when prompted, then find it under Database. Select Collections and create either a blank collection or one from the provided sample data.Creating an IndexAfter configuring your cluster, you'll need to create an index on the collection field you want to search over.Go to the Search tab within your cluster, then select Create Search Index. Using the JSON editor option, add an index to the collection you wish to use. { "mappings": { "fields": { // Default value, should match the name of the field within your collection that contains embeddings "embedding": [ { "dimensions": 1024, "similarity": "euclidean", "type": "knnVector" } ] } }}The dimensions property should match the dimensionality of the embeddings you are using. For example, Cohere embeddings have 1024 dimensions, and OpenAI embeddings have 1536.Note: By default the vector store expects an index name of default, an indexed collection field name of embedding, and a raw text field name of text. |
4e9727215e95-1482 | You should initialize the vector store with field names matching your collection schema as shown below.Finally, proceed to build the index.UsageIngestionimport { MongoDBAtlasVectorSearch } from "langchain/vectorstores/mongodb_atlas";import { CohereEmbeddings } from "langchain/embeddings/cohere";import { MongoClient } from "mongodb";const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");const namespace = "langchain.test";const [dbName, collectionName] = namespace.split(". ");const collection = client.db(dbName).collection(collectionName);await MongoDBAtlasVectorSearch.fromTexts( ["Hello world", "Bye bye", "What's this? "], [{ id: 2 }, { id: 1 }, { id: 3 }], new CohereEmbeddings(), { collection, indexName: "default", // The name of the Atlas search index. Defaults to "default" textKey: "text", // The name of the collection field containing the raw content. Defaults to "text" embeddingKey: "embedding", // The name of the collection field containing the embedded text. |
4e9727215e95-1483 | Defaults to "embedding" });await client.close();API Reference:MongoDBAtlasVectorSearch from langchain/vectorstores/mongodb_atlasCohereEmbeddings from langchain/embeddings/cohereSearchimport { MongoDBAtlasVectorSearch } from "langchain/vectorstores/mongodb_atlas";import { CohereEmbeddings } from "langchain/embeddings/cohere";import { MongoClient } from "mongodb";const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");const namespace = "langchain.test";const [dbName, collectionName] = namespace.split(". ");const collection = client.db(dbName).collection(collectionName);const vectorStore = new MongoDBAtlasVectorSearch(new CohereEmbeddings(), { collection, indexName: "default", // The name of the Atlas search index. Defaults to "default" textKey: "text", // The name of the collection field containing the raw content. Defaults to "text" embeddingKey: "embedding", // The name of the collection field containing the embedded text. |
4e9727215e95-1484 | Defaults to "embedding"});const resultOne = await vectorStore.similaritySearch("Hello world", 1);console.log(resultOne);await client.close();API Reference:MongoDBAtlasVectorSearch from langchain/vectorstores/mongodb_atlasCohereEmbeddings from langchain/embeddings/cohereMaximal marginal relevanceimport { MongoDBAtlasVectorSearch } from "langchain/vectorstores/mongodb_atlas";import { CohereEmbeddings } from "langchain/embeddings/cohere";import { MongoClient } from "mongodb";const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "");const namespace = "langchain.test";const [dbName, collectionName] = namespace.split(". ");const collection = client.db(dbName).collection(collectionName);const vectorStore = new MongoDBAtlasVectorSearch(new CohereEmbeddings(), { collection, indexName: "default", // The name of the Atlas search index. Defaults to "default" textKey: "text", // The name of the collection field containing the raw content. Defaults to "text" embeddingKey: "embedding", // The name of the collection field containing the embedded text. |
4e9727215e95-1485 | Defaults to "embedding"});const resultOne = await vectorStore.maxMarginalRelevanceSearch("Hello world", { k: 4, fetchK: 20, // The number of documents to fetch before passing to the MMR algorithm});console.log(resultOne);// Using MMR in a vector store retrieverconst retriever = await vectorStore.asRetriever({ searchType: "mmr", searchKwargs: { fetchK: 20, lambda: 0.1, },});const retrieverOutput = await retriever.getRelevantDocuments("Hello world");console.log(retrieverOutput);await client.close();API Reference:MongoDBAtlasVectorSearch from langchain/vectorstores/mongodb_atlasCohereEmbeddings from langchain/embeddings/cohere
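The lambda value in the retriever example above controls the relevance/diversity tradeoff. As a rough illustration (a minimal sketch, not LangChain's internal implementation), MMR repeatedly picks the candidate maximizing `lambda * sim(query, doc) - (1 - lambda) * max(sim(doc, alreadySelected))`:

```typescript
// Sketch of the maximal marginal relevance (MMR) selection step.
// Illustrative only -- not LangChain's internal code.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function cosineSim(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

// Pick `k` of `candidates` (the fetchK vectors fetched initially),
// balancing query relevance against redundancy with already-picked results.
function mmr(
  query: number[],
  candidates: number[][],
  k: number,
  lambda = 0.5
): number[] {
  const selected: number[] = [];
  const remaining = candidates.map((_, i) => i);
  while (selected.length < k && remaining.length > 0) {
    let bestIdx = -1;
    let bestScore = -Infinity;
    for (const i of remaining) {
      const relevance = cosineSim(query, candidates[i]);
      // Redundancy: similarity to the closest already-selected document.
      const redundancy = selected.length
        ? Math.max(
            ...selected.map((j) => cosineSim(candidates[i], candidates[j]))
          )
        : 0;
      const score = lambda * relevance - (1 - lambda) * redundancy;
      if (score > bestScore) {
        bestScore = score;
        bestIdx = i;
      }
    }
    selected.push(bestIdx);
    remaining.splice(remaining.indexOf(bestIdx), 1);
  }
  return selected; // indices into `candidates`, in selection order
}
```

With lambda near 1 this reduces to plain similarity ordering; near 0 it heavily penalizes redundancy, which is why the retriever example sets lambda: 0.1 to favor diverse results.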
Page Title: MyScale | 🦜️🔗 Langchain
Paragraphs: |
4e9727215e95-1499 | ModulesData connectionVector storesIntegrationsMyScaleOn this pageMyScaleCompatibilityOnly available on Node.js.MyScale is an emerging AI database that harmonizes the power of vector search and SQL analytics, providing a managed, efficient, and responsive experience.SetupLaunch a cluster through MyScale's Web Console. See MyScale's official documentation for more information.After launching a cluster, view your Connection Details from your cluster's Actions menu.
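The values from the Connection Details pane can be kept in environment variables rather than hard-coded. A minimal sketch, with illustrative placeholder values and variable names (the names are a convention, not mandated by LangChain):

```shell
# Hypothetical env vars holding the MyScale cluster credentials shown in
# the Web Console's Connection Details pane (values are placeholders).
export MYSCALE_HOST="your-cluster.myscale.com"
export MYSCALE_PORT="8443"
export MYSCALE_USERNAME="your-username"
export MYSCALE_PASSWORD="your-password"
```

Reading credentials from the environment (e.g. via `process.env`) keeps secrets out of source control, matching the `process.env.MONGODB_ATLAS_URI` pattern used in the MongoDB Atlas examples above.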