| id | text |
|---|---|
4e9727215e95-1200 | Text embedding models: The Embeddings class is designed for interfacing with text embedding models. There are many embedding model providers (OpenAI, Cohere, Hugging Face, etc.); this class provides a standard interface for all of them. Embeddings create a vector representation of a piece of text.... |
4e9727215e95-1201 | Providers sometimes use different embedding methods for queries versus documents, so the Embeddings class exposes both an embedQuery and an embedDocuments method.
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
/* Create instance */
const embeddings = new OpenAIEmbeddings();
/* Embed queries */
const res = await ... |
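Since calling a real provider requires an API key, the shape of this two-method interface can be sketched with a deterministic stand-in. The `FakeEmbeddings` class and its bucket-sum vectorizer below are hypothetical illustrations for this document, not LangChain classes:

```typescript
// Minimal sketch of the Embeddings interface: embedQuery handles a
// single string, embedDocuments handles a batch. Names here are
// illustrative, not the real LangChain exports.
interface Embeddings {
  embedQuery(text: string): Promise<number[]>;
  embedDocuments(texts: string[]): Promise<number[][]>;
}

class FakeEmbeddings implements Embeddings {
  // Deterministic toy vector: character codes summed into 4 buckets,
  // so the same text always maps to the same vector.
  private vectorize(text: string): number[] {
    const v = [0, 0, 0, 0];
    for (let i = 0; i < text.length; i++) v[i % 4] += text.charCodeAt(i);
    return v;
  }
  async embedQuery(text: string): Promise<number[]> {
    return this.vectorize(text);
  }
  async embedDocuments(texts: string[]): Promise<number[][]> {
    return texts.map((t) => this.vectorize(t));
  }
}
```

A real provider would return high-dimensional floats (as in the OpenAI output excerpted below), but the calling convention is the same.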
4e9727215e95-1202 | 0.001373733, -0.015552171, 0.019534737, -0.016169721, 0.007316074, 0.008273906, 0.011418369, -0.01390117, -0.033347685, 0.011248227, 0.0042503807, -0.012792102, -0.0014595914, 0.028356876, 0.025407761, 0.00076445413, -0.016308354, 0.017455231, -0.016396577, 0.008557475, -0.03312083, 0.031... |
4e9727215e95-1203 | 0.0030656934, -0.0113742575, -0.0020322427, 0.005069579, 0.0022701253, 0.036095154, -0.027449455, -0.008475555, 0.015388331, 0.018917186, 0.0018999106, -0.003349262, 0.020895867, -0.014480911, -0.025042271, 0.012546342, 0.013850759, 0.0069253794, 0.008588983, -0.015199285, -0.0029585673, -0.00... |
4e9727215e95-1204 | 0.002720268, 0.025088841, -0.012153786, 0.012928754, 0.013054766, -0.010395928, -0.0035566676, 0.0040008575, 0.008600268, -0.020678446, -0.0019106456, 0.012178987, -0.019241918, 0.030444318, -0.03102397, 0.0035692686, -0.007749692, -0.00604854, -0.01781799, 0.004860884, -0.0156127... |
4e9727215e95-1205 | -0.033241767, 0.031200387, 0.03238489, -0.0212833, 0.0032416396, 0.005443686, -0.007749692, 0.0060201874, |
4e9727215e95-1206 | 0.006281661, 0.016923312, 0.003528315, 0.0076740854, -0.01881348, 0.026109532, 0.024660403, 0.005472039, -0.0016712243, -0.0048136297, 0.018397642, 0.003011669, -0.011385117, -0.0020193304, 0.005138109, 0.0022335495, 0.03603922, -0.027495656, -0.008575066, 0.015436378, 0.018851284... |
4e9727215e95-1207 | 0.0077763423, -0.0260478, -0.0114384955, -0.0022683728, -0.016509168, 0.041797023, 0.01787183, 0.00552271, -0.0049789557, 0.018146982, -0.01542166, 0.033752076, 0.006112323, 0.023872782, -0.016535373, -0.006623321, 0.016116094, -0.0061090477, -0.0044155475, -0.016627092, -0.022... |
4e9727215e95-1208 | 0.017688395, 0.015225122, 0.0046186363, -0.0045007137, 0.024265857, 0.03244183, 0.0038848957, -0.03244183, -0.018893827, -0.0018065092, 0.023440398, -0.021763276, 0.015120302, |
4e9727215e95-1209 | 0.01568371, -0.010861984, 0.011739853, -0.024501702, -0.005214801, 0.022955606, 0.001315165, -0.00492327, 0.0020358032, -0.003468891, -0.031079166, 0.0055259857, 0.0028547104, 0.012087069, 0.007992534, -0.0076256637, 0.008110457, 0.002998838, -0.024265857, 0.006977089, -0.015185... |
4e9727215e95-1211 | import { OpenAIEmbeddings } from "langchain/embeddings/openai";
/* Create instance */
const embeddings = new OpenAIEmbeddings();
/* Embed queries */
const res = await embeddings.embedQuery("Hello world");
/* [ -0.004845875, 0.004899438, -0.016358767, -0.024475135, -0.017341806, 0.012571548, -0.019156644, 0.009036... |
4e9727215e95-1212 | -0.016169721, 0.007316074, 0.008273906, 0.011418369, -0.01390117, -0.033347685, 0.011248227, 0.0042503807, -0.012792102, -0.0014595914, 0.028356876, 0.025407761, 0.00076445413, -0.016308354, 0.017455231, -0.016396577, 0.008557475, -0.03312083, 0.031104341, 0.032389853, -0.02132437, 0.0... |
4e9727215e95-1213 | 0.015388331, 0.018917186, 0.0018999106, |
4e9727215e95-1214 | 0.003349262, 0.020895867, -0.014480911, -0.025042271, 0.012546342, 0.013850759, 0.0069253794, 0.008588983, -0.015199285, -0.0029585673, -0.008759124, 0.016749462, 0.004111747, -0.04804285, ... 1436 more items]*//* Embed documents */const documentRes = await embeddings.embedDocuments(["Hello world", "... |
4e9727215e95-1215 | -0.0019106456, 0.012178987, -0.019241918, 0.030444318, -0.03102397, 0.0035692686, -0.007749692, -0.00604854, -0.01781799, 0.004860884, -0.015612794, 0.0014097509, -0.015637996, 0.019443536, -0.01612944, 0.0072960514, 0.008316742, 0.011548932, -0.013987249, -0.03336778, 0.01134... |
4e9727215e95-1216 | 0.003528315, 0.0076740854, -0.01881348, 0.026109532, 0.024660403, 0.005472039, -0.0016712243, -0.0048136297, 0.018397642, |
4e9727215e95-1217 | 0.003011669, -0.011385117, -0.0020193304, 0.005138109, 0.0022335495, 0.03603922, -0.027495656, -0.008575066, 0.015436378, 0.018851284, 0.0018019609, -0.0034338066, 0.02094307, -0.014503895, -0.024950229, 0.012632628, 0.013735226, 0.0069936244, 0.008575066, -0.015196957, -0.003054197... |
4e9727215e95-1218 | -0.01542166, 0.033752076, 0.006112323, 0.023872782, -0.016535373, -0.006623321, 0.016116094, -0.0061090477, -0.0044155475, -0.016627092, -0.022077737, -0.0009286407, -0.02156674, 0.011890532, -0.026283644, 0.02630985, 0.011942943, -0.026126415, -0.018264906, -0.014045896, -0.02... |
4e9727215e95-1219 | -0.021763276, 0.015120302, -0.01568371, -0.010861984, 0.011739853, -0.024501702, -0.005214801, 0.022955606, 0.001315165, -0.00492327, 0.0020358032, -0.003468891, -0.031079166, |
4e9727215e95-1220 | 0.0055259857, 0.0028547104, 0.012087069, 0.007992534, -0.0076256637, 0.008110457, 0.002998838, -0.024265857, 0.006977089, -0.015185814, -0.0069115767, 0.006466091, -0.029428247, -0.036241557, 0.036713246, 0.032284595, -0.0021144184, -0.014255536, 0.011228855, -0.027227025, -0.0216... |
4e9727215e95-1223 | Dealing with API errors: If the model provider returns an error from their API, LangChain will by default retry up to 6 times with exponential backoff. This enables error recovery without any additional effort from you. To change this behavior, pass a maxRetries option when you instantiate the model.... |
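The retry behavior described above can be sketched generically. `withRetries` and `baseDelayMs` below are hypothetical names for this illustration; only `maxRetries` is the actual LangChain option:

```typescript
// Sketch of retry-with-exponential-backoff, similar in spirit to what
// LangChain does internally when a provider call fails.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 6,
  baseDelayMs = 10
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: wait base * 2^attempt before the next try.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

In LangChain itself you would simply pass the option, e.g. `new OpenAIEmbeddings({ maxRetries: 10 })`, rather than wrapping calls yourself.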
4e9727215e95-1227 | Dealing with rate limits: Some providers have rate limits; if you exceed them, you'll get an error. To help you deal with this, LangChain provides a maxConcurrency option when instantiating an Embeddings model. This option allows you to specify the maximum number of concurrent requests you want to make to the p... |
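The mechanism behind a `maxConcurrency` option can be sketched as a small concurrency limiter: at most `limit` tasks run at once, and the rest wait in a queue. The `pLimit` helper below is a hypothetical illustration, not LangChain's internal implementation:

```typescript
// Concurrency limiter sketch: run() admits at most `limit` tasks at a
// time; additional tasks park on a queue until a slot frees up.
function pLimit(limit: number) {
  let active = 0;
  const queue: (() => void)[] = [];
  return async function run<T>(task: () => Promise<T>): Promise<T> {
    if (active >= limit) {
      // Wait until a finishing task resolves our queued entry.
      await new Promise<void>((resolve) => queue.push(resolve));
    }
    active++;
    try {
      return await task();
    } finally {
      active--;
      queue.shift()?.(); // wake the next waiter, if any
    }
  };
}
```

With LangChain you would instead pass the option directly, e.g. `new OpenAIEmbeddings({ maxConcurrency: 5 })`, and the library queues requests for you.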
4e9727215e95-1230 | Adding a timeout: By default, LangChain will wait indefinitely for a response from the model provider. To add a timeout, pass a timeout option, in milliseconds, when you instantiate the model. For example, for OpenAI:... |
4e9727215e95-1231 | import { OpenAIEmbeddings } from "langchain/embeddings/openai";
const embeddings = new OpenAIEmbeddings({
  timeout: 1000, // 1s timeout
});
/* Embed queries */
const res = await embeddings.embedQuery("Hello world");
console.log(res);
/* Embed documents */
const documentRes = await embeddings.embedDocuments(["Hello world", "By... |
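One way a timeout option like this can work is to race the request against a timer and reject if the timer wins. `withTimeout` below is a hypothetical helper for illustration; real HTTP clients typically also abort the underlying request rather than just abandoning the promise:

```typescript
// Timeout sketch: resolve with the promise's value if it settles in
// time, otherwise reject with a timeout error.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Timed out after ${ms}ms`)),
      ms
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); }
    );
  });
}
```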
4e9727215e95-1232 | If you're using Azure OpenAI, you could initialize your instance like this:
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
const embeddings = new OpenAIEmbeddings({
  azureOpenAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
  azureOpenAIApiVersion: "YOUR-API-VERSION", // In Node.js defaults to process.e... |
4e9727215e95-1233 | For example, here's how you would connect to the domain https://westeurope.api.microsoft.com/openai/deployments/{DEPLOYMENT_NAME}:
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
const embeddings = new OpenAIEmbeddings({
  azureOpenAIApiKey: "YOUR-API-KEY",
  azureOpenAIApiVersion: "YOUR-API-VERSION",
  azur... |
4e9727215e95-1239 | The OpenAIEmbeddings class can also use the OpenAI API on Azure to generate embeddings for a given text. By default it strips newline characters from the text, as recommended by OpenAI, but you can disable this by passing stripNewLines: false to the constructor.
import { OpenAIEmbeddings } from "langchain/embeddings/... |
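The stripNewLines preprocessing described above amounts to replacing newlines with spaces before the text is sent to the API. A minimal sketch (the `stripNewLines` function name here is ours, mirroring the option name):

```typescript
// Replace Unix (\n) and Windows (\r\n) line breaks with single spaces,
// matching OpenAI's recommendation for their embedding endpoint.
function stripNewLines(text: string): string {
  return text.replace(/\r?\n/g, " ");
}
```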
4e9727215e95-1242 | Cohere: The CohereEmbeddings class uses the Cohere API to generate embeddings for a given text.
npm install cohere-ai
yarn add cohere-ai
pnpm add cohere-ai
import { CohereEmbeddings } from "langchain/embeddings/cohere";
const embeddings = new CohereEmbeddings({
  apiKey: "YOUR-API-KEY", // In Node.js defaults to pro... |
4e9727215e95-1243 | Google PaLM: Provide the key as the GOOGLE_PALM_API_KEY environment variable or pass it as the apiKey parameter while instantiating the model.
import { GooglePaLMEmbeddings } from "langchain/embeddings/googlepalm";
export const run = async () => {
  const model = new GooglePaLMEmbeddings({
    apiKey: "<YOUR API KEY>", // or set it in environment var... |
4e9727215e95-1249 | Google Vertex AI: Make sure the API is enabled for the relevant project in your Google Cloud dashboard and that you've authenticated to Google Cloud using one of these methods:
- You are logged into an account (using gcloud auth application-default login) permitted to that project.
- You are running on a machine using a service account that is permitted to the project.
- You have downloaded the credentials for a service account that is permitted to the project and set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of this file.
npm install google-auth-library
yarn add google-auth-library
pnpm add google-auth-library
import { GoogleVertexAIEmbeddings } from "langchain/embeddings/googlevertexai";
export const run = async () => { cons... |
4e9727215e95-1254 | HuggingFace Inference: This Embeddings integration uses the HuggingFace Inference API to generate embeddings for a given text, using by default the sentence-transformers/distilbert-base-nli-mean-tokens model. You can pass a different model name to... |
4e9727215e95-1255 | import { HuggingFaceInferenceEmbeddings } from "langchain/embeddings/hf";
const embeddings = new HuggingFaceInferenceEmbeddings({
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.HUGGINGFACEHUB_API_KEY
}); |
4e9727215e95-1257 | OpenAI: The OpenAIEmbeddings class uses the OpenAI API to generate embeddings for a given text. By default it strips newline characters from the text, as recommended by OpenAI, but you can dis... |
4e9727215e95-1261 | However, you can use any of the backends supported by TensorFlow.js, including GPU and WebAssembly, which will be a lot faster. For Node.js you can use the @tensorflow/tfjs-node package, and for the browser you can use the @tensorflow/tfjs-backend-webgl package. See the TensorFlow.js documentation for more information.... |
4e9727215e95-1264 | TensorFlow: This Embeddings integration runs the embeddings entirely in your browser or Node.js environment, using TensorFlow.js. This means that your data isn't sent to any third party, and you don't need to sign up for any API keys. However, it does require more memory and processing power than the other integrations. |
4e9727215e95-1265 | npm install @tensorflow/tfjs-core@3.6.0 @tensorflow/tfjs-converter@3.6.0 @tensorflow-models/universal-sentence-encoder@1.3.3 @tensorflow/tfjs-backend-cpu
yarn add @tensorflow/tfjs-core@3.6.0 @tensorflow/tfjs-converter@3.6.0 @tensorflow-models/universal-sentence-encoder@1.3.3 @tensorflow/tfjs-backend-cpu
pnpm add @ten... |
4e9727215e95-1266 | Page Title: Vector stores | 🦜️🔗 Langchain
4e9727215e95-1267 | Therefore, it is recommended that you familiarize yourself with the text embedding model interfaces before diving into this. This walkthrough uses a basic, unoptimized implementation called MemoryVectorStore that stores embeddings in memory and does an exact, linear search for the most similar embeddings. Usage: Create a ... |
4e9727215e95-1268 | import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";
// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.loa... |
4e9727215e95-1269 |   : object | undefined
  ): Promise<Document[]>;
  /**
   * Search for the most similar documents to a query,
   * and return their similarity score
   */
  similaritySearchWithScore(
    query: string,
    k = 4,
    filter: object | undefined = undefined
  ): Promise<[object, number][]>;
  /**
   * Turn a VectorStore into a Retrie... |
4e9727215e95-1270 | You can also create a vector store from an existing index; the signature of this method depends on the vector store you're using, so check the documentation of the vector store you're interested in.
abstract class BaseVectorStore implements VectorStore {
  static fromTexts(
    texts: string[],
    metadatas: object[] | objec... |
4e9727215e95-1271 | If you're looking for an open-source full-featured vector database that you can run locally in a docker container, then go for Chroma. If you're looking for an open-source vector database that offers low-latency, local embedding of documents and supports apps on the edge, then go for Zep. If you're looking for an open-source production-ready vector database that you can run locally (in a docker co... |
4e9727215e95-1292 | import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
const vectorStore = await MemoryVectorStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);
const resultOne = await ve... |
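The walkthrough describes MemoryVectorStore as an exact, linear search over embeddings held in memory. That idea can be sketched in a few lines; `MiniVectorStore`, `cosineSimilarity`, and the toy synchronous embedder are hypothetical stand-ins for illustration, not the real LangChain classes (which also handle metadata and Document objects):

```typescript
// Minimal in-memory vector store: keep (text, vector) pairs and rank
// every entry against the query by cosine similarity (linear scan).
type Entry = { text: string; vector: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

class MiniVectorStore {
  private entries: Entry[] = [];
  constructor(private embed: (text: string) => number[]) {}

  addTexts(texts: string[]): void {
    for (const text of texts) {
      this.entries.push({ text, vector: this.embed(text) });
    }
  }

  similaritySearch(query: string, k = 4): string[] {
    const qv = this.embed(query);
    return [...this.entries]
      .map((e) => ({ text: e.text, score: cosineSimilarity(qv, e.vector) }))
      .sort((x, y) => y.score - x.score)
      .slice(0, k)
      .map((e) => e.text);
  }
}
```

A real store would use an async embedding model and usually an approximate index instead of a full scan, but the retrieval contract is the same: embed the query, score it against stored vectors, return the top k.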
4e9727215e95-1293 | Here is the current base interface all vector stores share:
interface VectorStore {
  /**
   * Add more documents to an existing VectorStore.
   * Some providers support additional parameters, e.g. to associate custom ids
   * with added documents or to change the batch size of bulk inserts.
   * Returns an array of ids for th... |
4e9727215e95-1294 | You can create a vector store from a list of Documents, or from a list of texts and their corresponding metadata. You can also create a vector store from an existing index; the signature of this method depends on the vector store you're using, so check the documentation of the vector store you're interested in.
abstract ... |
4e9727215e95-1296 | It is open source and distributed with an Apache-2.0 license.
📄️ Milvus: Milvus is a vector database built for embeddings similarity search and AI applications.
📄️ MongoDB Atlas: Only available on Node.js.
📄️ MyScale: Only available on Node.js.
📄️ OpenSearch: Only available on Node.js.
📄️ Pinecone: Only available on Node.js.
📄️ ... |
4e9727215e95-1297 | Refer to the Supabase blog post for more information.
📄️ Tigris: Tigris makes it easy to build AI applications with vector embeddings.
📄️ TypeORM: To enable vector search in a generic PostgreSQL database, LangChainJS supports using TypeORM with the pgvector Postgres extension.
📄️ Typesense: Vector store that utilizes the Typ... |