Text embedding models
The Embeddings class is designed for interfacing with text embedding models. There are many embedding model providers (OpenAI, Cohere, Hugging Face, etc.), and this class provides a standard interface for all of them.
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
The base Embeddings class in LangChain exposes two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
Get started

Embeddings can be used to create a numerical representation of textual data. This numerical representation is useful because it can be used to find similar documents.
Because some providers embed queries and documents differently, the embedding classes expose separate embedQuery and embedDocuments methods. Below is an example of how to use the OpenAI embeddings.
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

/* Create instance */
const embeddings = new OpenAIEmbeddings();

/* Embed queries */
const res = await embeddings.embedQuery("Hello world");
/*
  [
    -0.004845875,  0.004899438, -0.016358767, -0.024475135,
    -0.017341806,  0.012571548, -0.019156644,  0.009036391,
    ... 1528 more items
  ]
*/

/* Embed documents */
const documentRes = await embeddings.embedDocuments(["Hello world", "Bye bye"]);
/*
  [
    [ -0.0047852774, 0.0048640342, -0.01645707, -0.024395779, ... 1532 more items ],
    [ -0.009446913, -0.013253193, 0.013174579, 0.0057552797, ... 1532 more items ]
  ]
*/
Dealing with API errors
If the model provider returns an error from their API, by default LangChain will retry up to 6 times with exponential backoff. This enables error recovery without any additional effort from you. If you want to change this behavior, you can pass a maxRetries option when you instantiate the model. For example:

import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const model = new OpenAIEmbeddings({ maxRetries: 10 });
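Once the retries are exhausted, the call rejects with the underlying error, so you can handle it with ordinary error handling. A minimal sketch (the log message is illustrative):

import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const model = new OpenAIEmbeddings({ maxRetries: 10 });

try {
  const vector = await model.embedQuery("Hello world");
  console.log(vector.length);
} catch (e) {
  // Reached only after all retries have failed.
  console.error("Embedding request failed after retries:", e);
}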
Dealing with rate limits

Some providers have rate limits. If you exceed the rate limit, you'll get an error. To help you deal with this, LangChain provides a maxConcurrency option when instantiating an Embeddings model. This option allows you to specify the maximum number of concurrent requests you want to make to the provider. If you exceed this number, LangChain will automatically queue up your requests to be sent as previous requests complete.

For example, if you set maxConcurrency: 5, then LangChain will only send 5 requests to the provider at a time. If you send 10 requests, the first 5 will be sent immediately, and the next 5 will be queued up. Once one of the first 5 requests completes, the next request in the queue will be sent.

To use this feature, simply pass maxConcurrency: <number> when you instantiate the Embeddings model. For example:

import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const model = new OpenAIEmbeddings({ maxConcurrency: 5 });
Adding a timeout
By default, LangChain will wait indefinitely for a response from the model provider. If you want to add a timeout, you can pass a timeout option, in milliseconds, when you instantiate the model. For example, for OpenAI: |
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const embeddings = new OpenAIEmbeddings({
  timeout: 1000, // 1s timeout
});

/* Embed queries */
const res = await embeddings.embedQuery("Hello world");
console.log(res);

/* Embed documents */
const documentRes = await embeddings.embedDocuments(["Hello world", "Bye bye"]);
console.log({ documentRes });
API Reference: OpenAIEmbeddings from langchain/embeddings/openai
Currently, the timeout option is only supported for OpenAI models.
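The three how-to options above (maxRetries, maxConcurrency, and timeout) can be combined on a single instance. A minimal sketch with illustrative values:

import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const embeddings = new OpenAIEmbeddings({
  maxRetries: 10,    // retry failed API calls up to 10 times with exponential backoff
  maxConcurrency: 5, // send at most 5 concurrent requests; the rest are queued
  timeout: 1000,     // abandon a request after 1 second (currently OpenAI models only)
});

const res = await embeddings.embedQuery("Hello world");
console.log(res.length); // dimensionality of the returned embedding vector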
Azure OpenAI

The OpenAIEmbeddings class can also use the OpenAI API on Azure to generate embeddings for a given text. By default it strips new line characters from the text, as recommended by OpenAI, but you can disable this by passing stripNewLines: false to the constructor.

For example, if your Azure instance is hosted under https://{MY_INSTANCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}, you could initialize your instance like this:

import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const embeddings = new OpenAIEmbeddings({
  azureOpenAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
  azureOpenAIApiVersion: "YOUR-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
  azureOpenAIApiInstanceName: "{MY_INSTANCE_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
  azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
});

If you'd like to initialize using environment variable defaults, process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME will be used first, then process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME. This can be useful if you're using these embeddings with another Azure OpenAI model.

If your instance is hosted under a domain other than the default openai.azure.com, you'll need to use the alternate AZURE_OPENAI_BASE_PATH environment variable. For example, here's how you would connect to the domain https://westeurope.api.microsoft.com/openai/deployments/{DEPLOYMENT_NAME}:

import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const embeddings = new OpenAIEmbeddings({
  azureOpenAIApiKey: "YOUR-API-KEY",
  azureOpenAIApiVersion: "YOUR-API-VERSION",
  azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}",
  azureOpenAIBasePath: "https://westeurope.api.microsoft.com/openai/deployments", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH
});
Cohere

The CohereEmbeddings class uses the Cohere API to generate embeddings for a given text. First, install the cohere-ai package:

npm install cohere-ai
# or
yarn add cohere-ai
# or
pnpm add cohere-ai
import { CohereEmbeddings } from "langchain/embeddings/cohere";

const embeddings = new CohereEmbeddings({
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.COHERE_API_KEY
  batchSize: 48, // Default value if omitted is 48. Max value is 96
});
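When embedding many documents, batchSize controls how many texts are grouped into a single Cohere API request. A small illustrative sketch (the sample texts are made up, and the exact batching behavior is an assumption based on the constructor option above):

import { CohereEmbeddings } from "langchain/embeddings/cohere";

// Hypothetical corpus of 200 short documents.
const texts = Array.from({ length: 200 }, (_, i) => `Document number ${i}`);

// With batchSize: 96, the texts are assumed to be sent in chunks of at most 96 per request.
const embeddings = new CohereEmbeddings({ batchSize: 96 });
const vectors = await embeddings.embedDocuments(texts);

console.log(vectors.length); // 200 — one embedding per input text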
Google PaLM

The Google PaLM API can be integrated by first installing the required packages:

npm install google-auth-library @google-ai/generativelanguage
# or
yarn add google-auth-library @google-ai/generativelanguage
# or
pnpm add google-auth-library @google-ai/generativelanguage

Create an API key from Google MakerSuite. You can then set the key as the GOOGLE_PALM_API_KEY environment variable or pass it as the apiKey parameter while instantiating the model.

import { GooglePaLMEmbeddings } from "langchain/embeddings/googlepalm";

export const run = async () => {
  const model = new GooglePaLMEmbeddings({
    apiKey: "<YOUR API KEY>", // or set it in environment variable as `GOOGLE_PALM_API_KEY`
    modelName: "models/embedding-gecko-001", // OPTIONAL
  });

  /* Embed queries */
  const res = await model.embedQuery(
    "What would be a good company name for a company that makes colorful socks?"
  );
  console.log({ res });

  /* Embed documents */
  const documentRes = await model.embedDocuments(["Hello world", "Bye bye"]);
  console.log({ documentRes });
};

API Reference: GooglePaLMEmbeddings from langchain/embeddings/googlepalm
Google Vertex AI

The GoogleVertexAIEmbeddings class uses Google's Vertex AI PaLM models to generate embeddings for a given text.

The Vertex AI implementation is meant to be used in Node.js and not directly in a browser, since it requires a service account to use.

Before running this code, you should make sure the Vertex AI API is enabled for the relevant project in your Google Cloud dashboard and that you've authenticated to Google Cloud using one of these methods:

- You are logged into an account (using gcloud auth application-default login) permitted to that project.
- You are running on a machine using a service account that is permitted to the project.
- You have downloaded the credentials for a service account that is permitted to the project and set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of this file.

Install the required package:

npm install google-auth-library
# or
yarn add google-auth-library
# or
pnpm add google-auth-library
import { GoogleVertexAIEmbeddings } from "langchain/embeddings/googlevertexai";

export const run = async () => {
  const model = new GoogleVertexAIEmbeddings();
  const res = await model.embedQuery(
    "What would be a good company name for a company that makes colorful socks?"
  );
  console.log({ res });
};
API Reference: GoogleVertexAIEmbeddings from langchain/embeddings/googlevertexai
Note: The default Google Vertex AI embeddings model, textembedding-gecko, has a different number of dimensions than OpenAI's text-embedding-ada-002 model and may not be supported by all vector store providers.
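Because of this dimension mismatch, it can be worth checking the vector length before committing to a vector store. A minimal sketch, assuming credentials for both providers are configured:

import { GoogleVertexAIEmbeddings } from "langchain/embeddings/googlevertexai";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// Compare the dimensionality of vectors from the two providers; most vector
// stores require all stored vectors to share a single fixed dimension.
const vertexVector = await new GoogleVertexAIEmbeddings().embedQuery("Hello world");
const openAIVector = await new OpenAIEmbeddings().embedQuery("Hello world");

console.log(vertexVector.length); // textembedding-gecko dimension
console.log(openAIVector.length); // text-embedding-ada-002 dimension (1536)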
HuggingFace Inference
HuggingFace InferenceThis Embeddings integration uses the HuggingFace Inference API to generate embeddings for a given text using by default the sentence-transformers/distilbert-base-nli-mean-tokens model. You can pass a different model name to the constructor to use a different model.npmYarnpnpmnpm install @huggingface/inference@1yarn add @huggingface/inference@1pnpm add @huggingface/inference@1import { HuggingFaceInferenceEmbeddings } from "langchain/embeddings/hf";const embeddings = new HuggingFaceInferenceEmbeddings({ apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.HUGGINGFACEHUB_API_KEY});
This Embeddings integration uses the HuggingFace Inference API to generate embeddings for a given text, using the sentence-transformers/distilbert-base-nli-mean-tokens model by default. You can pass a different model name to the constructor to use a different model.
npmYarnpnpm
npm install @huggingface/inference@1
yarn add @huggingface/inference@1
pnpm add @huggingface/inference@1
import { HuggingFaceInferenceEmbeddings } from "langchain/embeddings/hf";const embeddings = new HuggingFaceInferenceEmbeddings({ apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.HUGGINGFACEHUB_API_KEY});
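Since the constructor accepts a different model name, here is a minimal sketch of overriding the default; the model option name and the specific model shown are illustrative assumptions, not a recommendation:

import { HuggingFaceInferenceEmbeddings } from "langchain/embeddings/hf";

const embeddings = new HuggingFaceInferenceEmbeddings({
  apiKey: "YOUR-API-KEY",
  // Assumed option for overriding the default embedding model.
  model: "sentence-transformers/all-MiniLM-L6-v2",
});

const vector = await embeddings.embedQuery("Hello world");
console.log(vector.length);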
Page Title: OpenAI | 🦜️🔗 Langchain
Paragraphs:
The OpenAIEmbeddings class uses the OpenAI API to generate embeddings for a given text. By default it strips new line characters from the text, as recommended by OpenAI, but you can disable this by passing stripNewLines: false to the constructor.
import { OpenAIEmbeddings } from "langchain/embeddings/openai";const embeddings = new OpenAIEmbeddings({ openAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY batchSize: 512, // Default value if omitted is 512. Max is 2048});
If you're part of an organization, you can set process.env.OPENAI_ORGANIZATION to your OpenAI organization id, or pass it in as organization when initializing the model.
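A minimal sketch of how those options combine with embedDocuments; the option values below are illustrative only:

import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const embeddings = new OpenAIEmbeddings({
  batchSize: 256, // send up to 256 texts per request instead of the default 512
  stripNewLines: false, // keep newline characters in the input text
});

// embedDocuments splits the inputs into batches of `batchSize` under the hood.
const vectors = await embeddings.embedDocuments([
  "First document\nwith its newline preserved",
  "Second document",
]);
console.log(vectors.length); // 2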
TensorFlow
Page Title: TensorFlow | 🦜️🔗 Langchain
Paragraphs:
This Embeddings integration runs the embeddings entirely in your browser or Node.js environment, using TensorFlow.js. This means that your data isn't sent to any third party, and you don't need to sign up for any API keys. However, it does require more memory and processing power than the other integrations.
npmYarnpnpm
npm install @tensorflow/tfjs-core@3.6.0 @tensorflow/tfjs-converter@3.6.0 @tensorflow-models/universal-sentence-encoder@1.3.3 @tensorflow/tfjs-backend-cpu
yarn add @tensorflow/tfjs-core@3.6.0 @tensorflow/tfjs-converter@3.6.0 @tensorflow-models/universal-sentence-encoder@1.3.3 @tensorflow/tfjs-backend-cpu
pnpm add @tensorflow/tfjs-core@3.6.0 @tensorflow/tfjs-converter@3.6.0 @tensorflow-models/universal-sentence-encoder@1.3.3 @tensorflow/tfjs-backend-cpu
import "@tensorflow/tfjs-backend-cpu";import { TensorFlowEmbeddings } from "langchain/embeddings/tensorflow";const embeddings = new TensorFlowEmbeddings();
This example uses the CPU backend, which works in any JS environment. However, you can use any of the backends supported by TensorFlow.js, including GPU and WebAssembly, which will be a lot faster. For Node.js you can use the @tensorflow/tfjs-node package, and for the browser you can use the @tensorflow/tfjs-backend-webgl package. See the TensorFlow.js documentation for more information.
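A minimal sketch of actually generating embeddings with the CPU backend shown above; nothing here leaves the local process:

import "@tensorflow/tfjs-backend-cpu";
import { TensorFlowEmbeddings } from "langchain/embeddings/tensorflow";

export const run = async () => {
  const embeddings = new TensorFlowEmbeddings();
  // Both calls run the universal-sentence-encoder model locally.
  const queryVector = await embeddings.embedQuery("Hello world");
  const docVectors = await embeddings.embedDocuments(["Hello world", "Bye bye"]);
  console.log(queryVector.length, docVectors.length);
};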
Page Title: Vector stores | 🦜️🔗 Langchain
Paragraphs:
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding
vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are
'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search
for you.
This walkthrough showcases basic functionality related to VectorStores. A key part of working with vector stores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the text embedding model interfaces before diving into this.
This walkthrough uses a basic, unoptimized implementation called MemoryVectorStore that stores embeddings in-memory and does an exact, linear search for the most similar embeddings. |
Usage
Create a new index from texts
import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings } from "langchain/embeddings/openai";const vectorStore = await MemoryVectorStore.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings());const resultOne = await vectorStore.similaritySearch("hello world", 1);console.log(resultOne);/* [ Document { pageContent: "Hello world", metadata: { id: 2 } } ]*/
API Reference:MemoryVectorStore from langchain/vectorstores/memoryOpenAIEmbeddings from langchain/embeddings/openai
Create a new index from a loader
import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { TextLoader } from "langchain/document_loaders/fs/text";// Create docs with a loaderconst loader = new TextLoader("src/document_loaders/example_data/example.txt");const docs = await loader.load();// Load the docs into the vector storeconst vectorStore = await MemoryVectorStore.fromDocuments( docs, new OpenAIEmbeddings());// Search for the most similar documentconst resultOne = await vectorStore.similaritySearch("hello world", 1);console.log(resultOne);/* [ Document { pageContent: "Hello world", metadata: { id: 2 } } ]*/
API Reference:MemoryVectorStore from langchain/vectorstores/memoryOpenAIEmbeddings from langchain/embeddings/openaiTextLoader from langchain/document_loaders/fs/text
Here is the current base interface all vector stores share:
interface VectorStore { /** * Add more documents to an existing VectorStore. * Some providers support additional parameters, e.g. to associate custom ids * with added documents or to change the batch size of bulk inserts. * Returns an array of ids for the documents or nothing. */ addDocuments( documents: Document[], options? : Record<string, any> ): Promise<string[] | void>; /** * Search for the most similar documents to a query */ similaritySearch( query: string, k? : number, filter? : object | undefined ): Promise<Document[]>; /** * Search for the most similar documents to a query, * and return their similarity score */ similaritySearchWithScore( query: string, k = 4, filter: object | undefined = undefined ): Promise<[object, number][]>; /** * Turn a VectorStore into a Retriever */ asRetriever(k? : number): BaseRetriever; /** * Delete embedded documents from the vector store matching the passed in parameter. * Not supported by every provider. */ delete(params? : Record<string, any>): Promise<void>; /** * Advanced: Add more documents to an existing VectorStore, * when you already have their embeddings */ addVectors( vectors: number[][], documents: Document[], options?
: Record<string, any> ): Promise<string[] | void>; /** * Advanced: Search for the most similar documents to a query, * when you already have the embedding of the query */ similaritySearchVectorWithScore( query: number[], k: number, filter? : object ): Promise<[Document, number][]>;} |
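To make the interface above concrete, here is a minimal sketch of two of its methods beyond plain similaritySearch, reusing the MemoryVectorStore and OpenAIEmbeddings from the earlier examples:

import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const vectorStore = await MemoryVectorStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// similaritySearchWithScore returns each matching document together with its similarity score.
const resultsWithScore = await vectorStore.similaritySearchWithScore("hello world", 2);
console.log(resultsWithScore);

// asRetriever wraps the store in the retriever interface used elsewhere in LangChain.
const retriever = vectorStore.asRetriever(1);
const docs = await retriever.getRelevantDocuments("hello world");
console.log(docs);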
You can create a vector store from a list of Documents, or from a list of texts and their corresponding metadata. You can also create a vector store from an existing index; the signature of this method depends on the vector store you're using, so check the documentation of the vector store you're interested in.
abstract class BaseVectorStore implements VectorStore { static fromTexts( texts: string[], metadatas: object[] | object, embeddings: Embeddings, dbConfig: Record<string, any> ): Promise<VectorStore>; static fromDocuments( docs: Document[], embeddings: Embeddings, dbConfig: Record<string, any> ): Promise<VectorStore>;}
Which one to pick?
Here's a quick guide to help you pick the right vector store for your use case:
If you're after something that can just run inside your Node.js application, in-memory, without any other servers to stand up, then go for HNSWLib, Faiss, or LanceDB.
If you're looking for something that can run in-memory in browser-like environments, then go for MemoryVectorStore.
If you come from Python and you were looking for something similar to FAISS, try HNSWLib or Faiss.
If you're looking for an open-source full-featured vector database that you can run locally in a Docker container, then go for Chroma.
If you're looking for an open-source vector database that offers low-latency, local embedding of documents and supports apps on the edge, then go for Zep.
If you're looking for an open-source production-ready vector database that you can run locally (in a Docker container) or hosted in the cloud, then go for Weaviate.
If you're using Supabase already, then look at the Supabase vector store to use the same Postgres database for your embeddings too.
If you're looking for a production-ready vector store you don't have to worry about hosting yourself, then go for Pinecone.
If you are already utilizing SingleStore, or if you find yourself in need of a distributed, high-performance database, you might want to consider the SingleStore vector store.
If you are looking for an online MPP (Massively Parallel Processing) data warehousing service, you might want to consider the AnalyticDB vector store.
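For example, if you pick HNSWLib from the list above (Node.js only; it relies on the hnswlib-node package being installed alongside langchain, which is an assumption about your setup), the call pattern mirrors the MemoryVectorStore example earlier. A sketch:

import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// Same fromTexts signature as MemoryVectorStore, but the index is backed by HNSWLib.
const vectorStore = await HNSWLib.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const result = await vectorStore.similaritySearch("hello world", 1);
console.log(result);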
Page Title: Vector Stores: Integrations | 🦜️🔗 Langchain
Paragraphs:
Vector Stores: Integrations
📄️ Memory: MemoryVectorStore is an in-memory, ephemeral vectorstore that stores embeddings in-memory and does an exact, linear search for the most similar embeddings. The default similarity metric is cosine similarity, but it can be changed to any of the similarity metrics supported by ml-distance.
📄️ AnalyticDB: AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.
📄️ Chroma: Chroma is an AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0.
📄️ Elasticsearch: Only available on Node.js.
📄️ Faiss: Only available on Node.js.
📄️ HNSWLib: Only available on Node.js.
📄️ LanceDB: LanceDB is an embedded vector database for AI applications. It is open source and distributed with an Apache-2.0 license.
📄️ Milvus: Milvus is a vector database built for embeddings similarity search and AI applications.
📄️ MongoDB Atlas: Only available on Node.js.
📄️ MyScale: Only available on Node.js.
📄️ OpenSearch: Only available on Node.js.
📄️ Pinecone: Only available on Node.js.
📄️ Prisma: For augmenting existing models in a PostgreSQL database with vector search, LangChain supports using Prisma together with PostgreSQL and the pgvector Postgres extension.
📄️ Qdrant: Qdrant is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload.
📄️ Redis: Redis is a fast, open source, in-memory data store.
📄️ SingleStore: SingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premise. It provides vector storage, as well as vector functions like dotproduct and euclideandistance, thereby supporting AI applications that require text similarity matching.
📄️ Supabase: LangChain supports using a Supabase Postgres database as a vector store, using the pgvector Postgres extension. Refer to the Supabase blog post for more information.
📄️ Tigris: Tigris makes it easy to build AI applications with vector embeddings.
📄️ TypeORM: To enable vector search in a generic PostgreSQL database, LangChainJS supports using TypeORM with the pgvector Postgres extension.
📄️ Typesense: Vector store that utilizes the Typesense search engine.
📄️ USearch: Only available on Node.js.
📄️ Vectara: Vectara is a platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy.
📄️ Weaviate: Weaviate is an open source vector database that stores both objects and vectors, allowing for combining vector search with structured filtering. LangChain connects to Weaviate via the weaviate-ts-client package, the official TypeScript client for Weaviate.
📄️ Xata: Xata is a serverless data platform, based on PostgreSQL. It provides a type-safe TypeScript/JavaScript SDK for interacting with your database, and a UI for managing your data.
📄️ Zep: Zep is an open source long-term memory store for LLM applications. Zep makes it easy to add relevant documents,