| id | text |
|---|---|
4e9727215e95-2200 | Time-Weighted Retriever. A Time-Weighted Retriever is a retriever that takes into account recency in addition to similarity. The scoring algorithm is: let score = (1.0 - this.decayRate) ** hoursPassed + vectorRelevance; Notably, hoursPassed above refers to the time since the object in the retriever was last accessed, not s... |
4e9727215e95-2201 | It is important to note that due to required metadata, all documents must be added to the backing vector store using the addDocuments method on the retriever, not the vector store itself.import { TimeWeightedVectorStoreRetriever } from "langchain/retrievers/time_weighted";import { MemoryVectorStore } from "langchain/ve... |
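To make the addDocuments requirement concrete, here is a hypothetical sketch (not the real TimeWeightedVectorStoreRetriever internals) of why documents must go through the retriever rather than the bare vector store: the retriever stamps the last-accessed metadata that the time-weighted scorer later reads. The class and field names below are illustrative assumptions.

```typescript
// Illustrative stand-in for a document with retriever-managed metadata.
interface Doc {
  pageContent: string;
  metadata: { lastAccessedAt?: number };
}

class TimeWeightedStoreSketch {
  docs: Doc[] = [];

  addDocuments(docs: Doc[]): void {
    const now = Date.now();
    for (const d of docs) {
      // A bare vector store would skip this stamp, breaking the recency score.
      d.metadata.lastAccessedAt = now;
      this.docs.push(d);
    }
  }
}

const store = new TimeWeightedStoreSketch();
store.addDocuments([{ pageContent: "hello", metadata: {} }]);
console.log(typeof store.docs[0].metadata.lastAccessedAt); // "number"
```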
4e9727215e95-2202 | Notably, hoursPassed above refers to the time since the object in the retriever was last accessed, not since it was created. This means that frequently accessed objects remain "fresh" and score higher.
Page Title: Vector Store | 🦜️🔗 Langchain |
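The scoring rule quoted above can be written as a pure function. This is a standalone illustration of the formula, not the library's own code:

```typescript
// score = (1 - decayRate) ** hoursPassed + vectorRelevance
function timeWeightedScore(
  decayRate: number,      // e.g. 0.01; higher means faster "forgetting"
  hoursPassed: number,    // hours since the document was *last accessed*
  vectorRelevance: number // similarity score from the vector store
): number {
  return (1.0 - decayRate) ** hoursPassed + vectorRelevance;
}

// A recently accessed document outscores an equally relevant stale one.
const fresh = timeWeightedScore(0.01, 1, 0.5);
const stale = timeWeightedScore(0.01, 240, 0.5);
console.log(fresh > stale); // true
```

With decayRate set to 0 the decay term stays at 1 forever, so ranking reduces to pure similarity plus a constant.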
4e9727215e95-2205 | Vespa.ai is a platform for highly efficient structured text and vector search. Please refer to Vespa.ai for more information. The following sets up a retriever that fetches results from Vespa's documentation search: import { VespaRetriever } from "langchain/retrievers/vespa";export const run = async () => { const url =... |
4e9727215e95-2210 | passed from LangChain.
Please refer to the pyvespa documentation
for more information.
The URL is the endpoint of the Vespa application.
You can connect to any Vespa endpoint, either a remote service or a local instance using Docker.
However, most Vespa Cloud instances are protected with mTLS.
If this is your cas... |
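As a rough sketch, the request body such a retriever sends to the Vespa endpoint might look like the following. The YQL's userQuery() placeholder is bound to the natural-language query passed from LangChain; the field names follow Vespa's query API, but the exact VespaRetriever internals may differ:

```typescript
// Sketch of a Vespa query body; "documentation" is the ranking profile
// mentioned above, and the default hits count is an illustrative assumption.
function buildVespaQueryBody(userQuery: string, hits = 5) {
  return {
    yql: "select * from sources * where userQuery()",
    query: userQuery, // substituted for userQuery() at query time
    hits,
    ranking: "documentation",
  };
}

const body = buildVespaQueryBody("how does ranking work?");
console.log(body.ranking); // "documentation"
```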
4e9727215e95-2211 | @getzep/zep-jsyarn add @getzep/zep-jspnpm add @getzep/zep-jsUsageimport { ZepRetriever } from "langchain/retrievers/zep";import { ZepMemory } from "langchain/memory/zep";import { Memory as MemoryModel, Message } from "@getzep/zep-js";import { randomUUID } from "crypto";function sleep(ms: number) { // eslint-disable-n... |
4e9727215e95-2212 | }, ]; const zepClient = await memory.zepClientPromise; if (!zepClient) { throw new Error("ZepClient is not initialized"); } // Add chat messages to memory for (const chatMessage of chatMessages) { let m: MemoryModel; if (chatMessage.role === "AI") { m = new MemoryModel({ messages: [new Messag... |
4e9727215e95-2216 | Zep Retriever. This example shows how to use the Zep Retriever in a RetrievalQAChain to retrieve documents from the Zep memory store. Setup: npm i @getzep/zep-js (or yarn add / pnpm add @getzep/zep-js). Usage: import { ZepRetriever } from "langchain/retriev... |
4e9727215e95-2217 | role: "AI", message: "We have many red cars. Anything more specific?" }, { role: "User", message: "I'm looking for a red car with a sunroof." |
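The truncated snippet above maps role-tagged chat messages into Zep memory objects. A hedged reconstruction of the same pattern, using local stand-in classes instead of the real @getzep/zep-js Memory and Message types, and completing the sleep helper in the obvious way:

```typescript
// Local stand-ins for @getzep/zep-js's Memory and Message (illustrative only).
class Message {
  constructor(public fields: { role: string; content: string }) {}
}
class MemoryModel {
  constructor(public fields: { messages: Message[] }) {}
}

// The sleep helper from the snippet, completed.
function sleep(ms: number) {
  // eslint-disable-next-line no-promise-executor-return
  return new Promise((resolve) => setTimeout(resolve, ms));
}

const chatMessages = [
  { role: "AI", message: "We have many red cars. Anything more specific?" },
  { role: "User", message: "I'm looking for a red car with a sunroof." },
];

// One memory object per chat message, tagged with its role.
const memories = chatMessages.map(
  (c) =>
    new MemoryModel({
      messages: [new Message({ role: c.role, content: c.message })],
    })
);
console.log(memories.length); // 2
```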
4e9727215e95-2223 | contain an image. embedMedia() and embedMediaQuery() take an object that contains a text string field, an image Buffer field, or both, and return a similarly constructed object containing the respective vectors. Note: The Google Vertex AI embeddings models have different vector sizes
than OpenAI's standard model, so so... |
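To make that contract concrete, here is a mock with the input/output shape just described. The placeholder vectors and the interface names are assumptions; real vectors come from the Vertex AI model:

```typescript
// Mock mirroring the described embedMediaQuery() contract: a text string
// field, an image Buffer field, or both go in; a similarly shaped object of
// vectors comes out. Placeholder values only — not a real embedding call.
interface MediaInput { text?: string; image?: Buffer }
interface MediaVectors { text?: number[]; image?: number[] }

function mockEmbedMediaQuery(input: MediaInput): MediaVectors {
  const out: MediaVectors = {};
  if (input.text !== undefined) out.text = [0.1, 0.2, 0.3];
  if (input.image !== undefined) out.image = [0.4, 0.5, 0.6];
  return out;
}

const both = mockEmbedMediaQuery({ text: "parrot", image: Buffer.from([0]) });
console.log(Object.keys(both)); // [ 'text', 'image' ]
```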
4e9727215e95-2224 | to the project and set the GOOGLE_APPLICATION_CREDENTIALS environment
variable to the path of this file.npmYarnpnpmnpm install google-auth-libraryyarn add google-auth-librarypnpm add google-auth-libraryUsageHere's a basic example that shows how to embed image queries:import fs from "fs";import { GoogleVertexAIMultimo... |
4e9727215e95-2225 | ");console.log({ textEmbedding });API Reference:GoogleVertexAIMultimodalEmbeddings from langchain/experimental/multimodal_embeddings/googlevertexaiAdvanced usageHere's a more advanced example that shows how to integrate these new embeddings with a LangChain vector store.import fs from "fs";import { GoogleVertexAIMulti... |
4e9727215e95-2226 | vector store directlyawait vectorStore.addVectors([vectors], [document]);// Use a similar image to the one just addedconst img2 = fs.readFileSync("parrot-icon.png");const vectors2: number[] = await embeddings.embedImageQuery(img2);// Use the lower level, direct APIconst resultTwo = await vectorStore.similaritySearchVec... |
4e9727215e95-2235 | containing the respective vectors. Note: The Google Vertex AI embeddings models have different vector sizes than OpenAI's standard model, so some vector stores may not handle them correctly. The textembedding-gecko model in GoogleVertexAIEmbeddings provides 768 dimensions. The multimodalembedding@001 model in GoogleVerte... |
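Since a vector store index is typically fixed to one dimensionality, a simple guard can catch a model/store mismatch early. This is a sketch; the 768 figure for textembedding-gecko comes from the text above:

```typescript
// Guard against inserting vectors whose size doesn't match the index.
function assertDimension(vector: number[], expected: number): void {
  if (vector.length !== expected) {
    throw new Error(
      `Expected a ${expected}-dimensional vector, got ${vector.length}`
    );
  }
}

assertDimension(new Array(768).fill(0), 768); // ok for textembedding-gecko
```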
4e9727215e95-2241 | Here's a basic example that shows how to embed image queries:
import fs from "fs";import { GoogleVertexAIMultimodalEmbeddings } from "langchain/experimental/multimodal_embeddings/googlevertexai";const model = new GoogleVertexAIMultimodalEmbeddings();// Load the image into a buffer to get the embedding of itconst img =... |
4e9727215e95-2243 | Paragraphs:
Skip to main content🦜️🔗 LangChainDocsUse casesAPILangSmithPython DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/OData connectionDocument loadersDocument transformersText embedding modelsVector storesRetrieversExperimentalCaching embeddingsChainsMemoryAgentsCallbacksModulesGuidesEco... |
4e9727215e95-2244 | Do not use this cache if you need to actually store the embeddings for an extended period of time:import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { CacheBackedEmbeddings } from "langchain/embeddings/cache_backed";import { InMemoryStore } from "langchain/storage/in_memory";import { RecursiveCharact... |
4e9727215e95-2245 | FaissStore.fromDocuments( documents, cacheBackedEmbeddings);console.log(`Cached creation time: ${Date.now() - time}ms`);/* Cached creation time: 8ms*/// Many keys logged with hashed valuesconst keys = [];for await (const key of inMemoryStore.yieldKeys()) { keys.push(key);}console.log(keys.slice(0, 5));/* [ 'tex... |
4e9727215e95-2246 | ioredisimport { Redis } from "ioredis";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { CacheBackedEmbeddings } from "langchain/embeddings/cache_backed";import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { FaissStore } from "langchain/vectorstores/faiss";import { |
4e9727215e95-2247 | TextLoader } from "langchain/document_loaders/fs/text";import { RedisByteStore } from "langchain/storage/ioredis";const underlyingEmbeddings = new OpenAIEmbeddings();// Requires a Redis instance running at localhost:6379.// See https://github.com/redis/ioredis for full config options.const redisClient = new Redi... |
4e9727215e95-2248 | ada-002fa9ac80e1bf226b7b4dfc03ea743289a65a727b2', 'text-embedding-ada-0027dbf9c4b36e12fe1768300f145f4640342daaf22', 'text-embedding-ada-002ea9b59e760e64bec6ee9097b5a06b0d91cb3ab64', 'text-embedding-ada-002fec5d021611e1527297c5e8f485876ea82dcb111', 'text-embedding-ada-002c00f818c345da13fed9f2697b4b689338143c... |
4e9727215e95-2267 | Caching embeddings. Embeddings can be stored or temporarily cached to avoid needing to recompute them. Caching embeddings can be done using a CacheBackedEmbeddings instance. The cache-backed embedder is a wrapper around an embedder that caches embeddings in a key-value store. The text is hashed and the hash is used as the k... |
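The hash-keyed caching idea can be sketched in a few lines. This is a minimal illustration of the pattern, not the CacheBackedEmbeddings implementation; the Map store, sha1 choice, and fake embedder are all assumptions:

```typescript
import { createHash } from "node:crypto";

type Embedder = (texts: string[]) => number[][];

// Wrap an embedder so each text is hashed, the hash is used as the cache
// key, and the underlying embedder is only called on cache misses.
function makeCachedEmbedder(underlying: Embedder) {
  const cache = new Map<string, number[]>();
  let underlyingCalls = 0;
  const embed: Embedder = (texts) =>
    texts.map((t) => {
      const key = createHash("sha1").update(t).digest("hex");
      let v = cache.get(key);
      if (v === undefined) {
        underlyingCalls += 1;
        v = underlying([t])[0];
        cache.set(key, v);
      }
      return v;
    });
  return { embed, stats: () => underlyingCalls };
}

// Fake embedder: "vector" is just the text length, for demonstration.
const fake: Embedder = (texts) => texts.map((t) => [t.length]);
const cached = makeCachedEmbedder(fake);
cached.embed(["hello", "world"]);
cached.embed(["hello", "world"]); // second call served entirely from cache
console.log(cached.stats()); // 2
```

The second round triggers no underlying calls, which is exactly the "Cached creation time: 8ms" effect shown in the FaissStore example above.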
4e9727215e95-2274 | time = Date.now();const vectorstore = await FaissStore.fromDocuments( documents, cacheBackedEmbeddings);console.log(`Initial creation time: ${Date.now() - time}ms`);/* Initial creation time: 1905ms*/// The second time is much faster since the embeddings for the input docs have already been added to the cachetime = D... |
4e9727215e95-2275 | API Reference:OpenAIEmbeddings from langchain/embeddings/openaiCacheBackedEmbeddings from langchain/embeddings/cache_backedInMemoryStore from langchain/storage/in_memoryRecursiveCharacterTextSplitter from langchain/text_splitterFaissStore from langchain/vectorstores/faissTextLoader from langchain/document_loaders/fs/te... |
4e9727215e95-2276 | yarn add ioredis
pnpm add ioredis
import { Redis } from "ioredis";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { CacheBackedEmbeddings } from "langchain/embeddings/cache_backed";import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { FaissStore } from "langchain/vect... |
4e9727215e95-2277 | await splitter.splitDocuments(rawDocuments);let time = Date.now();const vectorstore = await FaissStore.fromDocuments( documents, cacheBackedEmbeddings);console.log(`Initial creation time: ${Date.now() - time}ms`);/* Initial creation time: 1808ms*/// The second time is much faster since the embeddings for the input d... |
4e9727215e95-2278 | API Reference:OpenAIEmbeddings from langchain/embeddings/openaiCacheBackedEmbeddings from langchain/embeddings/cache_backedRecursiveCharacterTextSplitter from langchain/text_splitterFaissStore from langchain/vectorstores/faissTextLoader from langchain/document_loaders/fs/textRedisByteStore from langchain/storage/ioredi... |
4e9727215e95-2279 | but more complex applications require chaining LLMs - either with each other or with other components.LangChain provides the Chain interface for such "chained" applications. We define a Chain very generically as a sequence of calls to components, which can include other chains. The base interface is simple:import { Cal... |
4e9727215e95-2280 | It drastically simplifies and makes more modular the implementation of complex applications, which in turn makes it much easier to debug, maintain, and improve your applications.For more specifics check out:How-to for walkthroughs of different chain featuresFoundational to get acquainted with core building block chains... |
4e9727215e95-2281 | ");We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM.const chain = new LLMChain({ llm: model, prompt });// Since this LLMChain is a single-input, single-output chain, we can also `run` it.// This convenience method takes in a string and returns the v... |
4e9727215e95-2282 | This will return the complete chain response.const prompt = PromptTemplate.fromTemplate( "What is a good name for {company} that makes {product}? ");const chain = new LLMChain({ llm: model, prompt });const res = await chain.call({ company: "a startup", product: "colorful socks"});console.log({ res });// { res: { tex... |
4e9727215e95-2283 | } }API Reference:ChatPromptTemplate from langchain/promptsHumanMessagePromptTemplate from langchain/promptsSystemMessagePromptTemplate from langchain/promptsLLMChain from langchain/chainsChatOpenAI from langchain/chat_models/openaiPreviousCaching embeddingsNextHow toWhy do we need chains?Get startedCommunityDiscordTwit... |
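The "sequence of calls to components" definition above can be illustrated with a minimal composition sketch. These are not LangChain's actual classes; the step type, the fake LLM, and the prompt wording are stand-ins for PromptTemplate and LLMChain:

```typescript
// A chain step maps a record of values to a new record of values.
type Step = (input: Record<string, string>) => Record<string, string>;

// Compose steps left to right, like chaining components.
function sequence(...steps: Step[]): Step {
  return (input) => steps.reduce((acc, step) => step(acc), input);
}

// A prompt-formatting step followed by a fake "LLM" step.
const formatPrompt: Step = (v) => ({
  prompt: `What is a good name for ${v.company} that makes ${v.product}?`,
});
const fakeLLM: Step = (v) => ({ text: `[llm output for: ${v.prompt}]` });

const chain = sequence(formatPrompt, fakeLLM);
const res = chain({ company: "a startup", product: "colorful socks" });
console.log(res.text.includes("colorful socks")); // true
```

This mirrors the LLMChain example above: user input is formatted into the prompt, and the formatted prompt is handed to the model.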