Redis | 🦜️🔗 Langchain
Redis is a fast, open-source, in-memory data store. As part of the Redis Stack, RediSearch is the module that enables vector similarity semantic search, as well as many other types of search.

Compatibility: only available on Node.js. LangChain.js accepts node-redis as the client for the Redis vector store.

Setup

1. Run Redis with Docker on your computer, following the Redis docs.
2. Install the node-redis JS client:

npm install -S redis
# or: yarn add redis
# or: pnpm add redis

Index docs

```typescript
import { createClient } from "redis";
import { Document } from "langchain/document";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RedisVectorStore } from "langchain/vectorstores/redis";

const client = createClient({
  url: process.env.REDIS_URL ?? "redis://localhost:6379",
});
await client.connect();

const docs = [
  new Document({
    metadata: { foo: "bar" },
    pageContent: "redis is fast",
  }),
  new Document({
    metadata: { foo: "bar" },
    pageContent: "the quick brown fox jumped over the lazy dog",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "lorem ipsum dolor sit amet",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "consectetur adipiscing elit",
  }),
];

const vectorStore = await RedisVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings(),
  {
    redisClient: client,
    indexName: "docs",
  }
);

await client.disconnect();
```

API Reference: Document from langchain/document, OpenAIEmbeddings from langchain/embeddings/openai, RedisVectorStore from langchain/vectorstores/redis

Query docs

```typescript
import { createClient } from "redis";
import { OpenAI } from "langchain/llms/openai";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RetrievalQAChain } from "langchain/chains";
import { RedisVectorStore } from "langchain/vectorstores/redis";

const client = createClient({
  url: process.env.REDIS_URL ?? "redis://localhost:6379",
});
await client.connect();

const vectorStore = new RedisVectorStore(new OpenAIEmbeddings(), {
  redisClient: client,
  indexName: "docs",
});

/* Simple standalone search in the vector DB */
const simpleRes = await vectorStore.similaritySearch("redis", 1);
console.log(simpleRes);
/*
[ Document { pageContent: "redis is fast", metadata: { foo: "bar" } } ]
*/

/* Search in the vector DB using filters */
const filterRes = await vectorStore.similaritySearch("redis", 3, ["qux"]);
console.log(filterRes);
/*
[
  Document { pageContent: "consectetur adipiscing elit", metadata: { baz: "qux" } },
  Document { pageContent: "lorem ipsum dolor sit amet", metadata: { baz: "qux" } }
]
*/

/* Usage as part of a chain */
const model = new OpenAI();
const chain = RetrievalQAChain.fromLLM(model, vectorStore.asRetriever(1), {
  returnSourceDocuments: true,
});
const chainRes = await chain.call({ query: "What did the fox do?" });
console.log(chainRes);
/*
{
  text: " The fox jumped over the lazy dog.",
  sourceDocuments: [
    Document {
      pageContent: "the quick brown fox jumped over the lazy dog",
      metadata: [Object]
    }
  ]
}
*/

await client.disconnect();
```

API Reference: OpenAI from langchain/llms/openai, OpenAIEmbeddings from langchain/embeddings/openai, RetrievalQAChain from langchain/chains, RedisVectorStore from langchain/vectorstores/redis
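Conceptually, a call like `similaritySearch("redis", 1)` embeds the query and ranks stored vectors by closeness. The in-memory sketch below illustrates that idea with cosine similarity over toy vectors; it is an illustration of the ranking step only, not the actual RedisVectorStore implementation (which delegates search to RediSearch):

```typescript
// Illustrative only: rank stored (vector, text) pairs by cosine similarity.
type Entry = { vector: number[]; pageContent: string };

function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the pageContent of the k entries most similar to the query vector.
function similaritySearchSketch(store: Entry[], query: number[], k: number): string[] {
  return store
    .map((e) => ({ text: e.pageContent, score: cosine(e.vector, query) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((r) => r.text);
}

const store: Entry[] = [
  { vector: [1, 0, 0], pageContent: "redis is fast" },
  { vector: [0, 1, 0], pageContent: "the quick brown fox" },
  { vector: [0.9, 0.1, 0], pageContent: "redis stores data in memory" },
];

console.log(similaritySearchSketch(store, [1, 0, 0], 2));
// → ["redis is fast", "redis stores data in memory"]
```

In the real store, the vectors come from the embeddings model (here, OpenAIEmbeddings) and the ranking happens inside Redis rather than in application code.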
SingleStore | 🦜️🔗 Langchain
SingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premises. It provides vector storage, as well as vector functions like dot_product and euclidean_distance, thereby supporting AI applications that require text similarity matching.

Compatibility: only available on Node.js. LangChain.js requires the mysql2 library to create a connection to a SingleStoreDB instance.

Setup

1. Establish a SingleStoreDB environment. You can choose between the cloud-based and on-premises editions.
2. Install the mysql2 JS client:

npm install -S mysql2
# or: yarn add mysql2
# or: pnpm add mysql2

Usage

SingleStoreVectorStore manages a connection pool. It is recommended to call `await store.end();` before terminating your application, to ensure all connections are closed and to prevent resource leaks.

Standard usage

Below is a straightforward example of importing the relevant module and performing a basic similarity search with SingleStoreVectorStore:

```typescript
import { SingleStoreVectorStore } from "langchain/vectorstores/singlestore";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

export const run = async () => {
  const vectorStore = await SingleStoreVectorStore.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings(),
    {
      connectionOptions: {
        host: process.env.SINGLESTORE_HOST,
        port: Number(process.env.SINGLESTORE_PORT),
        user: process.env.SINGLESTORE_USERNAME,
        password: process.env.SINGLESTORE_PASSWORD,
        database: process.env.SINGLESTORE_DATABASE,
      },
    }
  );

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);

  await vectorStore.end();
};
```

API Reference: SingleStoreVectorStore from langchain/vectorstores/singlestore, OpenAIEmbeddings from langchain/embeddings/openai

Metadata Filtering

If you need to filter results on specific metadata fields, pass a filter parameter to narrow the search to documents that match all specified fields in the filter object:

```typescript
import { SingleStoreVectorStore } from "langchain/vectorstores/singlestore";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

export const run = async () => {
  const vectorStore = await SingleStoreVectorStore.fromTexts(
    ["Good afternoon", "Bye bye", "Boa tarde!", "Até logo!"],
    [
      { id: 1, language: "English" },
      { id: 2, language: "English" },
      { id: 3, language: "Portugese" },
      { id: 4, language: "Portugese" },
    ],
    new OpenAIEmbeddings(),
    {
      connectionOptions: {
        host: process.env.SINGLESTORE_HOST,
        port: Number(process.env.SINGLESTORE_PORT),
        user: process.env.SINGLESTORE_USERNAME,
        password: process.env.SINGLESTORE_PASSWORD,
        database: process.env.SINGLESTORE_DATABASE,
      },
      distanceMetric: "EUCLIDEAN_DISTANCE",
    }
  );

  const resultOne = await vectorStore.similaritySearch("greetings", 1, {
    language: "Portugese",
  });
  console.log(resultOne);

  await vectorStore.end();
};
```

API Reference: SingleStoreVectorStore from langchain/vectorstores/singlestore, OpenAIEmbeddings from langchain/embeddings/openai
|
4e9727215e95-1621
|
Get startedIntroductionInstallationQuickstartModulesModel I/OData connectionDocument loadersDocument transformersText embedding modelsVector storesIntegrationsMemoryAnalyticDBChromaElasticsearchFaissHNSWLibLanceDBMilvusMongoDB AtlasMyScaleOpenSearchPineconePrismaQdrantRedisSingleStoreSupabaseTigrisTypeORMTypesenseUSearchVectaraWeaviateXataZepRetrieversExperimentalCaching embeddingsChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesData connectionVector storesIntegrationsSingleStoreOn this pageSingleStoreSingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premise. It provides vector storage, as well as vector functions like dot_product and euclidean_distance, thereby supporting AI applications that require text similarity matching.CompatibilityOnly available on Node.js.LangChain.js requires the mysql2 library to create a connection to a SingleStoreDB instance.SetupEstablish a SingleStoreDB environment. You have the flexibility to choose between Cloud-based or On-Premise editions.Install the mysql2 JS clientnpmYarnpnpmnpm install -S mysql2yarn add mysql2pnpm add mysql2UsageSingleStoreVectorStore manages a connection pool.
|
4e9727215e95-1622
|
It is recommended to call await store.end(); before terminating your application to assure all connections are appropriately closed and prevent any possible resource leaks.Standard usageBelow is a straightforward example showcasing how to import the relevant module and perform a base similarity search using the SingleStoreVectorStore:import { SingleStoreVectorStore } from "langchain/vectorstores/singlestore";import { OpenAIEmbeddings } from "langchain/embeddings/openai";export const run = async () => { const vectorStore = await SingleStoreVectorStore.fromTexts( ["Hello world", "Bye bye", "hello nice world"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings(), { connectionOptions: { host: process.env.SINGLESTORE_HOST, port: Number(process.env.SINGLESTORE_PORT), user: process.env.SINGLESTORE_USERNAME,
password: process.env.SINGLESTORE_PASSWORD, database: process.env.SINGLESTORE_DATABASE, }, } ); const resultOne = await vectorStore.similaritySearch("hello world", 1); console.log(resultOne); await vectorStore.end();};API Reference:SingleStoreVectorStore from langchain/vectorstores/singlestoreOpenAIEmbeddings from langchain/embeddings/openaiMetadata FilteringIf it is needed to filter results based on specific metadata fields, you can pass a filter parameter to narrow down your search to the documents that match all specified fields in the filter object:import { SingleStoreVectorStore } from "langchain/vectorstores/singlestore";import { OpenAIEmbeddings } from "langchain/embeddings/openai";export const run = async () => { const vectorStore = await SingleStoreVectorStore.fromTexts( ["Good afternoon", "Bye bye", "Boa tarde! ", "Até logo!
|
4e9727215e95-1623
|
"], [ { id: 1, language: "English" }, { id: 2, language: "English" }, { id: 3, language: "Portugese" }, { id: 4, language: "Portugese" }, ], new OpenAIEmbeddings(), { connectionOptions: { host: process.env.SINGLESTORE_HOST, port: Number(process.env.SINGLESTORE_PORT), user: process.env.SINGLESTORE_USERNAME, password: process.env.SINGLESTORE_PASSWORD, database: process.env.SINGLESTORE_DATABASE, }, distanceMetric: "EUCLIDEAN_DISTANCE", } ); const resultOne = await vectorStore.similaritySearch("greetings", 1, { language: "Portugese", }); console.log(resultOne); await vectorStore.end();};API Reference:SingleStoreVectorStore from langchain/vectorstores/singlestoreOpenAIEmbeddings from langchain/embeddings/openaiPreviousRedisNextSupabaseSetupUsageStandard usageMetadata Filtering
ModulesData connectionVector storesIntegrationsSingleStoreOn this pageSingleStoreSingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premise. It provides vector storage, as well as vector functions like dot_product and euclidean_distance, thereby supporting AI applications that require text similarity matching.CompatibilityOnly available on Node.js.LangChain.js requires the mysql2 library to create a connection to a SingleStoreDB instance.SetupEstablish a SingleStoreDB environment. You have the flexibility to choose between Cloud-based or On-Premise editions.Install the mysql2 JS clientnpmYarnpnpmnpm install -S mysql2yarn add mysql2pnpm add mysql2UsageSingleStoreVectorStore manages a connection pool.
|
4e9727215e95-1624
|
SingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premise. It provides vector storage, as well as vector functions like dot_product and euclidean_distance, thereby supporting AI applications that require text similarity matching.

Compatibility: only available on Node.js. LangChain.js requires the mysql2 library to create a connection to a SingleStoreDB instance.

Setup: establish a SingleStoreDB environment. You have the flexibility to choose between Cloud-based or On-Premise editions. Then install the mysql2 JS client:

npm install -S mysql2
yarn add mysql2
pnpm add mysql2

Usage: SingleStoreVectorStore manages a connection pool. It is recommended to call await store.end(); before terminating your application to ensure all connections are appropriately closed and to prevent possible resource leaks.

Standard usage. Below is a straightforward example showcasing how to import the relevant module and perform a base similarity search using the SingleStoreVectorStore:

import { SingleStoreVectorStore } from "langchain/vectorstores/singlestore";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

export const run = async () => {
  const vectorStore = await SingleStoreVectorStore.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings(),
    {
      connectionOptions: {
        host: process.env.SINGLESTORE_HOST,
        port: Number(process.env.SINGLESTORE_PORT),
        user: process.env.SINGLESTORE_USERNAME,
        password: process.env.SINGLESTORE_PASSWORD,
        database: process.env.SINGLESTORE_DATABASE,
      },
    }
  );
  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
  await vectorStore.end();
};

API Reference: SingleStoreVectorStore from langchain/vectorstores/singlestore; OpenAIEmbeddings from langchain/embeddings/openai.

Metadata filtering. If you need to filter results on specific metadata fields, you can pass a filter parameter to narrow your search to the documents that match all specified fields in the filter object:

import { SingleStoreVectorStore } from "langchain/vectorstores/singlestore";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

export const run = async () => {
  const vectorStore = await SingleStoreVectorStore.fromTexts(
    ["Good afternoon", "Bye bye", "Boa tarde!", "Até logo!"],
    [
      { id: 1, language: "English" },
      { id: 2, language: "English" },
      { id: 3, language: "Portuguese" },
      { id: 4, language: "Portuguese" },
    ],
    new OpenAIEmbeddings(),
    {
      connectionOptions: {
        host: process.env.SINGLESTORE_HOST,
        port: Number(process.env.SINGLESTORE_PORT),
        user: process.env.SINGLESTORE_USERNAME,
        password: process.env.SINGLESTORE_PASSWORD,
        database: process.env.SINGLESTORE_DATABASE,
      },
      distanceMetric: "EUCLIDEAN_DISTANCE",
    }
  );
  const resultOne = await vectorStore.similaritySearch("greetings", 1, {
    language: "Portuguese",
  });
  console.log(resultOne);
  await vectorStore.end();
};

API Reference: SingleStoreVectorStore from langchain/vectorstores/singlestore; OpenAIEmbeddings from langchain/embeddings/openai.
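The distanceMetric option used in the metadata-filtering example above controls how SingleStoreDB ranks stored vectors against the query embedding. As a rough, self-contained illustration (plain TypeScript for exposition only, not part of the LangChain or SingleStoreDB API), euclidean distance and dot product can order the same candidates differently when vectors are not normalized:

```typescript
// Minimal sketch of the two ranking functions SingleStoreDB exposes
// (euclidean_distance and dot_product): smaller is better for the
// former, larger is better for the latter.
function euclideanDistance(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

function dotProduct(a: number[], b: number[]): number {
  return a.reduce((sum, ai, i) => sum + ai * b[i], 0);
}

// A query vector and two candidate document vectors.
const query = [1, 0];
const docA = [0.9, 0.1]; // close in direction and magnitude
const docB = [3, 0]; // same direction, much larger magnitude

console.log(euclideanDistance(query, docA)); // ≈ 0.141 (nearest by distance)
console.log(euclideanDistance(query, docB)); // 2
console.log(dotProduct(query, docA)); // 0.9
console.log(dotProduct(query, docB)); // 3 (nearest by dot product)
```

Under euclidean distance docA ranks first, while under dot product docB does, which is why the choice of distanceMetric can change which documents a similarity search returns.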
Page Title: Supabase | 🦜️🔗 Langchain
Paragraphs:
Supabase: LangChain supports using a Supabase Postgres database as a vector store, via the pgvector Postgres extension.
Refer to the Supabase blog post for more information.

Setup: install the library with

npm install -S @supabase/supabase-js
yarn add @supabase/supabase-js
pnpm add @supabase/supabase-js

Create a table and search function in your database. Run this in your database:

-- Enable the pgvector extension to work with embedding vectors
create extension vector;

-- Create a table to store your documents
create table documents (
  id bigserial primary key,
  content text, -- corresponds to Document.pageContent
  metadata jsonb, -- corresponds to Document.metadata
  embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);

-- Create a function to search for documents
create function match_documents (
  query_embedding vector(1536),
  match_count int DEFAULT null,
  filter jsonb DEFAULT '{}'
) returns table (
  id bigint,
  content text,
  metadata jsonb,
  similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;

Usage

Standard usage. The example below shows how to perform a basic similarity search with Supabase:

import { SupabaseVectorStore } from "langchain/vectorstores/supabase";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);
const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);
  const vectorStore = await SupabaseVectorStore.fromTexts(
    ["Hello world", "Bye bye", "What's this?"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings(),
    {
      client,
      tableName: "documents",
      queryName: "match_documents",
    }
  );
  const resultOne = await vectorStore.similaritySearch("Hello world", 1);
  console.log(resultOne);
};

API Reference: SupabaseVectorStore from langchain/vectorstores/supabase; OpenAIEmbeddings from langchain/embeddings/openai.

Metadata filtering. Given the above match_documents Postgres function, you can also pass a filter parameter to return only documents with a specific metadata field value. This filter parameter is a JSON object, and the match_documents function will use the Postgres JSONB containment operator @> to filter documents by the metadata field values you specify.
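The @> containment check just described can be pictured in plain TypeScript. The sketch below is only an illustration of the matching semantics for flat (non-nested) metadata objects (a document matches when its metadata contains every key/value pair in the filter); it is not part of the LangChain API, and the real check runs inside Postgres:

```typescript
// Illustrative only: mimics Postgres `metadata @> filter` for flat
// (non-nested) JSON objects. Every key in the filter must be present
// in the metadata with an equal value.
type Json = Record<string, string | number | boolean>;

function containsFilter(metadata: Json, filter: Json): boolean {
  return Object.entries(filter).every(([key, value]) => metadata[key] === value);
}

const metadata = { user_id: 3, source: "docs" };

console.log(containsFilter(metadata, { user_id: 3 })); // true: subset matches
console.log(containsFilter(metadata, { user_id: 1 })); // false: value differs
console.log(containsFilter(metadata, {})); // true: empty filter matches everything
```

Note that an empty filter object (the function's default) matches every row, which is why similaritySearch without a filter returns results from the whole table.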
See details on the Postgres JSONB containment operator for more information.

Note: if you've previously been using SupabaseVectorStore, you may need to drop and recreate the match_documents function per the updated SQL above to use this functionality.

import { SupabaseVectorStore } from "langchain/vectorstores/supabase";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);
const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);
  const vectorStore = await SupabaseVectorStore.fromTexts(
    ["Hello world", "Hello world", "Hello world"],
    [{ user_id: 2 }, { user_id: 1 }, { user_id: 3 }],
    new OpenAIEmbeddings(),
    {
      client,
      tableName: "documents",
      queryName: "match_documents",
    }
  );
  const result = await vectorStore.similaritySearch("Hello world", 1, {
    user_id: 3,
  });
  console.log(result);
};

API Reference: SupabaseVectorStore from langchain/vectorstores/supabase; OpenAIEmbeddings from langchain/embeddings/openai.

Metadata query builder filtering. You can also use query builder-style filtering, similar to how the Supabase JavaScript library works, instead of passing an object. Note that since most of the filter properties are in the metadata column, you need to use arrow operators (-> for integer or ->> for text) as defined in the PostgREST API documentation, and specify the data type of the property (e.g. the column should look something like metadata->some_int_value::int).

import {
  SupabaseFilterRPCCall,
  SupabaseVectorStore,
} from "langchain/vectorstores/supabase";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);
const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);
  const embeddings = new OpenAIEmbeddings();
  const store = new SupabaseVectorStore(embeddings, {
    client,
    tableName: "documents",
  });
  const docs = [
    {
      pageContent:
        "This is a long text, but it actually means something because vector database does not understand Lorem Ipsum. So I would need to expand upon the notion of quantum fluff, a theoretical concept where subatomic particles coalesce to form transient multidimensional spaces. Yet, this abstraction holds no real-world application or comprehensible meaning, reflecting a cosmic puzzle.",
      metadata: { b: 1, c: 10, stuff: "right" },
    },
    {
      pageContent:
        "This is a long text, but it actually means something because vector database does not understand Lorem Ipsum. So I would need to proceed by discussing the echo of virtual tweets in the binary corridors of the digital universe. Each tweet, like a pixelated canary, hums in an unseen frequency, a fascinatingly perplexing phenomenon that, while conjuring vivid imagery, lacks any concrete implication or real-world relevance, portraying a paradox of multidimensional spaces in the age of cyber folklore.",
      metadata: { b: 2, c: 9, stuff: "right" },
    },
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } },
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } },
    { pageContent: "hi", metadata: { b: 2, c: 8, stuff: "right" } },
    { pageContent: "bye", metadata: { b: 3, c: 7, stuff: "right" } },
    { pageContent: "what's this", metadata: { b: 4, c: 6, stuff: "right" } },
  ];
  // Also supports an additional {ids: []} parameter for upsertion
  await store.addDocuments(docs);

  const funcFilterA: SupabaseFilterRPCCall = (rpc) =>
    rpc
      .filter("metadata->b::int", "lt", 3)
      .filter("metadata->c::int", "gt", 7)
      .textSearch("content", `'multidimensional' & 'spaces'`, {
        config: "english",
      });
  const resultA = await store.similaritySearch("quantum", 4, funcFilterA);

  const funcFilterB: SupabaseFilterRPCCall = (rpc) =>
    rpc
      .filter("metadata->b::int", "lt", 3)
      .filter("metadata->c::int", "gt", 7)
      .filter("metadata->>stuff", "eq", "right");
  const resultB = await store.similaritySearch("hello", 2, funcFilterB);

  console.log(resultA, resultB);
};

API Reference: SupabaseFilterRPCCall from langchain/vectorstores/supabase; SupabaseVectorStore from langchain/vectorstores/supabase; OpenAIEmbeddings from langchain/embeddings/openai.

Document deletion.

import { SupabaseVectorStore } from "langchain/vectorstores/supabase";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);
const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);
  const embeddings = new OpenAIEmbeddings();
  const store = new SupabaseVectorStore(embeddings, {
    client,
    tableName: "documents",
  });
  const docs = [
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } },
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } },
  ];
  // Also takes an additional {ids: []} parameter for upsertion
  const ids = await store.addDocuments(docs);
  const resultA = await store.similaritySearch("hello", 2);
  console.log(resultA);
  /*
    [
      Document { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } },
      Document { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } },
    ]
  */
  await store.delete({ ids });
  const resultB = await store.similaritySearch("hello", 2);
  console.log(resultB);
  /* [] */
};

API Reference: SupabaseVectorStore from langchain/vectorstores/supabase; OpenAIEmbeddings from langchain/embeddings/openai.

Copyright © 2023 LangChain, Inc.
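A note on the similarity score match_documents returns: pgvector's <=> operator computes cosine distance, so the function's 1 - (documents.embedding <=> query_embedding) expression yields cosine similarity (1 for identical directions, 0 for orthogonal vectors). A small, self-contained sketch of that arithmetic, assuming plain number[] vectors rather than anything Supabase-specific:

```typescript
// Cosine distance as computed by pgvector's `<=>` operator, and the
// cosine similarity that match_documents derives from it.
function cosineDistance(a: number[], b: number[]): number {
  const dot = a.reduce((sum, ai, i) => sum + ai * b[i], 0);
  const normA = Math.sqrt(a.reduce((sum, ai) => sum + ai * ai, 0));
  const normB = Math.sqrt(b.reduce((sum, bi) => sum + bi * bi, 0));
  return 1 - dot / (normA * normB);
}

function cosineSimilarity(a: number[], b: number[]): number {
  // Mirrors: 1 - (documents.embedding <=> query_embedding)
  return 1 - cosineDistance(a, b);
}

console.log(cosineSimilarity([1, 0], [2, 0])); // 1: same direction
console.log(cosineSimilarity([1, 0], [0, 5])); // 0: orthogonal
```

Because cosine distance ignores magnitude, two embeddings pointing in the same direction score 1 regardless of their lengths, which matches the ordering `order by documents.embedding <=> query_embedding` produces in the SQL above.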
Get startedIntroductionInstallationQuickstartModulesModel I/OData connectionDocument loadersDocument transformersText embedding modelsVector storesIntegrationsMemoryAnalyticDBChromaElasticsearchFaissHNSWLibLanceDBMilvusMongoDB AtlasMyScaleOpenSearchPineconePrismaQdrantRedisSingleStoreSupabaseTigrisTypeORMTypesenseUSearchVectaraWeaviateXataZepRetrieversExperimentalCaching embeddingsChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesData connectionVector storesIntegrationsSupabaseOn this pageSupabaseLangchain supports using Supabase Postgres database as a vector store, using the pgvector postgres extension.
|
4e9727215e95-1644
|
Refer to the Supabase blog post for more information.SetupInstall the library withnpmYarnpnpmnpm install -S @supabase/supabase-jsyarn add @supabase/supabase-jspnpm add @supabase/supabase-jsCreate a table and search function in your databaseRun this in your database:-- Enable the pgvector extension to work with embedding vectorscreate extension vector;-- Create a table to store your documentscreate table documents ( id bigserial primary key, content text, -- corresponds to Document.pageContent metadata jsonb, -- corresponds to Document.metadata embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed);-- Create a function to search for documentscreate function match_documents ( query_embedding vector(1536), match_count int DEFAULT null, filter jsonb DEFAULT '{}') returns table ( id bigint, content text, metadata jsonb, similarity float)language plpgsqlas $$#variable_conflict use_columnbegin return query select id, content, metadata,
|
4e9727215e95-1645
|
1 - (documents.embedding <=> query_embedding) as similarity from documents where metadata @> filter order by documents.embedding <=> query_embedding limit match_count;end;$$;UsageStandard UsageThe below example shows how to perform a basic similarity search with Supabase:import { SupabaseVectorStore } from "langchain/vectorstores/supabase";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { createClient } from "@supabase/supabase-js";// First, follow set-up instructions at// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabaseconst privateKey = process.env.SUPABASE_PRIVATE_KEY;if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);const url = process.env.SUPABASE_URL;if (!url) throw new Error(`Expected env var SUPABASE_URL`);export const run = async () => { const client = createClient(url, privateKey); const vectorStore = await SupabaseVectorStore.fromTexts( ["Hello world", "Bye bye", "What's this?
|
4e9727215e95-1646
|
"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings(), { client, tableName: "documents", queryName: "match_documents", } ); const resultOne = await vectorStore.similaritySearch("Hello world", 1); console.log(resultOne);};API Reference:SupabaseVectorStore from langchain/vectorstores/supabaseOpenAIEmbeddings from langchain/embeddings/openaiMetadata FilteringGiven the above match_documents Postgres function, you can also pass a filter parameter to only documents with a specific metadata field value. This filter parameter is a JSON object, and the match_documents function will use the Postgres JSONB Containment operator @> to filter documents by the metadata field values you specify.
See details on the Postgres JSONB Containment operator for more information.Note: If you've previously been using SupabaseVectorStore, you may need to drop and recreate the match_documents function per the updated SQL above to use this functionality.import { SupabaseVectorStore } from "langchain/vectorstores/supabase";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { createClient } from "@supabase/supabase-js";// First, follow set-up instructions at// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabaseconst privateKey = process.env.SUPABASE_PRIVATE_KEY;if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);const url = process.env.SUPABASE_URL;if (!url) throw new Error(`Expected env var
|
4e9727215e95-1647
|
SUPABASE_URL`);export const run = async () => { const client = createClient(url, privateKey); const vectorStore = await SupabaseVectorStore.fromTexts( ["Hello world", "Hello world", "Hello world"], [{ user_id: 2 }, { user_id: 1 }, { user_id: 3 }], new OpenAIEmbeddings(), { client, tableName: "documents", queryName: "match_documents", } ); const result = await vectorStore.similaritySearch("Hello world", 1, { user_id: 3, }); console.log(result);};API Reference:SupabaseVectorStore from langchain/vectorstores/supabaseOpenAIEmbeddings from langchain/embeddings/openaiMetadata Query Builder FilteringYou can also use query builder-style filtering similar to how the Supabase JavaScript library works instead of passing an object. Note that since most of the filter properties are in the metadata column, you need to use arrow operators (-> for integer or ->> for text) as defined in Postgrest API documentation and specify the data type of the property (e.g.
|
4e9727215e95-1648
|
the column should look something like metadata->some_int_value::int).import { SupabaseFilterRPCCall, SupabaseVectorStore,} from "langchain/vectorstores/supabase";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { createClient } from "@supabase/supabase-js";// First, follow set-up instructions at// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabaseconst privateKey = process.env.SUPABASE_PRIVATE_KEY;if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);const url = process.env.SUPABASE_URL;if (!url) throw new Error(`Expected env var SUPABASE_URL`);export const run = async () => { const client = createClient(url, privateKey); const embeddings = new OpenAIEmbeddings(); const store = new SupabaseVectorStore(embeddings, { client, tableName: "documents", }); const docs = [ { pageContent: "This is a long text, but it actually means something because vector database does not understand Lorem Ipsum. So I would need to expand upon the notion of quantum fluff, a theorectical concept where subatomic particles coalesce to form transient multidimensional spaces. Yet, this abstraction holds no real-world application or comprehensible meaning, reflecting a cosmic puzzle. ", metadata: { b: 1, c: 10, stuff: "right" }, }, { pageContent: "This is a long text, but it actually means something because vector database does not understand Lorem Ipsum.
|
4e9727215e95-1649
|
So I would need to proceed by discussing the echo of virtual tweets in the binary corridors of the digital universe. Each tweet, like a pixelated canary, hums in an unseen frequency, a fascinatingly perplexing phenomenon that, while conjuring vivid imagery, lacks any concrete implication or real-world relevance, portraying a paradox of multidimensional spaces in the age of cyber folklore.
|
4e9727215e95-1650
|
", metadata: { b: 2, c: 9, stuff: "right" }, }, { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } }, { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } }, { pageContent: "hi", metadata: { b: 2, c: 8, stuff: "right" } }, { pageContent: "bye", metadata: { b: 3, c: 7, stuff: "right" } }, { pageContent: "what's this", metadata: { b: 4, c: 6, stuff: "right" } }, ]; // Also supports an additional {ids: []} parameter for upsertion await store.addDocuments(docs); const funcFilterA: SupabaseFilterRPCCall = (rpc) => rpc .filter("metadata->b::int", "lt", 3) .filter("metadata->c::int", "gt", 7) .textSearch("content", `'multidimensional' & 'spaces'`, { config: "english", }); const resultA = await store.similaritySearch("quantum", 4, funcFilterA); const funcFilterB: SupabaseFilterRPCCall = (rpc) => rpc .filter("metadata->b::int", "lt", 3) .filter("metadata->c::int", "gt", 7)
|
4e9727215e95-1651
|
.filter("metadata->c::int", "gt", 7) .filter("metadata->>stuff", "eq", "right"); const resultB = await store.similaritySearch("hello", 2, funcFilterB); console.log(resultA, resultB);};API Reference:SupabaseFilterRPCCall from langchain/vectorstores/supabaseSupabaseVectorStore from langchain/vectorstores/supabaseOpenAIEmbeddings from langchain/embeddings/openaiDocument deletionimport { SupabaseVectorStore } from "langchain/vectorstores/supabase";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { createClient } from
|
4e9727215e95-1652
|
"@supabase/supabase-js";// First, follow set-up instructions at// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabaseconst privateKey = process.env.SUPABASE_PRIVATE_KEY;if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);const url = process.env.SUPABASE_URL;if (!url) throw new Error(`Expected env var SUPABASE_URL`);export const run = async () => { const client = createClient(url, privateKey); const embeddings = new OpenAIEmbeddings(); const store = new SupabaseVectorStore(embeddings, { client, tableName: "documents", }); const docs = [ { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } }, { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } }, ]; // Also takes an additional {ids: []} parameter for upsertion const ids = await store.addDocuments(docs); const resultA = await store.similaritySearch("hello", 2); console.log(resultA); /* [ Document { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } }, Document { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } }, ] */ await store.delete({ ids }); const resultB = await store.similaritySearch("hello", 2); console.log(resultB); /* [] */};API Reference:SupabaseVectorStore from
|
4e9727215e95-1653
|
/* [] */};API Reference:SupabaseVectorStore from langchain/vectorstores/supabaseOpenAIEmbeddings from langchain/embeddings/openaiPreviousSingleStoreNextTigrisSetupInstall the library withCreate a table and search function in your databaseUsageStandard
|
4e9727215e95-1654
|
UsageMetadata FilteringMetadata Query Builder FilteringDocument deletion
ModulesData connectionVector storesIntegrationsSupabaseOn this pageSupabaseLangchain supports using Supabase Postgres database as a vector store, using the pgvector postgres extension.
Refer to the Supabase blog post for more information.SetupInstall the library withnpmYarnpnpmnpm install -S @supabase/supabase-jsyarn add @supabase/supabase-jspnpm add @supabase/supabase-jsCreate a table and search function in your databaseRun this in your database:-- Enable the pgvector extension to work with embedding vectorscreate extension vector;-- Create a table to store your documentscreate table documents ( id bigserial primary key, content text, -- corresponds to Document.pageContent metadata jsonb, -- corresponds to Document.metadata embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed);-- Create a function to search for documentscreate function match_documents ( query_embedding vector(1536), match_count int DEFAULT null, filter jsonb DEFAULT '{}') returns table ( id bigint, content text, metadata jsonb, similarity float)language plpgsqlas $$#variable_conflict use_columnbegin return query select id, content, metadata,
|
4e9727215e95-1655
|
1 - (documents.embedding <=> query_embedding) as similarity from documents where metadata @> filter order by documents.embedding <=> query_embedding limit match_count;end;$$;UsageStandard UsageThe below example shows how to perform a basic similarity search with Supabase:import { SupabaseVectorStore } from "langchain/vectorstores/supabase";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { createClient } from "@supabase/supabase-js";// First, follow set-up instructions at// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabaseconst privateKey = process.env.SUPABASE_PRIVATE_KEY;if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);const url = process.env.SUPABASE_URL;if (!url) throw new Error(`Expected env var SUPABASE_URL`);export const run = async () => { const client = createClient(url, privateKey); const vectorStore = await SupabaseVectorStore.fromTexts( ["Hello world", "Bye bye", "What's this?
|
4e9727215e95-1656
|
"], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings(), { client, tableName: "documents", queryName: "match_documents", } ); const resultOne = await vectorStore.similaritySearch("Hello world", 1); console.log(resultOne);};API Reference:SupabaseVectorStore from langchain/vectorstores/supabaseOpenAIEmbeddings from langchain/embeddings/openaiMetadata FilteringGiven the above match_documents Postgres function, you can also pass a filter parameter to only documents with a specific metadata field value. This filter parameter is a JSON object, and the match_documents function will use the Postgres JSONB Containment operator @> to filter documents by the metadata field values you specify.
See details on the Postgres JSONB containment operator for more information.

Note: If you've previously been using SupabaseVectorStore, you may need to drop and recreate the match_documents function per the updated SQL above to use this functionality.

import { SupabaseVectorStore } from "langchain/vectorstores/supabase";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase

const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);

const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);

  const vectorStore = await SupabaseVectorStore.fromTexts(
    ["Hello world", "Hello world", "Hello world"],
    [{ user_id: 2 }, { user_id: 1 }, { user_id: 3 }],
    new OpenAIEmbeddings(),
    {
      client,
      tableName: "documents",
      queryName: "match_documents",
    }
  );

  const result = await vectorStore.similaritySearch("Hello world", 1, {
    user_id: 3,
  });

  console.log(result);
};

API Reference: SupabaseVectorStore from langchain/vectorstores/supabase, OpenAIEmbeddings from langchain/embeddings/openai

Metadata Query Builder Filtering

You can also use query-builder-style filtering, similar to how the Supabase JavaScript library works, instead of passing an object. Note that since most of the filter properties are in the metadata column, you need to use arrow operators (-> for integer or ->> for text) as defined in the PostgREST API documentation, and specify the data type of the property (e.g. the column should look something like metadata->some_int_value::int).

import {
  SupabaseFilterRPCCall,
  SupabaseVectorStore,
} from "langchain/vectorstores/supabase";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase

const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);

const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);

  const embeddings = new OpenAIEmbeddings();

  const store = new SupabaseVectorStore(embeddings, {
    client,
    tableName: "documents",
  });

  const docs = [
    {
      pageContent:
        "This is a long text, but it actually means something because vector database does not understand Lorem Ipsum. So I would need to expand upon the notion of quantum fluff, a theoretical concept where subatomic particles coalesce to form transient multidimensional spaces. Yet, this abstraction holds no real-world application or comprehensible meaning, reflecting a cosmic puzzle.",
      metadata: { b: 1, c: 10, stuff: "right" },
    },
    {
      pageContent:
        "This is a long text, but it actually means something because vector database does not understand Lorem Ipsum. So I would need to proceed by discussing the echo of virtual tweets in the binary corridors of the digital universe. Each tweet, like a pixelated canary, hums in an unseen frequency, a fascinatingly perplexing phenomenon that, while conjuring vivid imagery, lacks any concrete implication or real-world relevance, portraying a paradox of multidimensional spaces in the age of cyber folklore.",
      metadata: { b: 2, c: 9, stuff: "right" },
    },
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } },
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } },
    { pageContent: "hi", metadata: { b: 2, c: 8, stuff: "right" } },
    { pageContent: "bye", metadata: { b: 3, c: 7, stuff: "right" } },
    { pageContent: "what's this", metadata: { b: 4, c: 6, stuff: "right" } },
  ];

  // Also supports an additional {ids: []} parameter for upsertion
  await store.addDocuments(docs);

  const funcFilterA: SupabaseFilterRPCCall = (rpc) =>
    rpc
      .filter("metadata->b::int", "lt", 3)
      .filter("metadata->c::int", "gt", 7)
      .textSearch("content", `'multidimensional' & 'spaces'`, {
        config: "english",
      });

  const resultA = await store.similaritySearch("quantum", 4, funcFilterA);

  const funcFilterB: SupabaseFilterRPCCall = (rpc) =>
    rpc
      .filter("metadata->b::int", "lt", 3)
      .filter("metadata->c::int", "gt", 7)
      .filter("metadata->>stuff", "eq", "right");

  const resultB = await store.similaritySearch("hello", 2, funcFilterB);

  console.log(resultA, resultB);
};

API Reference: SupabaseFilterRPCCall from langchain/vectorstores/supabase, SupabaseVectorStore from langchain/vectorstores/supabase, OpenAIEmbeddings from langchain/embeddings/openai

Document deletion

import { SupabaseVectorStore } from "langchain/vectorstores/supabase";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase

const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);

const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);

  const embeddings = new OpenAIEmbeddings();

  const store = new SupabaseVectorStore(embeddings, {
    client,
    tableName: "documents",
  });

  const docs = [
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } },
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } },
  ];

  // Also takes an additional {ids: []} parameter for upsertion
  const ids = await store.addDocuments(docs);

  const resultA = await store.similaritySearch("hello", 2);
  console.log(resultA);
  /*
    [
      Document { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } },
      Document { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } },
    ]
  */

  await store.delete({ ids });

  const resultB = await store.similaritySearch("hello", 2);
  console.log(resultB);
  /* [] */
};

API Reference: SupabaseVectorStore from langchain/vectorstores/supabase, OpenAIEmbeddings from langchain/embeddings/openai
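A note on the score these examples rank by: match_documents returns 1 - (documents.embedding <=> query_embedding), where <=> is pgvector's cosine-distance operator. A minimal TypeScript sketch of that arithmetic (helper names are illustrative; assumes equal-length, non-zero vectors):

```typescript
// Cosine distance as computed by pgvector's <=> operator, and the
// similarity score (1 - distance) that match_documents returns.
// Assumes equal-length, non-zero vectors.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function cosineDistance(a: number[], b: number[]): number {
  const norm = (v: number[]) => Math.sqrt(dot(v, v));
  return 1 - dot(a, b) / (norm(a) * norm(b));
}

const similarity = (a: number[], b: number[]) => 1 - cosineDistance(a, b);

console.log(similarity([1, 0], [1, 0])); // 1 (same direction)
console.log(similarity([1, 0], [0, 1])); // 0 (orthogonal)
```

Because cosine distance depends only on direction, scaling a vector does not change the score, which is why `order by documents.embedding <=> query_embedding` ranks the closest embeddings first.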
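The query-builder filters shown above ultimately translate into lt/gt/eq comparisons on metadata values. As a rough illustration of what a filter such as funcFilterB selects, here is a toy in-memory evaluator; the `applyFilter` helper is hypothetical, and the real filtering is performed by Postgres via PostgREST, not in JavaScript.

```typescript
// Toy in-memory model of the PostgREST-style filters used above.
// "metadata->b::int" reads key "b" as a number; "metadata->>stuff" reads
// key "stuff" as text. This only mimics the semantics for illustration.
type Meta = Record<string, number | string>;
type Op = "lt" | "gt" | "eq";

function applyFilter(meta: Meta, column: string, op: Op, value: number | string): boolean {
  // Extract the metadata key from e.g. "metadata->b::int" or "metadata->>stuff".
  const key = column.replace(/^metadata->>?/, "").replace(/::\w+$/, "");
  const actual = meta[key];
  if (op === "eq") return actual === value;
  if (typeof actual === "number" && typeof value === "number") {
    return op === "lt" ? actual < value : actual > value;
  }
  return false;
}

const metas: Meta[] = [
  { b: 1, c: 9, stuff: "right" },
  { b: 1, c: 9, stuff: "wrong" },
  { b: 2, c: 8, stuff: "right" },
  { b: 3, c: 7, stuff: "right" },
];

// Mirror of funcFilterB: b < 3, c > 7, stuff = "right"
const selected = metas.filter(
  (m) =>
    applyFilter(m, "metadata->b::int", "lt", 3) &&
    applyFilter(m, "metadata->c::int", "gt", 7) &&
    applyFilter(m, "metadata->>stuff", "eq", "right")
);
console.log(selected.length); // 2
```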
npmYarnpnpmnpm install -S @supabase/supabase-jsyarn add @supabase/supabase-jspnpm add @supabase/supabase-js
npm install -S @supabase/supabase-jsyarn add @supabase/supabase-jspnpm add @supabase/supabase-js
npm install -S @supabase/supabase-js
yarn add @supabase/supabase-js
pnpm add @supabase/supabase-js
Run this in your database:
-- Enable the pgvector extension to work with embedding vectorscreate extension vector;-- Create a table to store your documentscreate table documents ( id bigserial primary key, content text, -- corresponds to Document.pageContent metadata jsonb, -- corresponds to Document.metadata embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed);-- Create a function to search for documentscreate function match_documents ( query_embedding vector(1536), match_count int DEFAULT null, filter jsonb DEFAULT '{}') returns table ( id bigint, content text, metadata jsonb, similarity float)language plpgsqlas $$#variable_conflict use_columnbegin return query select id, content, metadata, 1 - (documents.embedding <=> query_embedding) as similarity from documents where metadata @> filter order by documents.embedding <=> query_embedding limit match_count;end;$$;
The below example shows how to perform a basic similarity search with Supabase:
|
4e9727215e95-1685
|
The below example shows how to perform a basic similarity search with Supabase:
import { SupabaseVectorStore } from "langchain/vectorstores/supabase";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { createClient } from "@supabase/supabase-js";// First, follow set-up instructions at// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabaseconst privateKey = process.env.SUPABASE_PRIVATE_KEY;if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);const url = process.env.SUPABASE_URL;if (!url) throw new Error(`Expected env var SUPABASE_URL`);export const run = async () => { const client = createClient(url, privateKey); const vectorStore = await SupabaseVectorStore.fromTexts( ["Hello world", "Bye bye", "What's this? "], [{ id: 2 }, { id: 1 }, { id: 3 }], new OpenAIEmbeddings(), { client, tableName: "documents", queryName: "match_documents", } ); const resultOne = await vectorStore.similaritySearch("Hello world", 1); console.log(resultOne);};
API Reference:SupabaseVectorStore from langchain/vectorstores/supabaseOpenAIEmbeddings from langchain/embeddings/openai
Given the above match_documents Postgres function, you can also pass a filter parameter to only documents with a specific metadata field value. This filter parameter is a JSON object, and the match_documents function will use the Postgres JSONB Containment operator @> to filter documents by the metadata field values you specify. See details on the Postgres JSONB Containment operator for more information.
|
4e9727215e95-1686
|
Note: If you've previously been using SupabaseVectorStore, you may need to drop and recreate the match_documents function per the updated SQL above to use this functionality.
import { SupabaseVectorStore } from "langchain/vectorstores/supabase";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { createClient } from "@supabase/supabase-js";// First, follow set-up instructions at// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabaseconst privateKey = process.env.SUPABASE_PRIVATE_KEY;if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);const url = process.env.SUPABASE_URL;if (!url) throw new Error(`Expected env var SUPABASE_URL`);export const run = async () => { const client = createClient(url, privateKey); const vectorStore = await SupabaseVectorStore.fromTexts( ["Hello world", "Hello world", "Hello world"], [{ user_id: 2 }, { user_id: 1 }, { user_id: 3 }], new OpenAIEmbeddings(), { client, tableName: "documents", queryName: "match_documents", } ); const result = await vectorStore.similaritySearch("Hello world", 1, { user_id: 3, }); console.log(result);};
You can also use query builder-style filtering similar to how the Supabase JavaScript library works instead of passing an object. Note that since most of the filter properties are in the metadata column, you need to use arrow operators (-> for integer or ->> for text) as defined in Postgrest API documentation and specify the data type of the property (e.g. the column should look something like metadata->some_int_value::int).
|
4e9727215e95-1687
|
import { SupabaseFilterRPCCall, SupabaseVectorStore,} from "langchain/vectorstores/supabase";import { OpenAIEmbeddings } from "langchain/embeddings/openai";import { createClient } from "@supabase/supabase-js";// First, follow set-up instructions at// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabaseconst privateKey = process.env.SUPABASE_PRIVATE_KEY;if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);const url = process.env.SUPABASE_URL;if (!url) throw new Error(`Expected env var SUPABASE_URL`);export const run = async () => { const client = createClient(url, privateKey); const embeddings = new OpenAIEmbeddings(); const store = new SupabaseVectorStore(embeddings, { client, tableName: "documents", }); const docs = [ { pageContent: "This is a long text, but it actually means something because vector database does not understand Lorem Ipsum. So I would need to expand upon the notion of quantum fluff, a theorectical concept where subatomic particles coalesce to form transient multidimensional spaces. Yet, this abstraction holds no real-world application or comprehensible meaning, reflecting a cosmic puzzle. ", metadata: { b: 1, c: 10, stuff: "right" }, }, { pageContent: "This is a long text, but it actually means something because vector database does not understand Lorem Ipsum.
|
4e9727215e95-1688
|
So I would need to proceed by discussing the echo of virtual tweets in the binary corridors of the digital universe. Each tweet, like a pixelated canary, hums in an unseen frequency, a fascinatingly perplexing phenomenon that, while conjuring vivid imagery, lacks any concrete implication or real-world relevance, portraying a paradox of multidimensional spaces in the age of cyber folklore.
|
4e9727215e95-1689
|
", metadata: { b: 2, c: 9, stuff: "right" }, }, { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } }, { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } }, { pageContent: "hi", metadata: { b: 2, c: 8, stuff: "right" } }, { pageContent: "bye", metadata: { b: 3, c: 7, stuff: "right" } }, { pageContent: "what's this", metadata: { b: 4, c: 6, stuff: "right" } }, ]; // Also supports an additional {ids: []} parameter for upsertion await store.addDocuments(docs); const funcFilterA: SupabaseFilterRPCCall = (rpc) => rpc .filter("metadata->b::int", "lt", 3) .filter("metadata->c::int", "gt", 7) .textSearch("content", `'multidimensional' & 'spaces'`, { config: "english", }); const resultA = await store.similaritySearch("quantum", 4, funcFilterA); const funcFilterB: SupabaseFilterRPCCall = (rpc) => rpc .filter("metadata->b::int", "lt", 3) .filter("metadata->c::int", "gt", 7) .filter("metadata->>stuff", "eq", "right"); const resultB = await store.similaritySearch("hello", 2, funcFilterB); console.log(resultA, resultB);};
API Reference: SupabaseFilterRPCCall from langchain/vectorstores/supabase, SupabaseVectorStore from langchain/vectorstores/supabase, OpenAIEmbeddings from langchain/embeddings/openai
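Both filters above are conjunctions over the JSONB metadata column: `funcFilterA` keeps rows with `b < 3` and `c > 7`, and `funcFilterB` additionally requires `stuff = 'right'`. As a sanity check, here is a self-contained sketch (not the Supabase API — just local predicates over the same sample metadata) showing which of the seven documents each conjunction would keep:

```typescript
// Local model of the two PostgREST-style filters above, applied to the same
// sample metadata. This is NOT the Supabase API -- just a self-contained
// sketch of which documents each conjunction of conditions keeps.
type Meta = { b: number; c: number; stuff: string };

const metas: Meta[] = [
  { b: 1, c: 10, stuff: "right" },
  { b: 2, c: 9, stuff: "right" },
  { b: 1, c: 9, stuff: "right" },
  { b: 1, c: 9, stuff: "wrong" },
  { b: 2, c: 8, stuff: "right" },
  { b: 3, c: 7, stuff: "right" },
  { b: 4, c: 6, stuff: "right" },
];

// funcFilterA: metadata->b::int < 3 AND metadata->c::int > 7
const passesA = (m: Meta) => m.b < 3 && m.c > 7;

// funcFilterB: additionally metadata->>stuff = 'right'
const passesB = (m: Meta) => passesA(m) && m.stuff === "right";

console.log(metas.filter(passesA).length); // 5 documents survive filterA
console.log(metas.filter(passesB).length); // 4 survive filterB
```

Note that `->` extracts a JSON value (hence the `::int` cast for numeric comparison), while `->>` extracts text, which is why the `stuff` comparison uses `->>` with a string.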
import { SupabaseVectorStore } from "langchain/vectorstores/supabase";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase

const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);

const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);
  const embeddings = new OpenAIEmbeddings();
  const store = new SupabaseVectorStore(embeddings, {
    client,
    tableName: "documents",
  });

  const docs = [
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } },
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } },
  ];

  // Also takes an additional { ids: [] } parameter for upsertion
  const ids = await store.addDocuments(docs);

  const resultA = await store.similaritySearch("hello", 2);
  console.log(resultA);
  /*
    [
      Document { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } },
      Document { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } },
    ]
  */

  await store.delete({ ids });

  const resultB = await store.similaritySearch("hello", 2);
  console.log(resultB);
  /* [] */
};
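The key contract in the deletion example is that addDocuments returns the ids of the rows it created, and delete({ ids }) removes exactly those rows, so a repeat search comes back empty. A minimal in-memory sketch of that contract (a hypothetical stand-in class, not the SupabaseVectorStore implementation, with exact-match lookup standing in for vector search):

```typescript
// Hypothetical in-memory model of addDocuments -> delete({ ids }) semantics.
type Doc = { pageContent: string; metadata: Record<string, unknown> };

class TinyStore {
  private rows = new Map<number, Doc>();
  private nextId = 1;

  // Returns the ids assigned to the inserted documents.
  addDocuments(docs: Doc[]): number[] {
    return docs.map((d) => {
      const id = this.nextId++;
      this.rows.set(id, d);
      return id;
    });
  }

  // Removes exactly the rows with the given ids.
  delete({ ids }: { ids: number[] }): void {
    for (const id of ids) this.rows.delete(id);
  }

  // Stand-in for vector search: exact match on pageContent.
  similaritySearch(query: string, k: number): Doc[] {
    return [...this.rows.values()]
      .filter((d) => d.pageContent === query)
      .slice(0, k);
  }
}

const store = new TinyStore();
const ids = store.addDocuments([
  { pageContent: "hello", metadata: { b: 1 } },
  { pageContent: "hello", metadata: { b: 2 } },
]);
console.log(store.similaritySearch("hello", 2).length); // 2
store.delete({ ids });
console.log(store.similaritySearch("hello", 2).length); // 0
```

Holding on to the returned ids is what makes targeted deletion possible later; if you discard them, you would have to delete by filter instead.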
Tigris
Setup | Install the library with | Create a table and search function in your database | Usage | Standard Usage | Metadata Filtering | Metadata Query Builder Filtering | Document deletion
Page Title: Tigris | 🦜️🔗 Langchain
Paragraphs:
Tigris makes it easy to build AI applications with vector embeddings.
It is a fully managed cloud-native database that allows you to store and
index documents and vector embeddings for fast and scalable vector search.

Compatibility: Only available on Node.js.

Setup

1. Install the Tigris SDK

Install the SDK as follows:

npm install -S @tigrisdata/vector
yarn add @tigrisdata/vector
pnpm add @tigrisdata/vector

2. Fetch Tigris API credentials

You can sign up for a free Tigris account here. Once you have signed up for the Tigris account, create a new project called vectordemo.
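Since the Tigris examples below read TIGRIS_PROJECT, TIGRIS_CLIENT_ID, and TIGRIS_CLIENT_SECRET from the environment, it can help to fail fast when one is missing, following the pattern of the Supabase examples above. A small hypothetical helper (not part of the Tigris SDK):

```typescript
// Hypothetical helper: throw a clear error when an env var is absent,
// mirroring the `if (!privateKey) throw ...` checks in the Supabase examples.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Expected env var ${name}`);
  return value;
}

// Demonstrate the failure path with a deliberately unset variable name:
let message = "";
try {
  requireEnv("TIGRIS_CLIENT_ID_DEMO_UNSET");
} catch (e) {
  message = (e as Error).message;
}
console.log(message); // "Expected env var TIGRIS_CLIENT_ID_DEMO_UNSET"
```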
Next, make a note of the clientId and clientSecret, which you can get from the
Application Keys section of the project.

Index docs

import { VectorDocumentStore } from "@tigrisdata/vector";
import { Document } from "langchain/document";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { TigrisVectorStore } from "langchain/vectorstores/tigris";

const index = new VectorDocumentStore({
  connection: {
    serverUrl: "api.preview.tigrisdata.cloud",
    projectName: process.env.TIGRIS_PROJECT,
    clientId: process.env.TIGRIS_CLIENT_ID,
    clientSecret: process.env.TIGRIS_CLIENT_SECRET,
  },
  indexName: "examples_index",
  numDimensions: 1536, // match the OpenAI embedding size
});

const docs = [
  new Document({
    metadata: { foo: "bar" },
    pageContent: "tigris is a cloud-native vector db",
  }),
  new Document({
    metadata: { foo: "bar" },
    pageContent: "the quick brown fox jumped over the lazy dog",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "lorem ipsum dolor sit amet",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "tigris is a river",
  }),
];

await TigrisVectorStore.fromDocuments(docs, new OpenAIEmbeddings(), { index });

API Reference: Document from langchain/document, OpenAIEmbeddings from langchain/embeddings/openai, TigrisVectorStore from langchain/vectorstores/tigris

Query docs

import { VectorDocumentStore } from "@tigrisdata/vector";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { TigrisVectorStore } from "langchain/vectorstores/tigris";

const index = new VectorDocumentStore({
  connection: {
    serverUrl: "api.preview.tigrisdata.cloud",
    projectName: process.env.TIGRIS_PROJECT,
    clientId: process.env.TIGRIS_CLIENT_ID,
    clientSecret: process.env.TIGRIS_CLIENT_SECRET,
  },
  indexName: "examples_index",
  numDimensions: 1536, // match the OpenAI embedding size
});

const vectorStore = await TigrisVectorStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { index }
);

/* Search the vector DB independently with metadata filters */
const results = await vectorStore.similaritySearch("tigris", 1, {
  "metadata.foo": "bar",
});

console.log(JSON.stringify(results, null, 2));
/*
[
  Document {
    pageContent: 'tigris is a cloud-native vector db',
    metadata: { foo: 'bar' }
  }
]
*/

API Reference: OpenAIEmbeddings from langchain/embeddings/openai, TigrisVectorStore from langchain/vectorstores/tigris

Previous: Supabase | Next: TypeORM
Copyright © 2023 LangChain, Inc.
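The Tigris similaritySearch call above takes a metadata filter keyed by dotted paths: a document matches when the value at each path equals the filter value, which is why only the `{ foo: "bar" }` document comes back. A self-contained sketch of that matching rule (illustrative only — not the Tigris SDK):

```typescript
// Illustrative model of dotted-path metadata filtering, NOT the Tigris SDK.
type Filter = Record<string, unknown>;

// Walk a dotted path like "metadata.foo" down into a nested object.
const getPath = (obj: unknown, path: string): unknown =>
  path.split(".").reduce<unknown>(
    (acc, key) =>
      acc && typeof acc === "object"
        ? (acc as Record<string, unknown>)[key]
        : undefined,
    obj
  );

// A document matches when every path in the filter equals its value.
const matches = (doc: object, filter: Filter): boolean =>
  Object.entries(filter).every(([path, value]) => getPath(doc, path) === value);

const docs = [
  {
    pageContent: "tigris is a cloud-native vector db",
    metadata: { foo: "bar" },
  },
  { pageContent: "tigris is a river", metadata: { baz: "qux" } },
];

const hits = docs.filter((d) => matches(d, { "metadata.foo": "bar" }));
console.log(hits.length); // 1
console.log(hits[0].pageContent); // "tigris is a cloud-native vector db"
```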