Create a table; again, you can name it anything, but we will use vectors. Add the following columns via the UI:

- content of type "Long text". This is used to store the Document.pageContent values.
- embedding of type "Vector". Use the dimension used by the model you plan to use (1536 for OpenAI).
- any other columns you want to…
See the Xata getting started docs for more details on using the Xata JavaScript/TypeScript SDK.

Usage

Example: Q&A chatbot using OpenAI and Xata as vector store

This example uses the VectorDBQAChain to search the documents stored in Xata and then pass them as context to the OpenAI model, in order to answer the question…
```typescript
  …not set");
  }
  const xata = new BaseClient({
    databaseURL: process.env.XATA_DB_URL,
    apiKey: process.env.XATA_API_KEY,
    branch: process.env.XATA_BRANCH || "main",
  });
  return xata;
};

export async function run() {
  const client = getXataClient();
  const table = "vectors";
  const embeddings = new OpenAIEmbeddings()…
```
Before running it, make sure to add an author column of type String to the vectors table in Xata.

```typescript
import { XataVectorSearch } from "langchain/vectorstores/xata";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { BaseClient } from "@xata.io/client";
import { Document } from "langchain/document";
// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/data_connection/vectors…
```
```typescript
  …{ client, table });

  // Add documents
  const docs = [
    new Document({
      pageContent: "Xata works great with Langchain.js",
      metadata: { author: "Xata" },
    }),
    new Document({
      pageContent: "Xata works great with Langchain",
      metadata: { author: "Langchain" },
    }),
    new Document({ pageConte…
```
Xata is a serverless data platform, based on PostgreSQL. It provides a type-safe TypeScript/JavaScript SDK for interacting with your database, and a UI for managing your data. Xata has a native vector type, which can be added to any table, and supports similarity search. LangChain inserts vectors directly to Xata, a…
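Conceptually, the similarity search a vector column supports can be sketched in a few lines. This is a toy, self-contained illustration of the idea, not Xata's actual query engine; the rows and the 2-dimensional "embeddings" are made up (real models use e.g. 1536 dimensions):

```typescript
// Minimal sketch of similarity search over a vector column:
// rank rows by cosine similarity between their embedding and the query vector.
type Row = { content: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function similaritySearch(rows: Row[], query: number[], k: number): Row[] {
  return [...rows]
    .sort(
      (r1, r2) =>
        cosineSimilarity(r2.embedding, query) -
        cosineSimilarity(r1.embedding, query)
    )
    .slice(0, k);
}

// Toy 2-dimensional "embeddings" for illustration only.
const rows: Row[] = [
  { content: "about databases", embedding: [1, 0] },
  { content: "about cooking", embedding: [0, 1] },
];
console.log(similaritySearch(rows, [0.9, 0.1], 1)[0].content); // "about databases"
```

A database with a native vector type runs this ranking server-side over the stored column, so the client never has to pull all rows down.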
In your project, run:

```shell
xata init
```

and then choose the database you created above. This will also generate a xata.ts or xata.js file that defines the client you can use to interact with the database. See the Xata getting started docs for more details on using the Xata JavaScript/TypeScript SDK.
Page Title: Zep | 🦜️🔗 Langchain
…for Retrieval Augmented Generation applications as it re-ranks results to ensure diversity in the returned documents.

Installation

Follow the Zep Quickstart Guide to install and get started with Zep.

Usage

You'll need your Zep API URL and optionally an API key to use the Zep VectorStore. See the Zep docs for more informa…
```typescript
    docs,
    embeddings,
    zepConfig
  );

  const results = await vectorStore.similaritySearchWithScore("bar", 3);
  console.log("Similarity Results:");
  console.log(JSON.stringify(results));

  const results2 = await vectorStore.maxMarginalRelevanceSearch("bar", {
    k: 3,
  });
  console.log("MMR Results:");
  console.log(JSON.…
```
```typescript
  …",
  }),
  new Document({
    metadata: { band: "Black Sabbath", album: "Paranoid", year: 1970 },
    pageContent:
      "Paranoid is Black Sabbath's second studio album and includes some of their most notable songs. ",
  }),
  new Document({
    metadata: { band: "Iron Maiden", album: "The Number of the Beast", …
```
```typescript
  …",
  }),
];

export const run = async () => {
  const collectionName = `collection${randomUUID().split("-")[0]}`;
  const zepConfig = {
    apiUrl: "http://localhost:8000", // this should be the URL of your Zep implementation
    collectionName,
    embeddingDimensions: 1536, // this must match the width of the embeddings you're using…
```
```typescript
  vectorStore
    .maxMarginalRelevanceSearch("sad music", {
      k: 3,
    })
    .then((results) => {
      console.log(`\n\nMMR Results:\n${JSON.stringify(results)}`);
    })
    .catch((e) => {
      if (e.name === "NotFoundError") {
        console.log("No results found");
      } else {
        throw e;
      }
    });
};
```

API Re…
```typescript
  const zepConfig = {
    apiUrl: "http://localhost:8000", // this should be the URL of your Zep implementation
    collectionName,
    embeddingDimensions: 1536, // this must match the width of the embeddings you're using
    isAutoEmbedded: false, // set to false to disable auto-embedding
  };
  const embeddings = new OpenAI…
```
…and searching your user's chat history.

Why Zep's VectorStore? 🤖🚀

Zep automatically embeds documents added to the Zep Vector Store using low-latency models local to the Zep server. The Zep TS/JS client can be used in non-Node edge environments. These two together with Zep's chat memory functionality make Zep ideal f…
You must also set your document collection to isAutoEmbedded === false. See the OpenAIEmbeddings example below.

Example: Creating a ZepVectorStore from Documents & Querying

```typescript
import { ZepVectorStore } from "langchain/vectorstores/zep";
import { FakeEmbeddings } from "langchain/embeddings/fake";
import { TextLoader } from "…
```
…make Zep ideal for building conversational LLM apps where latency and performance are important.

Supported Search Types

Zep supports both similarity search and Maximal Marginal Relevance (MMR) search. MMR search is particularly useful for Retrieval Augmented Generation applications as it re-ranks results to ensure dive…
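The MMR re-ranking idea can be sketched self-contained. This is the general algorithm, not Zep's server-side implementation; the vectors and the lambda value below are illustrative. Lambda trades off relevance against redundancy with already-selected results:

```typescript
// Maximal Marginal Relevance: iteratively pick the candidate that is
// relevant to the query but dissimilar to what has already been selected.
function dot(a: number[], b: number[]): number {
  return a.reduce((s, v, i) => s + v * b[i], 0);
}
function cosine(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

function mmr(
  query: number[],
  candidates: number[][],
  k: number,
  lambda = 0.5
): number[] {
  const selected: number[] = []; // indices into candidates
  while (selected.length < Math.min(k, candidates.length)) {
    let best = -1;
    let bestScore = -Infinity;
    for (let i = 0; i < candidates.length; i++) {
      if (selected.includes(i)) continue;
      const relevance = cosine(query, candidates[i]);
      // Redundancy: worst-case similarity to anything already picked.
      const redundancy = selected.length
        ? Math.max(...selected.map((j) => cosine(candidates[i], candidates[j])))
        : 0;
      const score = lambda * relevance - (1 - lambda) * redundancy;
      if (score > bestScore) {
        bestScore = score;
        best = i;
      }
    }
    selected.push(best);
  }
  return selected;
}

// Two near-duplicate highly relevant vectors plus one distinct vector.
// Plain similarity would return the two near-duplicates; with a low
// lambda, MMR skips the duplicate in favor of the diverse result.
const picks = mmr([1, 0], [[0.9, 0.1], [0.91, 0.09], [0.3, 0.95]], 2, 0.3);
console.log(picks); // [1, 2]
```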
```typescript
import { ZepVectorStore } from "langchain/vectorstores/zep";
import { FakeEmbeddings } from "langchain/embeddings/fake";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { randomUUID } from "crypto";

const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loa…
```
```typescript
import { ZepVectorStore } from "langchain/vectorstores/zep";
import { Document } from "langchain/document";
import { FakeEmbeddings } from "langchain/embeddings/fake";
import { randomUUID } from "crypto";

const docs = [
  new Document({
    metadata: { album: "Led Zeppelin IV", year: 1971 },
    pageContent: "Stairway to…
```
```typescript
  …",
  }),
  new Document({
    metadata: { band: "Metallica", album: "Master of Puppets", year: 1986 },
    pageContent: "Master of Puppets is widely regarded as Metallica's finest work. ",
  }),
  new Document({
    metadata: { band: "Megadeth", album: "Rust in Peace", year: 1990 },
    pageContent: "Rust in Peace is…
```
```typescript
      …(@.year == 1973)" }, // We should see a single result: The Rain Song
    })
    .then((results) => {
      console.log(`\n\nSimilarity Results:\n${JSON.stringify(results)}`);
    })
    .catch((e) => {
      if (e.name === "NotFoundError") {
        console.log("No results found");
      } else {
        throw e;
      }
    });
  …
```
```typescript
import { ZepVectorStore } from "langchain/vectorstores/zep";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { randomUUID } from "crypto";

const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await…
```
Page Title: Retrievers | 🦜️🔗 Langchain
We have chosen this as the example for getting started because it nicely combines a lot of different elements (text splitters, embeddings, vectorstores) and then also shows how to use them in a chain. Question answering over documents consists of four steps:

1. Create an index
2. Create a Retriever from that index
3. Create a quest…
This assumes you're using Node, but you can swap in another integration if necessary. First, install the required dependency:

```shell
npm install -S hnswlib-node
# or: yarn add hnswlib-node
# or: pnpm add hnswlib-node
```

You can download the state_of_the_union.txt file here.

```typescript
import { OpenAI } from "langchain/llms/openai";
import { Retrie…
```
```typescript
  …",
});
console.log({ res });
/*
{
  res: {
    text: 'The president said that Justice Breyer was an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court and thanked him for his service.'
  }
}
*/
```

API Reference: OpenAI from langchain/llms/openai, RetrievalQAChain from langchain/chains, HNSWLi…
…as the backbone of a retriever, but there are other types of retrievers as well.

Get started

The public API of the BaseRetriever class in LangChain.js is as follows:

```typescript
export abstract class BaseRetriever {
  abstract getRelevantDocuments(query: string): Promise<Document[]>;
}
```

It's that simple! You can call getRelevantDocument…
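As a self-contained illustration of that contract, here is a toy retriever. The abstract class is reproduced inline rather than imported from langchain, and the keyword scoring is a made-up stand-in for real embedding-based retrieval:

```typescript
// A minimal Document shape and the retriever contract, inlined for the sketch.
interface Document {
  pageContent: string;
  metadata: Record<string, unknown>;
}

abstract class BaseRetriever {
  abstract getRelevantDocuments(query: string): Promise<Document[]>;
}

// Ranks documents by how many query terms they contain -- purely illustrative.
class KeywordRetriever extends BaseRetriever {
  constructor(private docs: Document[]) {
    super();
  }

  async getRelevantDocuments(query: string): Promise<Document[]> {
    const terms = query.toLowerCase().split(/\s+/);
    return this.docs
      .map((doc) => ({
        doc,
        score: terms.filter((t) => doc.pageContent.toLowerCase().includes(t))
          .length,
      }))
      .filter(({ score }) => score > 0)
      .sort((a, b) => b.score - a.score)
      .map(({ doc }) => doc);
  }
}

const retriever = new KeywordRetriever([
  { pageContent: "Zep stores chat history", metadata: {} },
  { pageContent: "Xata is a serverless data platform", metadata: {} },
]);
retriever.getRelevantDocuments("serverless platform").then((docs) => {
  console.log(docs[0].pageContent); // "Xata is a serverless data platform"
});
```

Anything with this shape, however it ranks documents internally, can be dropped into chains that expect a retriever.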
…the specific retriever object you are calling.

Of course, we also help construct what we think useful Retrievers are. The main type of Retriever in LangChain is a vector store retriever. We will focus on that here.

Note: Before reading, it's important to understand what a vector store is.

This example showcases ques…
You can download the state_of_the_union.txt file here.

```typescript
import { OpenAI } from "langchain/llms/openai";
import { RetrievalQAChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RecursiveCharacterTextSplitter } fro…
```
Let's walk through what's happening here.

1. We first load a long text and split it into smaller documents using a text splitter.
2. We then load those documents (which also embeds the documents using the passed OpenAIEmbeddings instance) into HNSWLib, our vector store, creating our index.
3. Though we can query the vector s…
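The splitting step can be sketched self-contained. This is a simplified fixed-size splitter with overlap, not the separator-aware RecursiveCharacterTextSplitter; the chunk size and overlap values are illustrative:

```typescript
// Simplified character splitter: fixed-size windows with overlap, so that
// context spanning a chunk boundary is not lost entirely.
function splitText(text: string, chunkSize: number, overlap: number): string[] {
  const chunks: string[] = [];
  const step = chunkSize - overlap; // each window starts `step` chars later
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
  }
  return chunks;
}

const chunks = splitText("a".repeat(25), 10, 2);
console.log(chunks.length); // 3
console.log(chunks.map((c) => c.length)); // [ 10, 10, 9 ]
```

Each resulting chunk is embedded and stored; smaller chunks embed more precisely, while the overlap keeps sentences that straddle a boundary retrievable.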
“Compressing” here refers to both compressing the contents of an individual document and filtering out documents wholesale.

To use the Contextual Compression Retriever, you'll need:

- a base Retriever
- a Document Compressor

The Contextual Compression Retriever passes queries to the base Retriever, takes the initial documents…
The Document Compressor takes a list of Documents and shortens it by reducing the contents of Documents or dropping Documents altogether.

Get started

Here's an example of how this works:

```typescript
import * as fs from "fs";
import { OpenAI } from "langchain/llms/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_…
```
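A self-contained sketch of that compressor contract: a made-up, LLM-free stand-in that trims each document to its query-relevant sentences and drops documents left empty (real compressors such as LLMChainExtractor use an LLM for the trimming, and the sentence splitting here is deliberately naive):

```typescript
interface Document {
  pageContent: string;
  metadata: Record<string, unknown>;
}

// Compress by (1) reducing each document to the sentences that mention a
// query term, and (2) dropping documents with nothing relevant left.
function compressDocuments(docs: Document[], query: string): Document[] {
  const terms = query.toLowerCase().split(/\s+/);
  return docs
    .map((doc) => {
      const kept = doc.pageContent
        .split(". ")
        .filter((s) => terms.some((t) => s.toLowerCase().includes(t)));
      return { ...doc, pageContent: kept.join(". ") };
    })
    .filter((doc) => doc.pageContent.length > 0);
}

const compressed = compressDocuments(
  [
    {
      pageContent: "Ketanji Brown Jackson was nominated. The weather was mild",
      metadata: {},
    },
    { pageContent: "Nothing about the topic here", metadata: {} },
  ],
  "Jackson"
);
console.log(compressed.length); // 1
console.log(compressed[0].pageContent); // "Ketanji Brown Jackson was nominated"
```

The first document is shortened to its one relevant sentence; the second is dropped wholesale, matching the two kinds of "compression" described above.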
```typescript
  …",
});
console.log({ res });
```

API Reference: OpenAI from langchain/llms/openai, RecursiveCharacterTextSplitter from langchain/text_splitter, RetrievalQAChain from langchain/chains, HNSWLib from langchain/vectorstores/hnswlib, OpenAIEmbeddings from langchain/embeddings/openai, ContextualCompressionRetriever from langchain/retrievers/c…
This can either be the whole raw document OR a larger chunk.

Usage

```typescript
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { InMemoryDocstore } from "langchain/stores/doc/in_memory";
import { ParentDocumentRetriever } from "langchain/retrieve…
```
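The parent-document pattern behind this retriever can be sketched self-contained: small chunks are matched, but each matching chunk's parent document is returned, deduplicated. Keyword matching stands in for embedding search here, and all names and data are illustrative:

```typescript
interface Chunk {
  text: string;
  parentId: string;
}

// Search small chunks, but return each matching chunk's parent document,
// deduplicated -- the core of parent-document retrieval.
function retrieveParents(
  chunks: Chunk[],
  parents: Map<string, string>,
  query: string
): string[] {
  const hits = chunks.filter((c) =>
    c.text.toLowerCase().includes(query.toLowerCase())
  );
  const seen = new Set<string>();
  const results: string[] = [];
  for (const hit of hits) {
    if (!seen.has(hit.parentId)) {
      seen.add(hit.parentId);
      results.push(parents.get(hit.parentId)!);
    }
  }
  return results;
}

const parents = new Map([
  ["doc1", "Full text of document one, which mentions vectors twice."],
]);
const chunks: Chunk[] = [
  { text: "mentions vectors", parentId: "doc1" },
  { text: "vectors twice", parentId: "doc1" },
];
// Two chunks match, but the parent is returned only once.
console.log(retrieveParents(chunks, parents, "vectors").length); // 1
```

Small chunks keep the embeddings precise, while returning the parent gives the LLM the surrounding context it needs to answer well.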