Structured output parser

This output parser can be used when you want to return multiple fields. If you want a complex schema returned (i.e. a JSON object with arrays of strings), use the Zod schema detailed below.

````typescript
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { StructuredOutputParser } from "langchain/output_parsers";

// With a `StructuredOutputParser` we can define a schema for the output.
const parser = StructuredOutputParser.fromNamesAndDescriptions({
  answer: "answer to the user's question",
  source: "source used to answer the user's question, should be a website.",
});

const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template:
    "Answer the users question as best as possible.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  question: "What is the capital of France?",
});
const response = await model.call(input);

console.log(input);
/*
Answer the users question as best as possible.
You must format your output as a JSON value that adheres to a given "JSON Schema" instance.

"JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.

For example, the example "JSON Schema" instance {{"properties": {{"foo": {{"description": "a list of test words", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
would match an object with one required property, "foo". The "type" property specifies "foo" must be an "array", and the "description" property semantically describes it as "a list of test words". The items within "foo" must be strings.
Thus, the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of this example "JSON Schema". The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Your output will be parsed and type-checked according to the provided schema instance, so make sure all fields in your output match the schema exactly and there are no trailing commas!

Here is the JSON Schema instance your output must adhere to. Include the enclosing markdown codeblock:
```json
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website."}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```

What is the capital of France?
*/

console.log(response);
// {"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}

console.log(await parser.parse(response));
// { answer: 'Paris', source: 'https://en.wikipedia.org/wiki/Paris' }
````

API Reference: OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts, StructuredOutputParser from langchain/output_parsers

Structured Output Parser with Zod Schema

This output parser can also be used when you want to define the output schema using Zod, a TypeScript validation library. The Zod schema passed in needs to be parseable from a JSON string, so e.g. z.date() is not allowed.

````typescript
import { z } from "zod";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { StructuredOutputParser } from "langchain/output_parsers";

// We can use zod to define a schema for the output using the
// `fromZodSchema` method of `StructuredOutputParser`.
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z.string().describe("answer to the user's question"),
    sources: z
      .array(z.string())
      .describe("sources used to answer the question, should be websites."),
  })
);

const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template:
    "Answer the users question as best as possible.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  question: "What is the capital of France?",
});
const response = await model.call(input);

console.log(input);
/*
Answer the users question as best as possible.
The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Here is the output schema:
```
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"sources":{"type":"array","items":{"type":"string"},"description":"sources used to answer the question, should be websites."}},"required":["answer","sources"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```

What is the capital of France?
*/

console.log(response);
// {"answer": "Paris", "sources": ["https://en.wikipedia.org/wiki/Paris"]}

console.log(await parser.parse(response));
// { answer: 'Paris', sources: [ 'https://en.wikipedia.org/wiki/Paris' ] }
````

API Reference: OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts, StructuredOutputParser from langchain/output_parsers
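Conceptually, `parser.parse(response)` has to pull the JSON payload out of the model's reply, which may or may not be wrapped in the markdown code block the format instructions request, before validating it. A minimal sketch of that extraction step, using a hypothetical helper rather than the library's actual implementation:

```typescript
// Extract a JSON object from model output that may wrap it in a
// markdown code fence, then parse it. Illustrative only: the real
// parser also validates the result against the schema.
function extractJson(text: string): Record<string, unknown> {
  // Prefer the contents of a ```json ... ``` fence if present.
  const fenced = text.match(/```(?:json)?\s*([\s\S]*?)\s*```/);
  const payload = fenced ? fenced[1] : text.trim();
  return JSON.parse(payload);
}

// Both fenced and bare responses parse to the same object.
const fromFence = extractJson('```json\n{"answer": "Paris"}\n```');
const bare = extractJson('{"answer": "Paris"}');
console.log(fromFence.answer, bare.answer); // → Paris Paris
```

This is also why schema-aware parsing is more robust than a bare `JSON.parse`: models frequently add fences or surrounding prose around an otherwise valid payload.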
Page Title: Data connection | 🦜️🔗 Langchain

Many LLM applications require user-specific data that is not part of the model's training set. LangChain gives you the building blocks to load, transform, store and query your data via:

- Document loaders: Load documents from many different sources
- Document transformers: Split documents, drop redundant documents, and more
- Text embedding models: Take unstructured text and turn it into a list of floating point numbers
- Vector stores: Store and search over embedded data
- Retrievers: Query your data
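These building blocks compose into one flow: load documents, embed them, store the vectors, and retrieve the closest matches for a query. The toy sketch below uses a made-up bag-of-words "embedding" and an in-memory store purely to show the shape of that pipeline; a real application would use LangChain's loaders, embedding models, and vector stores instead, and every name here is invented for illustration:

```typescript
interface Doc { pageContent: string; metadata: Record<string, any> }

// Toy "embedding": term frequency over a tiny fixed vocabulary.
// A real pipeline would call a text embedding model here.
const vocab = ["paris", "france", "capital", "cheese"];
const embed = (text: string): number[] =>
  vocab.map((w) => text.toLowerCase().split(/\W+/).filter((t) => t === w).length);

// Cosine similarity between two vectors.
const cosine = (a: number[], b: number[]): number => {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b) || 1);
};

// Toy "vector store": embed each document once, retrieve by similarity.
class ToyVectorStore {
  private entries: { doc: Doc; vec: number[] }[] = [];
  add(docs: Doc[]) {
    for (const doc of docs) this.entries.push({ doc, vec: embed(doc.pageContent) });
  }
  retrieve(query: string, k = 1): Doc[] {
    const qv = embed(query);
    return [...this.entries]
      .sort((a, b) => cosine(b.vec, qv) - cosine(a.vec, qv))
      .slice(0, k)
      .map((e) => e.doc);
  }
}

const store = new ToyVectorStore();
store.add([
  { pageContent: "Paris is the capital of France.", metadata: { source: "1" } },
  { pageContent: "France is famous for cheese.", metadata: { source: "2" } },
]);
console.log(store.retrieve("What is the capital of France?")[0].metadata.source); // → 1
```

The retriever step is just "embed the query, rank stored vectors by similarity"; everything else in the module list exists to get good documents and good vectors into that store.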
Page Title: Document loaders | 🦜️🔗 Langchain

Use document loaders to load data from a source as Documents. A Document is a piece of text and associated metadata. For example, there are document loaders for loading a simple .txt file, for loading the text contents of any web page, or even for loading a transcript of a YouTube video.

Document loaders expose a "load" method for loading data as documents from a configured source. They optionally implement a "lazy load" as well, for lazily loading data into memory.

Get started

The simplest loader reads in a file as text and places it all into one Document.

```typescript
import { TextLoader } from "langchain/document_loaders/fs/text";

const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();
```

API Reference: TextLoader from langchain/document_loaders/fs/text
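The load / lazy-load distinction can be sketched with a toy in-memory loader (a hypothetical class, not part of the library): `load` materializes every Document at once, while a lazy variant yields them one at a time so callers can start processing before everything is in memory:

```typescript
interface Doc { pageContent: string; metadata: Record<string, any> }

// Minimal in-memory "loader" sketch illustrating the contract above.
class StringLoader {
  constructor(private source: string, private chunks: string[]) {}

  // "load": materialize every document at once.
  async load(): Promise<Doc[]> {
    const docs: Doc[] = [];
    for await (const doc of this.lazyLoad()) docs.push(doc);
    return docs;
  }

  // "lazy load": yield documents one at a time.
  async *lazyLoad(): AsyncGenerator<Doc> {
    for (const [i, chunk] of this.chunks.entries()) {
      yield { pageContent: chunk, metadata: { source: this.source, chunk: i } };
    }
  }
}

const loader = new StringLoader("in-memory", ["first chunk", "second chunk"]);
const docs = await loader.load();
console.log(docs.length, docs[1].metadata.chunk); // → 2 1
```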
Page Title: Creating documents | 🦜️🔗 Langchain

A document at its core is fairly simple. It consists of a piece of text and optional metadata. The piece of text is what we pass to the language model, while the optional metadata is useful for keeping track of information about the document (such as its source).

```typescript
interface Document {
  pageContent: string;
  metadata: Record<string, any>;
}
```

You can create a document object rather easily in LangChain with:

```typescript
import { Document } from "langchain/document";

const doc = new Document({ pageContent: "foo" });
```

You can create one with metadata with:

```typescript
import { Document } from "langchain/document";

const doc = new Document({ pageContent: "foo", metadata: { source: "1" } });
```
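Because metadata is an arbitrary record, it is a natural place for provenance you can filter on later, before handing documents to a model. A small illustrative sketch (the field names and values are made up):

```typescript
interface Doc { pageContent: string; metadata: Record<string, any> }

const docs: Doc[] = [
  { pageContent: "intro", metadata: { source: "guide.txt", page: 1 } },
  { pageContent: "details", metadata: { source: "guide.txt", page: 2 } },
  { pageContent: "unrelated", metadata: { source: "other.txt", page: 1 } },
];

// Keep only documents that came from a particular source file.
const fromGuide = docs.filter((d) => d.metadata.source === "guide.txt");
console.log(fromGuide.map((d) => d.pageContent)); // → [ 'intro', 'details' ]
```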
Page Title: CSV | 🦜️🔗 Langchain

A comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record, and each record consists of one or more fields separated by commas.

Load CSV data with a single row per document.

Setup

```
npm install d3-dsv@2
```

Usage, extracting all columns

Example CSV file:

```
id,text
1,This is a sentence.
2,This is another sentence.
```

Example code:

```typescript
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new CSVLoader("src/document_loaders/example_data/example.csv");
const docs = await loader.load();
/*
[
  Document {
    metadata: { line: 1, source: "src/document_loaders/example_data/example.csv" },
    pageContent: "id: 1\ntext: This is a sentence.",
  },
  Document {
    metadata: { line: 2, source: "src/document_loaders/example_data/example.csv" },
    pageContent: "id: 2\ntext: This is another sentence.",
  },
]
*/
```

Usage, extracting a single column

Example code (same CSV file as above):

```typescript
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new CSVLoader(
  "src/document_loaders/example_data/example.csv",
  "text"
);
const docs = await loader.load();
/*
[
  Document {
    metadata: { line: 1, source: "src/document_loaders/example_data/example.csv" },
    pageContent: "This is a sentence.",
  },
  Document {
    metadata: { line: 2, source: "src/document_loaders/example_data/example.csv" },
    pageContent: "This is another sentence.",
  },
]
*/
```
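The loader's two modes boil down to: parse each row into a record, then either serialize all columns into the page content or keep a single column. A naive sketch of that mapping for quote-free CSV like the example above (a real loader delegates parsing to d3-dsv, and the helper name here is hypothetical):

```typescript
interface Doc { pageContent: string; metadata: Record<string, any> }

// Naive CSV-to-Document mapping; only handles simple, quote-free CSV.
function csvToDocs(csv: string, source: string, column?: string): Doc[] {
  const [headerLine, ...rows] = csv.trim().split("\n");
  const headers = headerLine.split(",");
  return rows.map((row, i) => {
    const cells = row.split(",");
    const record = Object.fromEntries(headers.map((h, j) => [h, cells[j]]));
    const pageContent = column
      ? record[column] // single-column mode
      : headers.map((h) => `${h}: ${record[h]}`).join("\n"); // all columns
    return { pageContent, metadata: { line: i + 1, source } };
  });
}

const csv = "id,text\n1,This is a sentence.\n2,This is another sentence.";
console.log(csvToDocs(csv, "example.csv", "text")[0].pageContent);
// → This is a sentence.
```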
Page Title: Custom document loaders | 🦜️🔗 Langchain

If you want to implement your own document loader, you have a few options.

Subclassing BaseDocumentLoader

You can extend the BaseDocumentLoader class directly. The BaseDocumentLoader class provides a few convenience methods for loading documents from a variety of sources.

```typescript
abstract class BaseDocumentLoader implements DocumentLoader {
  abstract load(): Promise<Document[]>;
}
```

Subclassing TextLoader

If you want to load documents from a text file, you can extend the TextLoader class. The TextLoader class takes care of reading the file, so all you have to do is implement a parse method.

```typescript
abstract class TextLoader extends BaseDocumentLoader {
  abstract parse(raw: string): Promise<string[]>;
}
```

Subclassing BufferLoader

If you want to load documents from a binary file, you can extend the BufferLoader class. The BufferLoader class takes care of reading the file, so all you have to do is implement a parse method.

```typescript
abstract class BufferLoader extends BaseDocumentLoader {
  abstract parse(
    raw: Buffer,
    metadata: Document["metadata"]
  ): Promise<Document[]>;
}
```
4e9727215e95-725 | Get startedIntroductionInstallationQuickstartModulesModel I/OData connectionDocument loadersHow-toCreating documentsCSVCustom document loadersFile DirectoryJSONPDFIntegrationsDocument transformersText embedding modelsVector storesRetrieversExperimentalCaching embeddingsChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesData connectionDocument loadersHow-toCustom document loadersOn this pageCustom document loadersIf you want to implement your own Document Loader, you have a few options.Subclassing BaseDocumentLoaderYou can extend the BaseDocumentLoader class directly. The BaseDocumentLoader class provides a few convenience methods for loading documents from a variety of sources.abstract class BaseDocumentLoader implements DocumentLoader { abstract load(): Promise<Document[]>;}Subclassing TextLoaderIf you want to load documents from a text file, you can extend the TextLoader class. The TextLoader class takes care of reading the file, so all you have to do is implement a parse method.abstract class TextLoader extends BaseDocumentLoader { abstract parse(raw: string): Promise<string[]>;}Subclassing BufferLoaderIf you want to load documents from a binary file, you can extend the BufferLoader class.
The BufferLoader class takes care of reading the file, so all you have to do is implement a parse method.abstract class BufferLoader extends BaseDocumentLoader { abstract parse( raw: Buffer, metadata: Document["metadata"] ): Promise<Document[]>;}PreviousCSVNextFile DirectorySubclassing BaseDocumentLoaderSubclassing TextLoaderSubclassing BufferLoader |
4e9727215e95-726 | ModulesData connectionDocument loadersHow-toCustom document loadersOn this pageCustom document loadersIf you want to implement your own Document Loader, you have a few options.Subclassing BaseDocumentLoaderYou can extend the BaseDocumentLoader class directly. The BaseDocumentLoader class provides a few convenience methods for loading documents from a variety of sources.abstract class BaseDocumentLoader implements DocumentLoader { abstract load(): Promise<Document[]>;}Subclassing TextLoaderIf you want to load documents from a text file, you can extend the TextLoader class. The TextLoader class takes care of reading the file, so all you have to do is implement a parse method.abstract class TextLoader extends BaseDocumentLoader { abstract parse(raw: string): Promise<string[]>;}Subclassing BufferLoaderIf you want to load documents from a binary file, you can extend the BufferLoader class. The BufferLoader class takes care of reading the file, so all you have to do is implement a parse method.abstract class BufferLoader extends BaseDocumentLoader { abstract parse( raw: Buffer, metadata: Document["metadata"] ): Promise<Document[]>;}PreviousCSVNextFile DirectorySubclassing BaseDocumentLoaderSubclassing TextLoaderSubclassing BufferLoader |
4e9727215e95-727 | ModulesData connectionDocument loadersHow-toCustom document loadersOn this pageCustom document loadersIf you want to implement your own Document Loader, you have a few options.Subclassing BaseDocumentLoaderYou can extend the BaseDocumentLoader class directly. The BaseDocumentLoader class provides a few convenience methods for loading documents from a variety of sources.abstract class BaseDocumentLoader implements DocumentLoader { abstract load(): Promise<Document[]>;}Subclassing TextLoaderIf you want to load documents from a text file, you can extend the TextLoader class. The TextLoader class takes care of reading the file, so all you have to do is implement a parse method.abstract class TextLoader extends BaseDocumentLoader { abstract parse(raw: string): Promise<string[]>;}Subclassing BufferLoaderIf you want to load documents from a binary file, you can extend the BufferLoader class. The BufferLoader class takes care of reading the file, so all you have to do is implement a parse method.abstract class BufferLoader extends BaseDocumentLoader { abstract parse( raw: Buffer, metadata: Document["metadata"] ): Promise<Document[]>;}PreviousCSVNextFile Directory |
4e9727215e95-728 | Custom document loadersIf you want to implement your own Document Loader, you have a few options.Subclassing BaseDocumentLoaderYou can extend the BaseDocumentLoader class directly. The BaseDocumentLoader class provides a few convenience methods for loading documents from a variety of sources.abstract class BaseDocumentLoader implements DocumentLoader { abstract load(): Promise<Document[]>;}Subclassing TextLoaderIf you want to load documents from a text file, you can extend the TextLoader class. The TextLoader class takes care of reading the file, so all you have to do is implement a parse method.abstract class TextLoader extends BaseDocumentLoader { abstract parse(raw: string): Promise<string[]>;}Subclassing BufferLoaderIf you want to load documents from a binary file, you can extend the BufferLoader class. The BufferLoader class takes care of reading the file, so all you have to do is implement a parse method.abstract class BufferLoader extends BaseDocumentLoader { abstract parse( raw: Buffer, metadata: Document["metadata"] ): Promise<Document[]>;}
If you want to implement your own Document Loader, you have a few options.
Subclassing BaseDocumentLoader

You can extend the BaseDocumentLoader class directly. The BaseDocumentLoader class provides a few convenience methods for loading documents from a variety of sources.
abstract class BaseDocumentLoader implements DocumentLoader {
  abstract load(): Promise<Document[]>;
}
Subclassing TextLoader

If you want to load documents from a text file, you can extend the TextLoader class. The TextLoader class takes care of reading the file, so all you have to do is implement a parse method.
abstract class TextLoader extends BaseDocumentLoader {
  abstract parse(raw: string): Promise<string[]>;
}
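To make the pattern concrete, here is a self-contained sketch of a TextLoader-style subclass. Everything here is simplified and hypothetical (the inlined Document stand-in, the TextLoaderSketch base, the LineLoader subclass); in a real project you would extend the actual TextLoader from "langchain/document_loaders/fs/text", which also reads the file from disk for you:

```typescript
// Self-contained sketch (runs without langchain installed).
interface Document {
  pageContent: string;
  metadata: Record<string, unknown>;
}

abstract class TextLoaderSketch {
  // The real TextLoader takes a file path and reads it for you; this
  // simplified version takes the raw text directly so the sketch is runnable.
  constructor(private raw: string) {}

  abstract parse(raw: string): Promise<string[]>;

  async load(): Promise<Document[]> {
    const texts = await this.parse(this.raw);
    return texts.map((pageContent, i) => ({
      pageContent,
      metadata: { chunk: i + 1 },
    }));
  }
}

// Hypothetical subclass: one document per non-empty line.
class LineLoader extends TextLoaderSketch {
  async parse(raw: string): Promise<string[]> {
    return raw.split("\n").filter((line) => line.trim().length > 0);
  }
}

new LineLoader("first line\n\nsecond line")
  .load()
  .then((docs) => console.log(docs.map((d) => d.pageContent)));
// → [ "first line", "second line" ]
```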
Subclassing BufferLoader

If you want to load documents from a binary file, you can extend the BufferLoader class. The BufferLoader class takes care of reading the file, so all you have to do is implement a parse method.
abstract class BufferLoader extends BaseDocumentLoader {
  abstract parse(
    raw: Buffer,
    metadata: Document["metadata"]
  ): Promise<Document[]>;
}
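Similarly, a sketch of the BufferLoader pattern. The Utf8Loader subclass and the simplified base class are hypothetical illustrations, not langchain's implementation; the real BufferLoader reads the file into a Buffer for you:

```typescript
// Self-contained sketch (runs without langchain installed).
interface Document {
  pageContent: string;
  metadata: Record<string, unknown>;
}

abstract class BufferLoaderSketch {
  // Simplified: accept the Buffer directly instead of a file path.
  constructor(
    private raw: Buffer,
    private metadata: Record<string, unknown>
  ) {}

  abstract parse(
    raw: Buffer,
    metadata: Record<string, unknown>
  ): Promise<Document[]>;

  load(): Promise<Document[]> {
    return this.parse(this.raw, this.metadata);
  }
}

// Hypothetical subclass: decode the whole buffer as UTF-8 into one document.
class Utf8Loader extends BufferLoaderSketch {
  async parse(
    raw: Buffer,
    metadata: Record<string, unknown>
  ): Promise<Document[]> {
    return [{ pageContent: raw.toString("utf8"), metadata }];
  }
}

new Utf8Loader(Buffer.from("hello, bytes"), { source: "in-memory" })
  .load()
  .then((docs) => console.log(docs[0].pageContent)); // → "hello, bytes"
```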
Page Title: File Directory | 🦜️🔗 Langchain
This covers how to load all documents in a directory.
The second argument is a map of file extensions to loader factories. Each file will be passed to the matching loader, and the resulting documents will be concatenated together.
Example folder:
src/document_loaders/example_data/example/
├── example.json
├── example.jsonl
├── example.txt
└── example.csv
Example code:

import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import {
  JSONLoader,
  JSONLinesLoader,
} from "langchain/document_loaders/fs/json";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new DirectoryLoader(
  "src/document_loaders/example_data/example",
  {
    ".json": (path) => new JSONLoader(path, "/texts"),
    ".jsonl": (path) => new JSONLinesLoader(path, "/html"),
    ".txt": (path) => new TextLoader(path),
    ".csv": (path) => new CSVLoader(path, "text"),
  }
);

const docs = await loader.load();
console.log({ docs });
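The extension-to-factory map above can be pictured as a small dispatch table. Below is a minimal self-contained sketch of the idea, with hypothetical simplified types rather than the library's actual DirectoryLoader:

```typescript
// Sketch of extension-to-factory dispatch: each extension maps to a factory
// that builds a loader for a given path (hypothetical simplified types).
type Loader = { load(): Promise<string[]> };
type LoaderFactory = (path: string) => Loader;

const factories: Record<string, LoaderFactory> = {
  ".txt": (path) => ({ load: async () => [`text from ${path}`] }),
  ".csv": (path) => ({ load: async () => [`rows from ${path}`] }),
};

function loaderFor(path: string): Loader {
  const ext = path.slice(path.lastIndexOf("."));
  const factory: LoaderFactory | undefined = factories[ext];
  if (!factory) throw new Error(`No loader registered for ${ext}`);
  return factory(path);
}

loaderFor("example/notes.txt")
  .load()
  .then((docs) => console.log(docs)); // → [ "text from example/notes.txt" ]
```

A real DirectoryLoader additionally walks the directory and concatenates each loader's documents; the lookup step itself is this simple.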
Page Title: JSON | 🦜️🔗 Langchain
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
JSON Lines is a file format where each line is a valid JSON value.
The JSON loader uses JSON pointer to select the keys in your JSON files that you want to extract.
No JSON pointer example

The simplest way to use the loader is to specify no JSON pointer; it will then load all strings it finds in the JSON object.
Example JSON file:
{ "texts": ["This is a sentence. ", "This is another sentence."]} |
4e9727215e95-743 | { "texts": ["This is a sentence. ", "This is another sentence."]}
Example code:

import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader("src/document_loaders/example_data/example.json");

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 1,
      "source": "blob",
    },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 2,
      "source": "blob",
    },
    "pageContent": "This is another sentence.",
  },
]
*/
Using JSON pointer example

For more advanced scenarios, you can choose which keys in your JSON object to extract strings from.
In this example, we want to only extract information from "from" and "surname" entries.
{ "1": { "body": "BD 2023 SUMMER", "from": "LinkedIn Job", "labels": ["IMPORTANT", "CATEGORY_UPDATES", "INBOX"] }, "2": { "body": "Intern, Treasury and other roles are available", "from": "LinkedIn Job2", "labels": ["IMPORTANT"], "other": { "name": "plop", "surname": "bob" } }} |
Example code:

import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader(
  "src/document_loaders/example_data/example.json",
  ["/from", "/surname"]
);

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 1,
      "source": "blob",
    },
    "pageContent": "BD 2023 SUMMER",
  },
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 2,
      "source": "blob",
    },
    "pageContent": "LinkedIn Job",
  },
  ...
]
*/
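As a rough illustration of pointer-based extraction, the sketch below walks a JSON value and collects string leaves whose key matches a requested segment. This only approximates the idea; JSONLoader's actual JSON-pointer matching rules differ:

```typescript
// Rough sketch: recursively walk a JSON value and collect string leaves
// whose key is in the requested set. Hypothetical helper, not the library.
function extractStrings(
  value: unknown,
  keys: Set<string>,
  out: string[] = []
): string[] {
  if (value !== null && typeof value === "object") {
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      if (typeof v === "string" && keys.has(k)) out.push(v);
      else extractStrings(v, keys, out);
    }
  }
  return out;
}

const data = {
  "1": { body: "BD 2023 SUMMER", from: "LinkedIn Job" },
  "2": { from: "LinkedIn Job2", other: { surname: "bob" } },
};
console.log(extractStrings(data, new Set(["from", "surname"])));
// → [ "LinkedIn Job", "LinkedIn Job2", "bob" ]
```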
Page Title: PDF | 🦜️🔗 Langchain
Portable Document Format (PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems.
This covers how to load PDF documents into the Document format that we use downstream.
By default, one document will be created for each page in the PDF file. You can change this behavior by setting the splitPages option to false.
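To make the splitPages behavior concrete, here is a minimal sketch of how extracted page texts map to documents. This illustrates the documented behavior only and is not the PDFLoader source; the Document shape is simplified for the example.

```typescript
// Sketch of the documented splitPages behavior (not the PDFLoader source).
// The Document shape here is simplified for illustration.
interface Doc {
  pageContent: string;
  metadata: { source: string; page?: number };
}

function toDocuments(pages: string[], source: string, splitPages = true): Doc[] {
  if (splitPages) {
    // default: one Document per page, tagged with a 1-based page number
    return pages.map((text, i) => ({
      pageContent: text,
      metadata: { source, page: i + 1 },
    }));
  }
  // splitPages: false -> a single Document for the whole file
  return [{ pageContent: pages.join("\n\n"), metadata: { source } }];
}

const pages = ["page one text", "page two text"];
console.log(toDocuments(pages, "example.pdf").length); // 2
console.log(toDocuments(pages, "example.pdf", false).length); // 1
```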
npm install pdf-parse
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf");

const docs = await loader.load();
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  splitPages: false,
});

const docs = await loader.load();
By default we use the pdfjs build bundled with pdf-parse, which is compatible with most environments, including Node.js and modern browsers. If you want to use a more recent version of pdfjs-dist or if you want to use a custom build of pdfjs-dist, you can do so by providing a custom pdfjs function that returns a promise that resolves to the PDFJS object.
In the following example we use the "legacy" (see pdfjs docs) build of pdfjs-dist, which includes several polyfills not included in the default build.
npm install pdfjs-dist
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  // you may need to add `.then(m => m.default)` to the end of the import
  pdfjs: () => import("pdfjs-dist/legacy/build/pdf.js"),
});
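The pdfjs option is simply a function returning a promise that resolves to the PDFJS object. One common snag, which the `.then(m => m.default)` comment hints at, is that a dynamic import may expose the module's API either directly or under a `default` key depending on the bundler. The helper below is a hypothetical sketch of tolerating both shapes; `resolveModule` is our name and not part of LangChain.

```typescript
// Hypothetical helper (not part of LangChain): normalize a dynamic import
// whose API may live either on the module itself or under `default`,
// which is why the docs suggest appending `.then(m => m.default)`.
async function resolveModule<T>(
  importer: () => Promise<T | { default: T }>
): Promise<T> {
  const mod: any = await importer();
  // CommonJS/ESM interop: some builds nest the real API under `default`.
  return mod.default ?? mod;
}

async function demo(): Promise<void> {
  // Fake "modules" standing in for the two pdfjs-dist build shapes:
  const direct = await resolveModule<{ version: string }>(() =>
    Promise.resolve({ version: "3.4" })
  );
  const wrapped = await resolveModule<{ version: string }>(() =>
    Promise.resolve({ default: { version: "3.4" } })
  );
  console.log(direct.version, wrapped.version); // 3.4 3.4
}

demo();
```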
File Loaders
Page Title: File Loaders | 🦜️🔗 Langchain
Paragraphs: |
Compatibility
Only available on Node.js.
These loaders are used to load files given a filesystem path or a Blob object.
This example goes over how to load data from folders with multiple files. The second argument is a map of file extensions to loader factories. Each file will be passed to the matching loader, and the resulting documents will be concatenated together.
This example goes over how to load data from CSV files. The second argument is the column name to extract from the CSV file. One document will be created for each row in the CSV file. When column is not specified, each row is converted into a key/value pair with each key/value pair outputted to a new line in the document's pageContent. When column is specified, one document is created for each row, and the value of the specified column is used as the document's pageContent.
This example goes over how to load data from docx files. |
This example goes over how to load data from EPUB files. By default, one document will be created for each chapter in the EPUB file; you can change this behavior by setting the splitChapters option to false.
The JSON loader uses JSON Pointer to target the keys in your JSON files that you want to extract.
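As an illustration of how a pointer such as "/texts" selects data, here is a minimal sketch of JSON Pointer (RFC 6901) resolution. This is the idea behind the loader, not its implementation.

```typescript
// Minimal sketch of JSON Pointer (RFC 6901) resolution, to show how a
// pointer like "/texts" targets a key. Not the JSONLoader implementation.
function resolvePointer(doc: unknown, pointer: string): unknown {
  if (pointer === "") return doc; // "" points at the whole document
  return pointer
    .slice(1) // drop the leading "/"
    .split("/")
    .map((token) => token.replace(/~1/g, "/").replace(/~0/g, "~")) // unescape
    .reduce<any>((node, token) => (node == null ? undefined : node[token]), doc);
}

const data = { texts: ["first", "second"], meta: { html: "<p>hi</p>" } };
console.log(resolvePointer(data, "/texts")); // [ 'first', 'second' ]
console.log(resolvePointer(data, "/texts/1")); // second
console.log(resolvePointer(data, "/meta/html")); // <p>hi</p>
```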
This example goes over how to load data from JSONLines or JSONL files. The second argument is a JSONPointer to the property to extract from each JSON object in the file. One document will be created for each JSON object in the file.
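The per-line behavior can be sketched as follows. This is a simplification of the documented behavior (single-key pointers only), not the JSONLinesLoader source.

```typescript
// Sketch of the documented JSONLines behavior (not the loader source):
// parse each non-empty line as JSON and extract the pointed-at property,
// producing one document per object. Single-key pointers only, for brevity.
function loadJsonLines(contents: string, pointer: string): string[] {
  const key = pointer.slice(1); // e.g. "/html" -> "html"
  return contents
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line)[key]);
}

const jsonl = '{"html": "first doc"}\n{"html": "second doc"}\n';
console.log(loadJsonLines(jsonl, "/html")); // [ 'first doc', 'second doc' ]
```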
This example goes over how to load data from your Notion pages exported from the Notion dashboard.
This example goes over how to load data from PDF files. By default, one document will be created for each page in the PDF file; you can change this behavior by setting the splitPages option to false.
This example goes over how to load data from subtitle files. One document will be created for each subtitle file.
This example goes over how to load data from text files.
This example covers how to use Unstructured to load files of many types. Unstructured currently supports loading text files, PowerPoint files, HTML, PDFs, images, and more.
Folders with multiple files
Page Title: Folders with multiple files | 🦜️🔗 Langchain
Paragraphs: |
Folders with multiple filesThis example goes over how to load data from folders with multiple files. The second argument is a map of file extensions to loader factories. Each file will be passed to the matching loader, and the resulting documents will be concatenated together.

Example folder:

src/document_loaders/example_data/example/
├── example.json
├── example.jsonl
├── example.txt
└── example.csv

Example code:

import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import {
  JSONLoader,
  JSONLinesLoader,
} from "langchain/document_loaders/fs/json";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new DirectoryLoader(
  "src/document_loaders/example_data/example",
  {
    ".json": (path) => new JSONLoader(path, "/texts"),
    ".jsonl": (path) => new JSONLinesLoader(path, "/html"),
    ".txt": (path) => new TextLoader(path),
    ".csv": (path) => new CSVLoader(path, "text"),
  }
);

const docs = await loader.load();
console.log({ docs });
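The extension-to-factory dispatch described above can be sketched as a pure function. This is a simplification and not the DirectoryLoader source: the real loader walks the directory and returns Document objects, while here files are passed in and "documents" are plain strings.

```typescript
// Sketch of the extension-to-factory dispatch described above. This is a
// simplification, not the DirectoryLoader source: the real loader walks the
// directory and returns Document objects; here files are passed in and
// "documents" are plain strings.
type LoaderFactory = (path: string) => { load: () => string[] };

function dispatch(
  files: string[],
  loaders: Record<string, LoaderFactory>
): string[] {
  const docs: string[] = [];
  for (const file of files) {
    const ext = file.slice(file.lastIndexOf(".")); // ".txt", ".csv", ...
    const factory = loaders[ext];
    if (!factory) continue; // this sketch simply skips unknown extensions
    docs.push(...factory(file).load()); // concatenate the resulting documents
  }
  return docs;
}

const docs = dispatch(["a.txt", "b.csv", "c.bin"], {
  ".txt": (path) => ({ load: () => [`text:${path}`] }),
  ".csv": (path) => ({ load: () => [`csv:${path}`] }),
});
console.log(docs); // [ 'text:a.txt', 'csv:b.csv' ]
```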
CSV files
Page Title: CSV files | 🦜️🔗 Langchain
Paragraphs: |
CSV filesThis example goes over how to load data from CSV files. The second argument is the column name to extract from the CSV file. One document will be created for each row in the CSV file. When column is not specified, each row is converted into a key/value pair, with each key/value pair output on a new line in the document's pageContent. When column is specified, one document is created for each row, and the value of the specified column is used as the document's pageContent.

Setup

npm install d3-dsv@2
yarn add d3-dsv@2
pnpm add d3-dsv@2

Usage, extracting all columns

Example CSV file:

id,text
1,This is a sentence.
2,This is another sentence.

Example code:

import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new CSVLoader("src/document_loaders/example_data/example.csv");

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "line": 1,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "id: 1\ntext: This is a sentence.",
  },
  Document {
    "metadata": {
      "line": 2,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "id: 2\ntext: This is another sentence.",
  },
]
*/

Usage, extracting a single column

Example CSV file:

id,text
1,This is a sentence.
2,This is another sentence.

Example code:

import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new CSVLoader(
  "src/document_loaders/example_data/example.csv",
  "text"
);

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "line": 1,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": {
      "line": 2,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "This is another sentence.",
  },
]
*/
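The two pageContent modes shown above can be sketched as a single function. This is a simplification of the documented behavior, not the CSVLoader source.

```typescript
// Sketch of the documented pageContent behavior (not the CSVLoader source):
// one document per row; without a target column the row becomes "key: value"
// lines, with a column the named column's value is used directly.
type Row = Record<string, string>;

function rowToPageContent(row: Row, column?: string): string {
  if (column !== undefined) {
    return row[column]; // single-column mode
  }
  // key/value mode: one "key: value" pair per line
  return Object.entries(row)
    .map(([key, value]) => `${key}: ${value}`)
    .join("\n");
}

const row = { id: "1", text: "This is a sentence." };
console.log(rowToPageContent(row));
// id: 1
// text: This is a sentence.
console.log(rowToPageContent(row, "text")); // This is a sentence.
```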
Docx files

This example goes over how to load data from docx files.

Setup

npm install mammoth
yarn add mammoth
pnpm add mammoth

Usage

import { DocxLoader } from "langchain/document_loaders/fs/docx";

const loader = new DocxLoader(
  "src/document_loaders/tests/example_data/attention.docx"
);
const docs = await loader.load();
EPUB files

This example goes over how to load data from EPUB files. By default, one document is created for each chapter in the EPUB file; you can change this behavior by setting the splitChapters option to false.

Setup

npm install epub2 html-to-text
yarn add epub2 html-to-text
pnpm add epub2 html-to-text

Usage, one document per chapter

import { EPubLoader } from "langchain/document_loaders/fs/epub";

const loader = new EPubLoader("src/document_loaders/example_data/example.epub");
const docs = await loader.load();

Usage, one document per file

import { EPubLoader } from "langchain/document_loaders/fs/epub";

const loader = new EPubLoader(
  "src/document_loaders/example_data/example.epub",
  { splitChapters: false }
);
const docs = await loader.load();
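The effect of the splitChapters option can be sketched independently of the loader. Assuming chapters have already been extracted as an array of plain-text strings (roughly what epub2 plus html-to-text would yield), the option decides between one document per chapter and a single merged document. The `chaptersToDocuments` helper below is hypothetical, not part of langchain:

```javascript
// Hypothetical helper illustrating the splitChapters behavior described above.
// `chapters` is an array of plain-text chapter contents.
function chaptersToDocuments(chapters, source, { splitChapters = true } = {}) {
  if (splitChapters) {
    // Default: one document per chapter, numbered in metadata.
    return chapters.map((pageContent, i) => ({
      pageContent,
      metadata: { source, chapter: i + 1 },
    }));
  }
  // splitChapters: false — one document for the whole file.
  return [{ pageContent: chapters.join("\n\n"), metadata: { source } }];
}
```

The join separator and metadata keys here are illustrative choices, not the loader's exact output format.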
JSON files

The JSON loader uses JSON Pointer to target the keys in your JSON files that you want to extract.

No JSON pointer example

The simplest way to use the loader is to specify no JSON pointer. The loader will then load all strings it finds in the JSON object.

Example JSON file:

{
  "texts": ["This is a sentence.", "This is another sentence."]
}

Example code:

import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader("src/document_loaders/example_data/example.json");
const docs = await loader.load();
/*
[
  Document {
    "metadata": { "blobType": "application/json", "line": 1, "source": "blob" },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": { "blobType": "application/json", "line": 2, "source": "blob" },
    "pageContent": "This is another sentence.",
  },
]
*/

Using JSON pointer example

You can handle a more advanced scenario by choosing which keys in your JSON object to extract strings from. In this example, we only want to extract information from the "from" and "surname" entries.

Example JSON file:

{
  "1": {
    "body": "BD 2023 SUMMER",
    "from": "LinkedIn Job",
    "labels": ["IMPORTANT", "CATEGORY_UPDATES", "INBOX"]
  },
  "2": {
    "body": "Intern, Treasury and other roles are available",
    "from": "LinkedIn Job2",
    "labels": ["IMPORTANT"],
    "other": { "name": "plop", "surname": "bob" }
  }
}

Example code:

import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader(
  "src/document_loaders/example_data/example.json",
  ["/from", "/surname"]
);
const docs = await loader.load();
/*
[
  Document {
    "metadata": { "blobType": "application/json", "line": 1, "source": "blob" },
    "pageContent": "BD 2023 SUMMER",
  },
  Document {
    "metadata": { "blobType": "application/json", "line": 2, "source": "blob" },
    "pageContent": "LinkedIn Job",
  },
  ...
]
*/
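The default "collect every string" behavior can be sketched as a small recursive walk over the parsed JSON value. This is an illustrative approximation of the loader's no-pointer mode, not its actual implementation:

```javascript
// Illustrative sketch of the default (no JSON pointer) extraction:
// recursively collect every string value found anywhere in a JSON value.
function collectStrings(value) {
  if (typeof value === "string") return [value];
  if (Array.isArray(value)) return value.flatMap(collectStrings);
  if (value !== null && typeof value === "object") {
    return Object.values(value).flatMap(collectStrings);
  }
  // Numbers, booleans, and null carry no text to load.
  return [];
}
```

Each collected string would then become the pageContent of one document, as in the output shown above.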
JSONLines files

This example goes over how to load data from JSONLines or JSONL files. The second argument is a JSON Pointer to the property to extract from each JSON object in the file. One document will be created for each JSON object in the file.

Example JSONLines file:

{"html": "This is a sentence."}
{"html": "This is another sentence."}

Example code:

import { JSONLinesLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLinesLoader(
  "src/document_loaders/example_data/example.jsonl",
  "/html"
);
const docs = await loader.load();
/*
[
  Document {
    "metadata": { "blobType": "application/jsonl+json", "line": 1, "source": "blob" },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": { "blobType": "application/jsonl+json", "line": 2, "source": "blob" },
    "pageContent": "This is another sentence.",
  },
]
*/
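The per-line extraction a JSONLines loader performs is simple enough to sketch. The helpers below are hypothetical, not the loader's real code, and `resolvePointer` handles only the happy path of RFC 6901 (no ~0/~1 escape sequences):

```javascript
// Hypothetical sketch: resolve a simple JSON Pointer (e.g. "/html") against
// a parsed JSON value. No ~0/~1 escape handling.
function resolvePointer(value, pointer) {
  const parts = pointer.split("/").slice(1); // drop the "" before the first "/"
  return parts.reduce((acc, key) => (acc == null ? undefined : acc[key]), value);
}

// One document per JSONL line, extracting the pointed-to property.
function jsonlToDocuments(text, pointer) {
  return text
    .trim()
    .split("\n")
    .map((line, i) => ({
      pageContent: resolvePointer(JSON.parse(line), pointer),
      metadata: { line: i + 1 },
    }));
}
```

The metadata here is reduced to a line number for illustration; the real loader also records the blob type and source, as shown in the output above.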
Notion markdown export

This example goes over how to load data from your Notion pages exported from the Notion dashboard.

First, export your Notion pages as Markdown & CSV as described in the official Notion documentation. Make sure to select "Include subpages" and "Create folders for subpages".

Then, unzip the downloaded file and move the unzipped folder into your repository. It should contain the markdown files of your pages.

Once the folder is in your repository, simply run the example below:

import { NotionLoader } from "langchain/document_loaders/fs/notion";

export const run = async () => {
  /** Provide the directory path of your notion folder */
  const directoryPath = "Notion_DB";
  const loader = new NotionLoader(directoryPath);
  const docs = await loader.load();
  console.log({ docs });
};

API Reference: NotionLoader from langchain/document_loaders/fs/notion