Notion markdown export

This example goes over how to load data from your Notion pages exported from the Notion dashboard.
First, export your Notion pages as Markdown & CSV as per the official explanation here. Make sure to select Include subpages and Create folders for subpages.
Then, unzip the downloaded file and move the unzipped folder into your repository. It should contain the markdown files of your pages.
Once the folder is in your repository, simply run the example below:
import { NotionLoader } from "langchain/document_loaders/fs/notion";

export const run = async () => {
  /** Provide the directory path of your notion folder */
  const directoryPath = "Notion_DB";
  const loader = new NotionLoader(directoryPath);
  const docs = await loader.load();
  console.log({ docs });
};
API Reference: NotionLoader from langchain/document_loaders/fs/notion
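Each entry in docs is a LangChain Document with pageContent and metadata. As a quick sanity check, you could print a short summary of what was loaded; a minimal sketch, assuming metadata.source holds the originating file path:

import { NotionLoader } from "langchain/document_loaders/fs/notion";

const loader = new NotionLoader("Notion_DB");
const docs = await loader.load();

// Print a one-line summary per loaded Markdown page.
// `metadata.source` is assumed to hold the file path.
for (const doc of docs) {
  console.log(`${doc.metadata.source}: ${doc.pageContent.length} chars`);
}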
PDF files
This example goes over how to load data from PDF files. By default, one document will be created for each page in the PDF file; you can change this behavior by setting the splitPages option to false.

Setup

npm install pdf-parse
yarn add pdf-parse
pnpm add pdf-parse

Usage, one document per page

import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf");
const docs = await loader.load();

Usage, one document per file

import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  splitPages: false,
});
const docs = await loader.load();

Usage, custom pdfjs build

By default we use the pdfjs build bundled with pdf-parse, which is compatible with most environments, including Node.js and modern browsers. If you want to use a more recent version of pdfjs-dist, or a custom build of pdfjs-dist, you can do so by providing a custom pdfjs function that returns a promise that resolves to the PDFJS object.

In the following example we use the "legacy" build of pdfjs-dist (see the pdfjs docs), which includes several polyfills not included in the default build.

npm install pdfjs-dist
yarn add pdfjs-dist
pnpm add pdfjs-dist

import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  // you may need to add `.then(m => m.default)` to the end of the import
  pdfjs: () => import("pdfjs-dist/legacy/build/pdf.js"),
});
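To see the effect of the splitPages option, here is a minimal sketch (assuming the example PDF above exists) that loads the same file both ways and compares the number of documents produced:

import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const path = "src/document_loaders/example_data/example.pdf";

// One Document per page (the default).
const perPage = await new PDFLoader(path).load();

// One Document for the whole file.
const perFile = await new PDFLoader(path, { splitPages: false }).load();

console.log(`per page: ${perPage.length} docs, per file: ${perFile.length} doc`);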
Subtitles
This example goes over how to load data from subtitle files. One document will be created for each subtitle file.

Setup
npm install srt-parser-2
yarn add srt-parser-2
pnpm add srt-parser-2
Usage

import { SRTLoader } from "langchain/document_loaders/fs/srt";

const loader = new SRTLoader(
  "src/document_loaders/example_data/Star_Wars_The_Clone_Wars_S06E07_Crisis_at_the_Heart.srt"
);
const docs = await loader.load();
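Since each subtitle file becomes a single document, you can preview the transcript text directly; a minimal sketch using the same file as above:

import { SRTLoader } from "langchain/document_loaders/fs/srt";

const loader = new SRTLoader(
  "src/document_loaders/example_data/Star_Wars_The_Clone_Wars_S06E07_Crisis_at_the_Heart.srt"
);
const docs = await loader.load();

// The whole file becomes one Document; preview its first 200 characters.
console.log(docs[0].pageContent.slice(0, 200));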
Text files
This example goes over how to load data from text files.

import { TextLoader } from "langchain/document_loaders/fs/text";

const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();
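To load many text files at once, you can combine TextLoader with the DirectoryLoader from langchain/document_loaders/fs/directory, which maps file extensions to loader factories. A minimal sketch; the directory path is illustrative:

import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Map each extension we care about to a loader factory.
const loader = new DirectoryLoader("src/document_loaders/example_data", {
  ".txt": (path) => new TextLoader(path),
});
const docs = await loader.load();
console.log(`loaded ${docs.length} documents`);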
Unstructured
This example covers how to use Unstructured to load files of many types. Unstructured currently supports loading of text files, PowerPoints, HTML, PDFs, images, and more.

Setup
You can run Unstructured locally on your computer using Docker. To do so, you need to have Docker installed. You can find the instructions to install Docker here.
4e9727215e95-823 | docker run -p 8000:8000 -d --rm --name unstructured-api quay.io/unstructured-io/unstructured-api:latest --port 8000 --host 0.0.0.0
Usage

Once Unstructured is running, you can use the following code to load a file from your computer.
import { UnstructuredLoader } from "langchain/document_loaders/fs/unstructured";

const options = {
  apiKey: "MY_API_KEY",
};

const loader = new UnstructuredLoader(
  "src/document_loaders/example_data/notion.md",
  options
);
const docs = await loader.load();
API Reference: UnstructuredLoader from langchain/document_loaders/fs/unstructured
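To target the local Docker container started above instead of the hosted API, the loader options should also accept an endpoint override. A minimal sketch, assuming an apiUrl option and the container's /general/v0/general route; verify both against the UnstructuredLoaderOptions type in your version:

import { UnstructuredLoader } from "langchain/document_loaders/fs/unstructured";

// Point the loader at the local container instead of the hosted API.
// `apiUrl` and the route are assumptions; check the options type.
const loader = new UnstructuredLoader(
  "src/document_loaders/example_data/notion.md",
  { apiUrl: "http://localhost:8000/general/v0/general" }
);
const docs = await loader.load();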
Directories

You can also load all of the files in a directory using UnstructuredDirectoryLoader, which inherits from DirectoryLoader:
import { UnstructuredDirectoryLoader } from "langchain/document_loaders/fs/unstructured";

const options = {
  apiKey: "MY_API_KEY",
};

const loader = new UnstructuredDirectoryLoader(
  "langchain/src/document_loaders/tests/example_data",
  options
);
const docs = await loader.load();
API Reference: UnstructuredDirectoryLoader from langchain/document_loaders/fs/unstructured
Web Loaders
These loaders are used to load web resources.

📄️ Cheerio: This example goes over how to load data from webpages using Cheerio. One document will be created for each webpage.
📄️ Puppeteer: Only available on Node.js.
📄️ Playwright: Only available on Node.js.
📄️ Apify Dataset: This guide shows how to use Apify with LangChain to load documents from an Apify Dataset.
📄️ AssemblyAI Audio Transcript: This covers how to load audio (and video) transcripts as document objects from a file using the AssemblyAI API.
📄️ Azure Blob Storage Container: Only available on Node.js.
📄️ Azure Blob Storage File: Only available on Node.js.
📄️ College Confidential: This example goes over how to load data from the College Confidential website, using Cheerio. One document will be created for each page.
📄️ Confluence: Only available on Node.js.
📄️ Figma: This example goes over how to load data from a Figma file.
📄️ GitBook: This example goes over how to load data from any GitBook, using Cheerio. One document will be created for each page.
📄️ GitHub: This example goes over how to load data from a GitHub repository.
📄️ Hacker News: This example goes over how to load data from the Hacker News website, using Cheerio. One document will be created for each page.
📄️ IMSDB: This example goes over how to load data from the Internet Movie Script Database website, using Cheerio. One document will be created for each page.
📄️ Notion API: This guide will take you through the steps required to load documents from Notion pages and databases using the Notion API.
📄️ S3 File: Only available on Node.js.
📄️ SerpAPI Loader: This guide shows how to use SerpAPI with LangChain to load web search results.
📄️ Sonix Audio: Only available on Node.js.
📄️ Blockchain Data: This example shows how to load blockchain data, including NFT metadata and transactions for a contract address, via the sort.xyz SQL API.
📄️ YouTube transcripts: This covers how to load YouTube transcripts into LangChain documents.
Webpages, with Cheerio
This example goes over how to load data from webpages using Cheerio. One document will be created for each webpage.
Cheerio is a fast and lightweight library that allows you to parse and traverse HTML documents using a jQuery-like syntax. You can use Cheerio to extract data from web pages, without having to render them in a browser.
However, Cheerio does not simulate a web browser, so it cannot execute JavaScript code on the page. This means that it cannot extract data from dynamic web pages that require JavaScript to render. To do that, you can use the PlaywrightWebBaseLoader or PuppeteerWebBaseLoader instead. |
4e9727215e95-839 | npmYarnpnpmnpm install cheerioyarn add cheeriopnpm add cheerio
npm install cheerioyarn add cheeriopnpm add cheerio
npm install cheerio
yarn add cheerio
pnpm add cheerio
Usage

import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader(
  "https://news.ycombinator.com/item?id=34817881"
);
const docs = await loader.load();
Usage, with a custom selector

import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader(
  "https://news.ycombinator.com/item?id=34817881",
  {
    selector: "p.athing",
  }
);
const docs = await loader.load();
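Because the whole page is returned as one document, a common follow-up is to chunk it before embedding. A minimal sketch using RecursiveCharacterTextSplitter; the chunk sizes are illustrative:

import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const loader = new CheerioWebBaseLoader(
  "https://news.ycombinator.com/item?id=34817881"
);
const docs = await loader.load();

// Split the single page Document into overlapping chunks.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 100,
});
const chunks = await splitter.splitDocuments(docs);
console.log(`${chunks.length} chunks`);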
Webpages, with Puppeteer
Compatibility: Only available on Node.js.

This example goes over how to load data from webpages using Puppeteer. One document will be created for each webpage. Puppeteer is a Node.js library that provides a high-level API for controlling headless Chrome or Chromium. You can use Puppeteer to automate web page interactions, including extracting data from dynamic web pages that require JavaScript to render.

If you want a lighter-weight solution, and the webpages you want to load do not require JavaScript to render, you can use the CheerioWebBaseLoader instead.

Setup

npm install puppeteer
yarn add puppeteer
pnpm add puppeteer

Usage

import { PuppeteerWebBaseLoader } from "langchain/document_loaders/web/puppeteer";

/**
 * The loader uses `page.evaluate(() => document.body.innerHTML)`
 * as the default evaluate function.
 **/
const loader = new PuppeteerWebBaseLoader("https://www.tabnews.com.br/");
const docs = await loader.load();

Options

Here's an explanation of the parameters you can pass to the PuppeteerWebBaseLoader constructor using the PuppeteerWebBaseLoaderOptions interface:

type PuppeteerWebBaseLoaderOptions = {
  launchOptions?: PuppeteerLaunchOptions;
  gotoOptions?: PuppeteerGotoOptions;
  evaluate?: (page: Page, browser: Browser) => Promise<string>;
};

launchOptions: an optional object that specifies additional options to pass to the puppeteer.launch() method. This can include options such as the headless flag to launch the browser in headless mode, or the slowMo option to slow down Puppeteer's actions to make them easier to follow.

gotoOptions: an optional object that specifies additional options to pass to the page.goto() method. This can include options such as the timeout option to specify the maximum navigation time in milliseconds, or the waitUntil option to specify when to consider the navigation successful.

evaluate: an optional function that can be used to evaluate JavaScript code on the page using the page.evaluate() method. This can be useful for extracting data from the page or interacting with page elements. The function should return a Promise that resolves to a string containing the result of the evaluation.

By passing these options to the PuppeteerWebBaseLoader constructor, you can customize the behavior of the loader and use Puppeteer's powerful features to scrape and interact with web pages. Here is a basic example:

import { PuppeteerWebBaseLoader } from "langchain/document_loaders/web/puppeteer";

const loader = new PuppeteerWebBaseLoader("https://www.tabnews.com.br/", {
  launchOptions: {
    headless: true,
  },
  gotoOptions: {
    waitUntil: "domcontentloaded",
  },
  /** Pass a custom evaluate; in this case you get the page and browser instances */
  async evaluate(page: Page, browser: Browser) {
    await page.waitForResponse("https://www.tabnews.com.br/va/view");
    const result = await page.evaluate(() => document.body.innerHTML);
    return result;
  },
});
const docs = await loader.load();
The function should return a Promise that resolves to a string containing the result of the evaluation.By passing these options to the PuppeteerWebBaseLoader constructor, you can customize the behavior of the loader and use Puppeteer's powerful features to scrape and interact with web pages.Here is a basic example to do it:import { PuppeteerWebBaseLoader } from "langchain/document_loaders/web/puppeteer";const loader = new PuppeteerWebBaseLoader("https://www.tabnews.com.br/", { launchOptions: { headless: true, }, gotoOptions: { waitUntil: "domcontentloaded", }, /** Pass custom evaluate, in this case you get page and browser instances */ async evaluate(page: Page, browser: Browser) { await page.waitForResponse("https://www.tabnews.com.br/va/view"); const result = await page.evaluate(() => document.body.innerHTML); return result; },});const docs = await loader.load();
This example goes over how to load data from webpages using Puppeteer. One document will be created for each webpage.
Puppeteer is a Node.js library that provides a high-level API for controlling headless Chrome or Chromium. You can use Puppeteer to automate web page interactions, including extracting data from dynamic web pages that require JavaScript to render.
If you want a lighter-weight solution, and the webpages you want to load do not require JavaScript to render, you can use the CheerioWebBaseLoader instead.
Setup

npm install puppeteer
yarn add puppeteer
pnpm add puppeteer
Usage

import { PuppeteerWebBaseLoader } from "langchain/document_loaders/web/puppeteer";

/**
 * Loader uses `page.evaluate(() => document.body.innerHTML)`
 * as the default evaluate function.
 **/
const loader = new PuppeteerWebBaseLoader("https://www.tabnews.com.br/");

const docs = await loader.load();
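Each webpage becomes one Document. As a quick sanity check, you can inspect what came back (a sketch, not part of the original example; the metadata shape shown is an assumption based on how LangChain's web loaders typically record the source URL):

// Continuing the snippet above: with the default evaluate function,
// pageContent holds the raw HTML of the rendered page.
console.log(docs[0].pageContent.slice(0, 100)); // first 100 characters of HTML
console.log(docs[0].metadata); // assumed shape: { source: "https://www.tabnews.com.br/" }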
Options

Here's an explanation of the parameters you can pass to the PuppeteerWebBaseLoader constructor using the PuppeteerWebBaseLoaderOptions interface:
type PuppeteerWebBaseLoaderOptions = {
  launchOptions?: PuppeteerLaunchOptions;
  gotoOptions?: PuppeteerGotoOptions;
  evaluate?: (page: Page, browser: Browser) => Promise<string>;
};
launchOptions: an optional object that specifies additional options to pass to the puppeteer.launch() method. This can include options such as the headless flag to launch the browser in headless mode, or the slowMo option to slow down Puppeteer's actions to make them easier to follow.
gotoOptions: an optional object that specifies additional options to pass to the page.goto() method. This can include options such as the timeout option to specify the maximum navigation time in milliseconds, or the waitUntil option to specify when to consider the navigation as successful.
evaluate: an optional function that can be used to evaluate JavaScript code on the page using the page.evaluate() method. This can be useful for extracting data from the page or interacting with page elements. The function should return a Promise that resolves to a string containing the result of the evaluation.
By passing these options to the PuppeteerWebBaseLoader constructor, you can customize the behavior of the loader and use Puppeteer's powerful features to scrape and interact with web pages.
Here is a basic example:
import { PuppeteerWebBaseLoader } from "langchain/document_loaders/web/puppeteer";
// The Page and Browser types come from the puppeteer package itself.
import type { Browser, Page } from "puppeteer";

const loader = new PuppeteerWebBaseLoader("https://www.tabnews.com.br/", {
  launchOptions: {
    headless: true,
  },
  gotoOptions: {
    waitUntil: "domcontentloaded",
  },
  /** Pass a custom evaluate function; you get the page and browser instances */
  async evaluate(page: Page, browser: Browser) {
    await page.waitForResponse("https://www.tabnews.com.br/va/view");
    const result = await page.evaluate(() => document.body.innerHTML);
    return result;
  },
});

const docs = await loader.load();
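As a further sketch (an illustration under stated assumptions, not from the original docs), you can swap in an evaluate function that returns only the rendered text instead of raw HTML. Both document.body.innerText and the networkidle0 wait condition are standard browser/Puppeteer features:

import { PuppeteerWebBaseLoader } from "langchain/document_loaders/web/puppeteer";

// A minimal sketch: capture only the visible text of the rendered page
// instead of its HTML markup.
const textLoader = new PuppeteerWebBaseLoader("https://www.tabnews.com.br/", {
  gotoOptions: {
    waitUntil: "networkidle0", // wait until the network is (mostly) idle before reading the DOM
  },
  async evaluate(page) {
    // innerText yields the rendered, visible text of the page
    return page.evaluate(() => document.body.innerText);
  },
});

const textDocs = await textLoader.load();
console.log(textDocs[0].pageContent.slice(0, 200));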
Webpages, with Playwright

Compatibility: Only available on Node.js.
This example goes over how to load data from webpages using Playwright. One document will be created for each webpage.
Playwright is a Node.js library that provides a high-level API for controlling multiple browser engines, including Chromium, Firefox, and WebKit. You can use Playwright to automate web page interactions, including extracting data from dynamic web pages that require JavaScript to render.

If you want a lighter-weight solution, and the webpages you want to load do not require JavaScript to render, you can use the CheerioWebBaseLoader instead.
Setup

npm install playwright
yarn add playwright
pnpm add playwright
Usage

import { PlaywrightWebBaseLoader } from "langchain/document_loaders/web/playwright";

/**
 * Loader uses `page.content()`
 * as the default evaluate function.
 **/
const loader = new PlaywrightWebBaseLoader("https://www.tabnews.com.br/");

const docs = await loader.load();
Options

Here's an explanation of the parameters you can pass to the PlaywrightWebBaseLoader constructor using the PlaywrightWebBaseLoaderOptions interface:
type PlaywrightWebBaseLoaderOptions = {
  launchOptions?: LaunchOptions;
  gotoOptions?: PlaywrightGotoOptions;
  evaluate?: PlaywrightEvaluate;
};
launchOptions: an optional object that specifies additional options to pass to the playwright.chromium.launch() method. This can include options such as the headless flag to launch the browser in headless mode.

gotoOptions: an optional object that specifies additional options to pass to the page.goto() method. This can include options such as the timeout option to specify the maximum navigation time in milliseconds, or the waitUntil option to specify when to consider the navigation as successful.
evaluate: an optional function that can be used to evaluate JavaScript code on the page using a custom evaluation function. This can be useful for extracting data from the page, interacting with page elements, or handling specific HTTP responses. The function should return a Promise that resolves to a string containing the result of the evaluation.
By passing these options to the PlaywrightWebBaseLoader constructor, you can customize the behavior of the loader and use Playwright's powerful features to scrape and interact with web pages.
Here is a basic example:

import { PlaywrightWebBaseLoader } from "langchain/document_loaders/web/playwright";

const url = "https://www.tabnews.com.br/";
const loader = new PlaywrightWebBaseLoader(url);

const docs = await loader.load();

// raw HTML page content
const extractedContents = docs[0].pageContent;
And a more advanced example:
import {
  PlaywrightWebBaseLoader,
  Page,
  Browser,
} from "langchain/document_loaders/web/playwright";

const loader = new PlaywrightWebBaseLoader("https://www.tabnews.com.br/", {
  launchOptions: {
    headless: true,
  },
  gotoOptions: {
    waitUntil: "domcontentloaded",
  },
  /** Pass a custom evaluate function; you get the page and browser instances */
  async evaluate(page: Page, browser: Browser, response: Response | null) {
    await page.waitForResponse("https://www.tabnews.com.br/va/view");
    const result = await page.evaluate(() => document.body.innerHTML);
    return result;
  },
});

const docs = await loader.load();
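A common next step (a sketch, not part of the original example) is to split the loaded documents into smaller chunks before embedding them. RecursiveCharacterTextSplitter is LangChain's general-purpose splitter; the chunk sizes below are arbitrary starting points:

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Split the loaded documents into overlapping chunks suitable for
// embedding and retrieval.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});

const chunks = await splitter.splitDocuments(docs);
console.log(`Split ${docs.length} document(s) into ${chunks.length} chunks`);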
Apify Dataset

This guide shows how to use Apify with LangChain to load documents from an Apify Dataset.

Overview
Apify is a cloud platform for web scraping and data extraction, which provides an ecosystem of more than a thousand ready-made apps called Actors for various web scraping, crawling, and data extraction use cases.

This guide shows how to load documents from an Apify Dataset — a scalable append-only storage built for storing structured web scraping results, such as a list of products or Google SERPs, which can then be exported to various formats like JSON, CSV, or Excel.

Datasets are typically used to save the results of Actors. For example, the Website Content Crawler Actor deeply crawls websites such as documentation, knowledge bases, help centers, or blogs, and then stores the text content of the webpages into a dataset, from which you can feed the documents into a vector index and answer questions from it.
Setup

You'll first need to install the official Apify client:
npm install apify-client
yarn add apify-client
pnpm add apify-client
You'll also need to sign up and retrieve your Apify API token.
Usage

From a New Dataset

If you don't already have an existing dataset on the Apify platform, you'll need to initialize the document loader by calling an Actor and waiting for the results.
Note: Calling an Actor can take a significant amount of time, on the order of hours, or even days for large sites!
Here's an example:

import { ApifyDatasetLoader } from "langchain/document_loaders/web/apify_dataset";
import { Document } from "langchain/document";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RetrievalQAChain } from "langchain/chains";
import { OpenAI } from "langchain/llms/openai";

/*
 * datasetMappingFunction is a function that maps your Apify dataset format to LangChain documents.
 * In the below example, the Apify dataset format looks like this:
 * {
 *   "url": "https://apify.com",
 *   "text": "Apify is the best web scraping and automation platform."
 * }
 */
const loader = await ApifyDatasetLoader.fromActorCall(
  "apify/website-content-crawler",
  {
    startUrls: [{ url: "https://js.langchain.com/docs/" }],
  },
  {
    datasetMappingFunction: (item) =>
      new Document({
        pageContent: (item.text || "") as string,
        metadata: { source: item.url },
      }),
    clientOptions: {
      token: "your-apify-token", // Or set as process.env.APIFY_API_TOKEN
    },
  }
);

const docs = await loader.load();

const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

const model = new OpenAI({
  temperature: 0,
});
const chain = RetrievalQAChain.fromLLM(model, vectorStore.asRetriever(), {
  returnSourceDocuments: true,
});
const res = await chain.call({ query: "What is LangChain?" });

console.log(res.text);
console.log(res.sourceDocuments.map((d: Document) => d.metadata.source));

/*
  LangChain is a framework for developing applications powered by language models.
  [
    'https://js.langchain.com/docs/',
    'https://js.langchain.com/docs/modules/chains/',
    'https://js.langchain.com/docs/modules/chains/llmchain/',
    'https://js.langchain.com/docs/category/functions-4'
  ]
*/
API Reference:
- ApifyDatasetLoader from langchain/document_loaders/web/apify_dataset
- Document from langchain/document
- HNSWLib from langchain/vectorstores/hnswlib
- OpenAIEmbeddings from langchain/embeddings/openai
- RetrievalQAChain from langchain/chains
- OpenAI from langchain/llms/openai
From an Existing Dataset

If you already have an existing dataset on the Apify platform, you can initialize the document loader with the constructor directly:
import { ApifyDatasetLoader } from "langchain/document_loaders/web/apify_dataset";
import { Document } from "langchain/document";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RetrievalQAChain } from "langchain/chains";
import { OpenAI } from "langchain/llms/openai";

/*
 * datasetMappingFunction is a function that maps your Apify dataset format to LangChain documents.
 * In the below example, the Apify dataset format looks like this:
 * {
 *   "url": "https://apify.com",
 *   "text": "Apify is the best web scraping and automation platform."
 * }
 */
const loader = new ApifyDatasetLoader("your-dataset-id", {
  datasetMappingFunction: (item) =>
    new Document({
      pageContent: (item.text || "") as string,
      metadata: { source: item.url },
    }),
  clientOptions: {
    token: "your-apify-token", // Or set as process.env.APIFY_API_TOKEN
  },
});

const docs = await loader.load();

const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

const model = new OpenAI({
  temperature: 0,
});
const chain = RetrievalQAChain.fromLLM(model, vectorStore.asRetriever(), {
  returnSourceDocuments: true,
});
const res = await chain.call({ query: "What is LangChain?" });

console.log(res.text);
console.log(res.sourceDocuments.map((d: Document) => d.metadata.source));

/*
  LangChain is a framework for developing applications powered by language models.
  [
    'https://js.langchain.com/docs/',
    'https://js.langchain.com/docs/modules/chains/',
    'https://js.langchain.com/docs/modules/chains/llmchain/',
    'https://js.langchain.com/docs/category/functions-4'
  ]
*/

API Reference:
- ApifyDatasetLoader from langchain/document_loaders/web/apify_dataset
- Document from langchain/document
- HNSWLib from langchain/vectorstores/hnswlib
- OpenAIEmbeddings from langchain/embeddings/openai
- RetrievalQAChain from langchain/chains
- OpenAI from langchain/llms/openai
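The datasetMappingFunction is the main extension point. As a hedged sketch (the title field on the dataset items is a hypothetical example, not something every Actor produces), you could carry extra fields from each item into the document metadata:

import { ApifyDatasetLoader } from "langchain/document_loaders/web/apify_dataset";
import { Document } from "langchain/document";

// A sketch of a richer mapping: keep a (hypothetical) `title` field from each
// dataset item in the document metadata alongside the source URL.
const titledLoader = new ApifyDatasetLoader("your-dataset-id", {
  datasetMappingFunction: (item) =>
    new Document({
      pageContent: (item.text || "") as string,
      metadata: {
        source: item.url,
        title: (item.title || "") as string, // `title` is an assumed field
      },
    }),
  clientOptions: {
    token: "your-apify-token", // Or set as process.env.APIFY_API_TOKEN
  },
});

const titledDocs = await titledLoader.load();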
AssemblyAI Audio Transcript
Skip to main content🦜️🔗 LangChainDocsUse casesAPILangSmithPython DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/OData connectionDocument loadersHow-toIntegrationsFile LoadersWeb LoadersCheerioPuppeteerPlaywrightApify DatasetAssemblyAI Audio TranscriptAzure Blob Storage ContainerAzure Blob Storage FileCollege ConfidentialConfluenceFigmaGitBookGitHubHacker NewsIMSDBNotion APIS3 FileSerpAPI LoaderSonix AudioBlockchain DataYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversExperimentalCaching embeddingsChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesData connectionDocument loadersIntegrationsWeb LoadersAssemblyAI Audio TranscriptAssemblyAI Audio TranscriptThis covers how to load audio (and video) transcripts as document objects from a file using the AssemblyAI API.UsageTo use the loaders you need an AssemblyAI account and |
4e9727215e95-891 | get your AssemblyAI API key from the dashboard. Then, configure the API key as the ASSEMBLYAI_API_KEY environment variable or as the apiKey options parameter.

import {
  AudioTranscriptLoader,
  // AudioTranscriptParagraphsLoader,
  // AudioTranscriptSentencesLoader
} from "langchain/document_loaders/web/assemblyai";

// You can also use a local file path and the loader will upload it to AssemblyAI for you.
const audioUrl = "https://storage.googleapis.com/aai-docs-samples/espn.m4a";

// Use `AudioTranscriptParagraphsLoader` or `AudioTranscriptSentencesLoader`
// to split the transcript into paragraphs or sentences.
const loader = new AudioTranscriptLoader(
  {
    audio_url: audioUrl,
    // any other parameters as documented here:
    // https://www.assemblyai.com/docs/API%20reference/transcript#create-a-transcript
  },
  {
    apiKey: "<ASSEMBLYAI_API_KEY>", // or set the `ASSEMBLYAI_API_KEY` env variable
  }
);

const docs = await loader.load();
console.dir(docs, { depth: Infinity });

API Reference: AudioTranscriptLoader from langchain/document_loaders/web/assemblyai

info
You can use the AudioTranscriptParagraphsLoader or AudioTranscriptSentencesLoader to split the transcript into paragraphs or sentences.
If audio_url is a local file path, the loader will upload the file to AssemblyAI for you.
The audio_url can also point to a video file.
4e9727215e95-892 | See the list of supported file types in the FAQ doc.
If you don't pass in the apiKey option, the loader will use the ASSEMBLYAI_API_KEY environment variable.
You can add more properties in addition to audio_url. Find the full list of request parameters in the AssemblyAI API docs.

You can also use the AudioSubtitleLoader to get srt or vtt subtitles as a document.

import {
  AudioSubtitleLoader,
  SubtitleFormat,
} from "langchain/document_loaders/web/assemblyai";

// You can also use a local file path and the loader will upload it to AssemblyAI for you.
const audioUrl = "https://storage.googleapis.com/aai-docs-samples/espn.m4a";

const loader = new AudioSubtitleLoader(
  {
    audio_url: audioUrl,
    // any other parameters as documented here:
    // https://www.assemblyai.com/docs/API%20reference/transcript#create-a-transcript
  },
  SubtitleFormat.Srt, // srt or vtt
  {
    apiKey: "<ASSEMBLYAI_API_KEY>", // or set the `ASSEMBLYAI_API_KEY` env variable
  }
);

const docs = await loader.load();
console.dir(docs, { depth: Infinity });

API Reference: AudioSubtitleLoader from langchain/document_loaders/web/assemblyai
API Reference: SubtitleFormat from langchain/document_loaders/web/assemblyai
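As a concrete illustration of the paragraph-splitting note above — a minimal sketch, assuming AudioTranscriptParagraphsLoader accepts the same constructor arguments as AudioTranscriptLoader (the commented-out import above comes from the same module), and adding speaker_labels as one example of an extra AssemblyAI request parameter:

import { AudioTranscriptParagraphsLoader } from "langchain/document_loaders/web/assemblyai";

// A sketch, assuming the paragraphs loader takes the same arguments as
// AudioTranscriptLoader above. `speaker_labels` is one of the optional
// AssemblyAI request parameters documented in the API reference linked above.
const loader = new AudioTranscriptParagraphsLoader(
  {
    audio_url: "https://storage.googleapis.com/aai-docs-samples/espn.m4a",
    speaker_labels: true, // ask AssemblyAI to tag who is speaking
  },
  {
    apiKey: "<ASSEMBLYAI_API_KEY>", // or set the `ASSEMBLYAI_API_KEY` env variable
  }
);

// Each paragraph of the transcript becomes its own Document.
const docs = await loader.load();
console.log(docs.length);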
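Long recordings produce long transcript documents, so a common follow-up step is to chunk them before embedding or retrieval. A minimal sketch using LangChain's RecursiveCharacterTextSplitter (the chunk size and overlap below are arbitrary illustrative values, not recommendations):

import { AudioTranscriptLoader } from "langchain/document_loaders/web/assemblyai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const loader = new AudioTranscriptLoader(
  { audio_url: "https://storage.googleapis.com/aai-docs-samples/espn.m4a" },
  { apiKey: "<ASSEMBLYAI_API_KEY>" }
);
const docs = await loader.load();

// Split the transcript into overlapping chunks suitable for embedding.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 100,
});
const splitDocs = await splitter.splitDocuments(docs);
console.log(splitDocs.length);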