import {
  AudioTranscriptLoader,
  // AudioTranscriptParagraphsLoader,
  // AudioTranscriptSentencesLoader
} from "langchain/document_loaders/web/assemblyai";

// You can also use a local file path and the loader will upload it to AssemblyAI for you.
const audioUrl = "https://storage.googleapis.com/aai-docs-samples/espn.m4a";

// Use `AudioTranscriptParagraphsLoader` or `AudioTranscriptSentencesLoader`
// to split the transcript into paragraphs or sentences.
const loader = new AudioTranscriptLoader(
  {
    audio_url: audioUrl,
    // any other parameters as documented here:
    // https://www.assemblyai.com/docs/API%20reference/transcript#create-a-transcript
  },
  {
    apiKey: "<ASSEMBLYAI_API_KEY>", // or set the `ASSEMBLYAI_API_KEY` env variable
  }
);

const docs = await loader.load();
console.dir(docs, { depth: Infinity });
API Reference: AudioTranscriptLoader from langchain/document_loaders/web/assemblyai
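The commented-out imports above hint at two sibling loaders. Here is a minimal sketch of paragraph-level loading, assuming AudioTranscriptParagraphsLoader takes the same constructor arguments as AudioTranscriptLoader (the API reference linked above is the place to confirm):

import { AudioTranscriptParagraphsLoader } from "langchain/document_loaders/web/assemblyai";

// Assumption: same constructor shape as AudioTranscriptLoader.
// Returns one document per paragraph of the transcript instead of
// a single document for the whole transcript.
const paragraphsLoader = new AudioTranscriptParagraphsLoader(
  {
    audio_url: "https://storage.googleapis.com/aai-docs-samples/espn.m4a",
  },
  {
    apiKey: "<ASSEMBLYAI_API_KEY>", // or set the `ASSEMBLYAI_API_KEY` env variable
  }
);

const paragraphDocs = await paragraphsLoader.load();
console.log(paragraphDocs.length); // number of paragraphs in the transcript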
You can also use the AudioSubtitleLoader to get SRT or VTT subtitles as a document.
import {
  AudioSubtitleLoader,
  SubtitleFormat,
} from "langchain/document_loaders/web/assemblyai";

// You can also use a local file path and the loader will upload it to AssemblyAI for you.
const audioUrl = "https://storage.googleapis.com/aai-docs-samples/espn.m4a";

const loader = new AudioSubtitleLoader(
  {
    audio_url: audioUrl,
    // any other parameters as documented here:
    // https://www.assemblyai.com/docs/API%20reference/transcript#create-a-transcript
  },
  SubtitleFormat.Srt, // srt or vtt
  {
    apiKey: "<ASSEMBLYAI_API_KEY>", // or set the `ASSEMBLYAI_API_KEY` env variable
  }
);

const docs = await loader.load();
console.dir(docs, { depth: Infinity });
API Reference: AudioSubtitleLoader and SubtitleFormat from langchain/document_loaders/web/assemblyai
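Because the subtitles come back as a document's pageContent, you can write them straight to disk. A minimal sketch using Node's built-in fs/promises; the output file name is arbitrary:

import { writeFile } from "node:fs/promises";
import {
  AudioSubtitleLoader,
  SubtitleFormat,
} from "langchain/document_loaders/web/assemblyai";

const loader = new AudioSubtitleLoader(
  { audio_url: "https://storage.googleapis.com/aai-docs-samples/espn.m4a" },
  SubtitleFormat.Srt,
  { apiKey: "<ASSEMBLYAI_API_KEY>" }
);

// The subtitle text is the pageContent of the returned document,
// so it can be written out as a regular .srt file.
const docs = await loader.load();
await writeFile("espn.srt", docs[0].pageContent, "utf-8");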
Azure Blob Storage Container
Page Title: Azure Blob Storage Container | 🦜️🔗 Langchain
Compatibility: Only available on Node.js.
This covers how to load a container on Azure Blob Storage into LangChain documents.
Setup

To run this loader, you'll need to have Unstructured already set up and ready to use at an available URL endpoint. It can also be configured to run locally. See the Unstructured documentation for information on how to do that.
You'll also need to install the official Azure Storage Blob client library:
npm install @azure/storage-blob
yarn add @azure/storage-blob
pnpm add @azure/storage-blob
Usage

Once Unstructured is configured, you can use the Azure Blob Storage Container loader to load files and then convert them into a Document.
import { AzureBlobStorageContainerLoader } from "langchain/document_loaders/web/azure_blob_storage_container";

const loader = new AzureBlobStorageContainerLoader({
  azureConfig: {
    connectionString: "",
    container: "container_name",
  },
  unstructuredConfig: {
    apiUrl: "http://localhost:8000/general/v0/general",
    apiKey: "", // this will soon be required
  },
});

const docs = await loader.load();
console.log(docs);
API Reference: AzureBlobStorageContainerLoader from langchain/document_loaders/web/azure_blob_storage_container
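Documents loaded from a container can be fed straight into the rest of a LangChain pipeline. A minimal sketch that chunks the loaded documents with RecursiveCharacterTextSplitter before embedding; the connection string and container name are placeholders to fill in:

import { AzureBlobStorageContainerLoader } from "langchain/document_loaders/web/azure_blob_storage_container";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const loader = new AzureBlobStorageContainerLoader({
  azureConfig: {
    connectionString: "<AZURE_STORAGE_CONNECTION_STRING>",
    container: "container_name",
  },
  unstructuredConfig: {
    apiUrl: "http://localhost:8000/general/v0/general",
    apiKey: "",
  },
});

// Split every loaded document into ~1000-character chunks with overlap,
// a common preprocessing step before computing embeddings.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});

const docs = await loader.load();
const chunks = await splitter.splitDocuments(docs);
console.log(chunks.length);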
Azure Blob Storage File
Page Title: Azure Blob Storage File | 🦜️🔗 Langchain
Compatibility: Only available on Node.js.
This covers how to load an Azure File into LangChain documents.
Setup

To use this loader, you'll need to have Unstructured already set up and ready to use at an available URL endpoint. It can also be configured to run locally. See the Unstructured documentation for information on how to do that.

You'll also need to install the official Azure Storage Blob client library:

npm install @azure/storage-blob
yarn add @azure/storage-blob
pnpm add @azure/storage-blob
Usage

Once Unstructured is configured, you can use the Azure Blob Storage File loader to load files and then convert them into a Document.
import { AzureBlobStorageFileLoader } from "langchain/document_loaders/web/azure_blob_storage_file";

const loader = new AzureBlobStorageFileLoader({
  azureConfig: {
    connectionString: "",
    container: "container_name",
    blobName: "example.txt",
  },
  unstructuredConfig: {
    apiUrl: "http://localhost:8000/general/v0/general",
    apiKey: "", // this will soon be required
  },
});

const docs = await loader.load();
console.log(docs);
API Reference: AzureBlobStorageFileLoader from langchain/document_loaders/web/azure_blob_storage_file
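In practice you'll usually want to keep the connection string out of source code. A minimal sketch that reads it from an environment variable; AZURE_BLOB_CONNECTION_STRING is a name chosen for this example, not one the loader reads on its own:

import { AzureBlobStorageFileLoader } from "langchain/document_loaders/web/azure_blob_storage_file";

// AZURE_BLOB_CONNECTION_STRING is an arbitrary variable name for this
// example; the loader itself does not read any environment variable.
const connectionString = process.env.AZURE_BLOB_CONNECTION_STRING;
if (!connectionString) {
  throw new Error("Set AZURE_BLOB_CONNECTION_STRING before running this example.");
}

const loader = new AzureBlobStorageFileLoader({
  azureConfig: {
    connectionString,
    container: "container_name",
    blobName: "example.txt",
  },
  unstructuredConfig: {
    apiUrl: "http://localhost:8000/general/v0/general",
    apiKey: "",
  },
});

const docs = await loader.load();
console.log(docs);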
College Confidential
Page Title: College Confidential | 🦜️🔗 Langchain
This example goes over how to load data from the College Confidential website, using Cheerio. One document will be created for each page.

Setup

npm install cheerio
yarn add cheerio
pnpm add cheerio

Usage
import { CollegeConfidentialLoader } from "langchain/document_loaders/web/college_confidential";

const loader = new CollegeConfidentialLoader(
  "https://www.collegeconfidential.com/colleges/brown-university/"
);
const docs = await loader.load();
Confluence
Page Title: Confluence | 🦜️🔗 Langchain
Compatibility: Only available on Node.js.
This covers how to load document objects from pages in a Confluence space.
Credentials

You'll need to set up an access token and provide it along with your Confluence username in order to authenticate the request. You'll also need the space key for the space containing the pages to load as documents. This can be found in the URL when navigating to your space, e.g. https://example.atlassian.net/wiki/spaces/{SPACE_KEY}

And you'll need to install html-to-text to parse the pages into plain text:

npm install html-to-text
yarn add html-to-text
pnpm add html-to-text

Usage
import { ConfluencePagesLoader } from "langchain/document_loaders/web/confluence";

const username = process.env.CONFLUENCE_USERNAME;
const accessToken = process.env.CONFLUENCE_ACCESS_TOKEN;

if (username && accessToken) {
  const loader = new ConfluencePagesLoader({
    baseUrl: "https://example.atlassian.net/wiki",
    spaceKey: "~EXAMPLE362906de5d343d49dcdbae5dEXAMPLE",
    username,
    accessToken,
  });
  const documents = await loader.load();
  console.log(documents);
} else {
  console.log(
    "You must provide a username and access token to run this example."
  );
}
API Reference: ConfluencePagesLoader from langchain/document_loaders/web/confluence
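Each returned document carries the page text in pageContent along with metadata about the source page. A minimal sketch that prints a per-page summary; the metadata keys title and url are assumptions to verify against the documents you actually get back:

import { ConfluencePagesLoader } from "langchain/document_loaders/web/confluence";

const loader = new ConfluencePagesLoader({
  baseUrl: "https://example.atlassian.net/wiki",
  spaceKey: "~EXAMPLE362906de5d343d49dcdbae5dEXAMPLE",
  username: process.env.CONFLUENCE_USERNAME ?? "",
  accessToken: process.env.CONFLUENCE_ACCESS_TOKEN ?? "",
});

const documents = await loader.load();
for (const doc of documents) {
  // title and url are assumed metadata keys; inspect doc.metadata
  // on your own data to confirm what the loader actually provides.
  console.log(doc.metadata.title, doc.metadata.url, doc.pageContent.length);
}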
Figma
Page Title: Figma | 🦜️🔗 Langchain
This example goes over how to load data from a Figma file.
You will need a Figma access token in order to get started.
import { FigmaFileLoader } from "langchain/document_loaders/web/figma";

const loader = new FigmaFileLoader({
  accessToken: "FIGMA_ACCESS_TOKEN", // or load it from process.env.FIGMA_ACCESS_TOKEN
  nodeIds: ["id1", "id2", "id3"],
  fileKey: "key",
});

const docs = await loader.load();
console.log({ docs });
API Reference: FigmaFileLoader from langchain/document_loaders/web/figma
You can find your Figma file's key and node ids by opening the file in your browser and extracting them from the URL:
https://www.figma.com/file/<YOUR FILE KEY HERE>/LangChainJS-Test?type=whiteboard&node-id=<YOUR NODE ID HERE>&t=e6lqWkKecuYQRyRg-0
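If you'd rather not copy these values out by hand, you can parse them from the URL programmatically. A small sketch built around a hypothetical helper (parseFigmaUrl is not part of LangChain) that follows the URL shape shown above:

// Hypothetical helper, not part of LangChain: pulls the file key out of
// the path and the node id out of the query string of a Figma file URL.
function parseFigmaUrl(figmaUrl: string): { fileKey: string; nodeId: string | null } {
  const url = new URL(figmaUrl);
  const segments = url.pathname.split("/").filter(Boolean); // ["file", "<key>", "<name>"]
  return {
    fileKey: segments[1],
    nodeId: url.searchParams.get("node-id"),
  };
}

const { fileKey, nodeId } = parseFigmaUrl(
  "https://www.figma.com/file/abc123/LangChainJS-Test?type=whiteboard&node-id=0-1"
);
console.log(fileKey, nodeId); // "abc123" "0-1"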
GitBook
Page Title: GitBook | 🦜️🔗 Langchain
This example goes over how to load data from any GitBook, using Cheerio. One document will be created for each page.

Setup

npm install cheerio
yarn add cheerio
pnpm add cheerio

Load from a single GitBook page
import { GitbookLoader } from "langchain/document_loaders/web/gitbook";

const loader = new GitbookLoader(
  "https://docs.gitbook.com/product-tour/navigation"
);
const docs = await loader.load();
Load from all paths in a given GitBook

For this to work, the GitbookLoader needs to be initialized with the root path (https://docs.gitbook.com in this example) and have shouldLoadAllPaths set to true.
import { GitbookLoader } from "langchain/document_loaders/web/gitbook";

const loader = new GitbookLoader("https://docs.gitbook.com", {
  shouldLoadAllPaths: true,
});
const docs = await loader.load();
GitHub
Page Title: GitHub | 🦜️🔗 Langchain
This example goes over how to load data from a GitHub repository.
You can set the GITHUB_ACCESS_TOKEN environment variable to a GitHub access token to increase the rate limit and access private repositories.
Setup

The GitHub loader requires the ignore npm package as a peer dependency. Install it like this:
npm install ignore
yarn add ignore
pnpm add ignore
Usage

import { GithubRepoLoader } from "langchain/document_loaders/web/github";

export const run = async () => {
  const loader = new GithubRepoLoader(
    "https://github.com/hwchase17/langchainjs",
    {
      branch: "main",
      recursive: false,
      unknown: "warn",
      maxConcurrency: 5, // Defaults to 2
    }
  );
  const docs = await loader.load();
  console.log({ docs });
};
API Reference: GithubRepoLoader from langchain/document_loaders/web/github
The loader will ignore binary files like images.
Using .gitignore syntax

To ignore specific files, you can pass in an ignorePaths array into the constructor:
import { GithubRepoLoader } from "langchain/document_loaders/web/github";

export const run = async () => {
  const loader = new GithubRepoLoader(
    "https://github.com/hwchase17/langchainjs",
    {
      branch: "main",
      recursive: false,
      unknown: "warn",
      ignorePaths: ["*.md"],
    }
  );
  const docs = await loader.load();
  console.log({ docs }); // Will not include any .md files
};
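Since ignorePaths uses .gitignore syntax, several patterns can be combined to skip whole directories and file types in one pass. A minimal sketch with illustrative patterns; as noted above, setting the GITHUB_ACCESS_TOKEN environment variable raises the rate limit for larger recursive loads:

import { GithubRepoLoader } from "langchain/document_loaders/web/github";

// The patterns below are examples only; any .gitignore-style
// globs and directory patterns can be mixed in ignorePaths.
const loader = new GithubRepoLoader(
  "https://github.com/hwchase17/langchainjs",
  {
    branch: "main",
    recursive: true,
    unknown: "warn",
    ignorePaths: ["*.md", "dist/", "**/*.test.ts", "yarn.lock"],
  }
);

const docs = await loader.load();
console.log(docs.length); // one document per non-ignored text file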
Hacker News
Page Title: Hacker News | 🦜️🔗 Langchain
This example goes over how to load data from the Hacker News website, using Cheerio. One document will be created for each page.

Setup

npm install cheerio
yarn add cheerio
pnpm add cheerio

Usage
import { HNLoader } from "langchain/document_loaders/web/hn";

const loader = new HNLoader("https://news.ycombinator.com/item?id=34817881");
const docs = await loader.load();
IMSDB
Page Title: IMSDB | 🦜️🔗 Langchain
Skip to main content🦜️🔗 LangChainDocsUse casesAPILangSmithPython DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/OData connectionDocument loadersHow-toIntegrationsFile LoadersWeb LoadersCheerioPuppeteerPlaywrightApify DatasetAssemblyAI Audio TranscriptAzure Blob Storage ContainerAzure Blob Storage FileCollege ConfidentialConfluenceFigmaGitBookGitHubHacker NewsIMSDBNotion APIS3 FileSerpAPI LoaderSonix AudioBlockchain DataYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversExperimentalCaching embeddingsChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesData connectionDocument loadersIntegrationsWeb LoadersIMSDBIMSDBThis example goes over how to load data from the internet movie script database website, using Cheerio. One document will be created for each page.SetupnpmYarnpnpmnpm install cheerioyarn add cheeriopnpm add cheerioUsageimport { IMSDBLoader } from "langchain/document_loaders/web/imsdb";const loader = new IMSDBLoader("https://imsdb.com/scripts/BlacKkKlansman.html");const docs = await loader.load();PreviousHacker NewsNextNotion APICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
4e9727215e95-944 | Get startedIntroductionInstallationQuickstartModulesModel I/OData connectionDocument loadersHow-toIntegrationsFile LoadersWeb LoadersCheerioPuppeteerPlaywrightApify DatasetAssemblyAI Audio TranscriptAzure Blob Storage ContainerAzure Blob Storage FileCollege ConfidentialConfluenceFigmaGitBookGitHubHacker NewsIMSDBNotion APIS3 FileSerpAPI LoaderSonix AudioBlockchain DataYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversExperimentalCaching embeddingsChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesData connectionDocument loadersIntegrationsWeb LoadersIMSDBIMSDBThis example goes over how to load data from the internet movie script database website, using Cheerio. One document will be created for each page.SetupnpmYarnpnpmnpm install cheerioyarn add cheeriopnpm add cheerioUsageimport { IMSDBLoader } from "langchain/document_loaders/web/imsdb";const loader = new IMSDBLoader("https://imsdb.com/scripts/BlacKkKlansman.html");const docs = await loader.load();PreviousHacker NewsNextNotion API
ModulesData connectionDocument loadersIntegrationsWeb LoadersIMSDBIMSDBThis example goes over how to load data from the internet movie script database website, using Cheerio. One document will be created for each page.SetupnpmYarnpnpmnpm install cheerioyarn add cheeriopnpm add cheerioUsageimport { IMSDBLoader } from "langchain/document_loaders/web/imsdb";const loader = new IMSDBLoader("https://imsdb.com/scripts/BlacKkKlansman.html");const docs = await loader.load();PreviousHacker NewsNextNotion API |
4e9727215e95-945 | This example goes over how to load data from the Internet Movie Script Database (IMSDB) website, using Cheerio. One document will be created for each page.

Setup

npm install cheerio
yarn add cheerio
pnpm add cheerio

Usage
import { IMSDBLoader } from "langchain/document_loaders/web/imsdb";

const loader = new IMSDBLoader("https://imsdb.com/scripts/BlacKkKlansman.html");
const docs = await loader.load();
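A full movie script is far too long for most embedding and LLM context windows, so in practice you would split the loaded document before indexing it. A short sketch using LangChain's RecursiveCharacterTextSplitter; the chunk sizes below are arbitrary choices, not from the original page:

import { IMSDBLoader } from "langchain/document_loaders/web/imsdb";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const loader = new IMSDBLoader("https://imsdb.com/scripts/BlacKkKlansman.html");
const docs = await loader.load();

// Split the script into overlapping ~1000-character chunks before indexing.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 100,
});
const chunks = await splitter.splitDocuments(docs);
console.log(`Split ${docs.length} document(s) into ${chunks.length} chunks`);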
Notion API
Page Title: Notion API | 🦜️🔗 Langchain
4e9727215e95-952 | This guide will take you through the steps required to load documents from Notion pages and databases using the Notion API.

Overview

Notion is a versatile productivity platform that consolidates note-taking, task management, and data organization tools into one interface. This document loader is able to take full Notion pages and databases and turn them into LangChain Documents ready to be integrated into your projects.

Setup

You will first need to install the official Notion client and the notion-to-md package as peer dependencies:

npm install @notionhq/client notion-to-md
yarn add @notionhq/client notion-to-md
pnpm add @notionhq/client notion-to-md

Create a Notion integration and securely record the Internal Integration Secret (also known as NOTION_INTEGRATION_TOKEN).

Add a connection to your new integration on your page or database. To do this, open your Notion page, go to the settings pips in the top right, scroll down to Add connections, and select your new integration.

Get the PAGE_ID or DATABASE_ID for the page or database you want to load. The 32-character hex string in the URL path represents the ID. For example:

PAGE_ID: https://www.notion.so/skarard/LangChain-Notion-API-b34ca03f219c4420a6046fc4bdfdf7b4
DATABASE_ID: https://www.notion.so/skarard/c393f19c3903440da0d34bf9c6c12ff2?v=9c70a0f4e174498aa0f9021e0a9d52de
REGEX: /(?<!=)[0-9a-f]{32}/

Example Usage
import { NotionAPILoader } from "langchain/document_loaders/web/notionapi";

// Loading a page (including child pages, all as separate documents)
const pageLoader = new NotionAPILoader({
  clientOptions: {
    auth: "<NOTION_INTEGRATION_TOKEN>",
  },
  id: "<PAGE_ID>",
  type: "page",
});

// A page's contents are likely to be more than 1000 characters, so they are
// split into multiple documents (important for vectorization).
const pageDocs = await pageLoader.loadAndSplit();
console.log({ pageDocs });

// Loading a database (each row is a separate document with all properties as metadata)
const dbLoader = new NotionAPILoader({
  clientOptions: {
    auth: "<NOTION_INTEGRATION_TOKEN>",
  },
  id: "<DATABASE_ID>",
  type: "database",
});

// A database row's contents are likely to be less than 1000 characters,
// so they are not split into multiple documents.
const dbDocs = await dbLoader.load();
console.log({ dbDocs });
API Reference:NotionAPILoader from langchain/document_loaders/web/notionapi
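As a convenience, the documented regex can be used to pull the ID out of a URL programmatically. A small sketch; extractNotionId is a hypothetical helper for illustration, not part of the LangChain API:

// Hypothetical helper using the regex documented above; the negative
// lookbehind (?<!=) skips the hex string in the ?v= query parameter.
const NOTION_ID_REGEX = /(?<!=)[0-9a-f]{32}/;

function extractNotionId(url: string): string | null {
  const match = url.match(NOTION_ID_REGEX);
  return match ? match[0] : null;
}

console.log(
  extractNotionId(
    "https://www.notion.so/skarard/LangChain-Notion-API-b34ca03f219c4420a6046fc4bdfdf7b4"
  )
); // "b34ca03f219c4420a6046fc4bdfdf7b4"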
S3 File
Page Title: S3 File | 🦜️🔗 Langchain
4e9727215e95-961 | Compatibility: Only available on Node.js.

This covers how to load document objects from an S3 file object.

Setup

To run this loader, you'll need to have Unstructured already set up and ready to use at an available URL endpoint. It can also be configured to run locally. See the docs here for information on how to do that.

You'll also need to install the official AWS SDK:

npm install @aws-sdk/client-s3
yarn add @aws-sdk/client-s3
pnpm add @aws-sdk/client-s3

Usage

Once Unstructured is configured, you can use the S3 loader to load files and then convert them into a Document.

You can optionally provide an s3Config parameter to specify your bucket region, access key, and secret access key. If these are not provided, you will need to have them in your environment (e.g., by running aws configure).
import { S3Loader } from "langchain/document_loaders/web/s3";

const loader = new S3Loader({
  bucket: "my-document-bucket-123",
  key: "AccountingOverview.pdf",
  s3Config: {
    region: "us-east-1",
    credentials: {
      accessKeyId: "AKIAIOSFODNN7EXAMPLE",
      secretAccessKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    },
  },
  unstructuredAPIURL: "http://localhost:8000/general/v0/general",
  unstructuredAPIKey: "", // this will soon be required
});

const docs = await loader.load();
console.log(docs);
API Reference:S3Loader from langchain/document_loaders/web/s3
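Since s3Config is optional, here is a minimal sketch that relies on credentials already present in the environment, assuming you have run aws configure or set the standard AWS_* environment variables:

import { S3Loader } from "langchain/document_loaders/web/s3";

// No s3Config: the AWS SDK falls back to its default credential chain
// (environment variables, shared credentials file, etc.).
const loader = new S3Loader({
  bucket: "my-document-bucket-123",
  key: "AccountingOverview.pdf",
  unstructuredAPIURL: "http://localhost:8000/general/v0/general",
  unstructuredAPIKey: "",
});

const docs = await loader.load();
console.log(docs);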
SerpAPI Loader
Page Title: SerpAPI Loader | 🦜️🔗 Langchain
4e9727215e95-971 | This guide shows how to use SerpAPI with LangChain to load web search results.

Overview

SerpAPI is a real-time API that provides access to search results from various search engines. It is commonly used for tasks like competitor analysis and rank tracking. It empowers businesses to scrape, extract, and make sense of data from all search engines' result pages.

This guide shows how to load web search results using the SerpAPILoader in LangChain. The SerpAPILoader simplifies the process of loading and processing web search results from SerpAPI.

Setup

You'll need to sign up and retrieve your SerpAPI API key.

Usage

Here's an example of how to use the SerpAPILoader:
import { OpenAI } from "langchain/llms/openai";
import { RetrievalQAChain } from "langchain/chains";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { SerpAPILoader } from "langchain/document_loaders/web/serpapi";

// Initialize the necessary components
const llm = new OpenAI();
const embeddings = new OpenAIEmbeddings();
const apiKey = "Your SerpAPI API key";

// Define your question and query
const question = "Your question here";
const query = "Your query here";

// Use SerpAPILoader to load web search results
const loader = new SerpAPILoader({ q: query, apiKey });
const docs = await loader.load();

// Use MemoryVectorStore to store the loaded documents in memory
const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);

// Use RetrievalQAChain to retrieve documents and answer the question
const chain = RetrievalQAChain.fromLLM(llm, vectorStore.asRetriever());
const answer = await chain.call({ query: question });
console.log(answer.text);
API Reference:OpenAI from langchain/llms/openaiRetrievalQAChain from langchain/chainsMemoryVectorStore from langchain/vectorstores/memoryOpenAIEmbeddings from langchain/embeddings/openaiSerpAPILoader from langchain/document_loaders/web/serpapi
In this example, the SerpAPILoader is used to load web search results, which are then stored in memory using MemoryVectorStore. The RetrievalQAChain is then used to retrieve the most relevant documents from the memory and answer the question based on these documents. This demonstrates how the SerpAPILoader can streamline the process of loading and processing web search results.
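If you only need the raw search results rather than a full QA pipeline, you can inspect the loaded Documents directly. A minimal sketch; the query string is just an example:

import { SerpAPILoader } from "langchain/document_loaders/web/serpapi";

const loader = new SerpAPILoader({
  q: "LangChain document loaders",
  apiKey: "Your SerpAPI API key",
});
const docs = await loader.load();

// Each loaded Document exposes its text in pageContent and source
// information in metadata.
for (const doc of docs) {
  console.log(doc.metadata, doc.pageContent.slice(0, 120));
}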
Sonix Audio |
Page Title: Sonix Audio | 🦜️🔗 Langchain
4e9727215e95-979 | Compatibility: Only available on Node.js.

This covers how to load document objects from an audio file using the Sonix API.

Setup

To run this loader, you will need to create an account at https://sonix.ai/ and obtain an auth key from the https://my.sonix.ai/api page.

You'll also need to install the sonix-speech-recognition library:

npm install sonix-speech-recognition
yarn add sonix-speech-recognition
pnpm add sonix-speech-recognition

Usage

Once the auth key is configured, you can use the loader to create transcriptions and then convert them into a Document. In the request parameter, you can either specify a local file by setting audioFilePath or a remote file using audioUrl. You will also need to specify the audio language. See the list of supported languages here.
import { SonixAudioTranscriptionLoader } from "langchain/document_loaders/web/sonix_audio";

const loader = new SonixAudioTranscriptionLoader({
  sonixAuthKey: "SONIX_AUTH_KEY",
  request: {
    audioFilePath: "LOCAL_AUDIO_FILE_PATH",
    fileName: "FILE_NAME",
    language: "en",
  },
});

const docs = await loader.load();
console.log(docs);
API Reference:SonixAudioTranscriptionLoader from langchain/document_loaders/web/sonix_audio
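A variant sketch using the audioUrl option mentioned above to transcribe a remote file instead of uploading a local one; the URL is a placeholder, and whether fileName is still required alongside audioUrl is an assumption to verify:

import { SonixAudioTranscriptionLoader } from "langchain/document_loaders/web/sonix_audio";

const loader = new SonixAudioTranscriptionLoader({
  sonixAuthKey: "SONIX_AUTH_KEY",
  request: {
    audioUrl: "https://example.com/episode.mp3", // placeholder remote file
    fileName: "episode.mp3",
    language: "en",
  },
});

const docs = await loader.load();
console.log(docs);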
Blockchain Data
Page Title: Blockchain Data | 🦜️🔗 Langchain
4e9727215e95-990 | This example shows how to load blockchain data, including NFT metadata and transactions for a contract address, via the sort.xyz SQL API.

You will need a free Sort API key; visit sort.xyz to obtain one.
You will need a free Sort API key, visiting sort.xyz to obtain one. |
4e9727215e95-993 | You will need a free Sort API key, visiting sort.xyz to obtain one.
import { SortXYZBlockchainLoader } from "langchain/document_loaders/web/sort_xyz_blockchain";import { OpenAI } from "langchain/llms/openai";/** * See https://docs.sort.xyz/docs/api-keys to get your free Sort API key. * See https://docs.sort.xyz for more information on the available queries. * See https://docs.sort.xyz/reference for more information about Sort's REST API. *//** * Run the example. */export const run = async () => { // Initialize the OpenAI model. Use OPENAI_API_KEY from .env in /examples const model = new OpenAI({ temperature: 0.9 }); const apiKey = "YOUR_SORTXYZ_API_KEY"; const contractAddress = "0x887F3909C14DAbd9e9510128cA6cBb448E932d7f".toLowerCase(); /* Load NFT metadata from the Ethereum blockchain. Hint: to load by a specific ID, see SQL query example below. */ const nftMetadataLoader = new SortXYZBlockchainLoader({ apiKey, query: { type: "NFTMetadata", blockchain: "ethereum", contractAddress, }, }); const nftMetadataDocs = await nftMetadataLoader.load(); const nftPrompt = "Describe the character with the attributes from the following json document in a 4 sentence story. "; const nftResponse = await model.call( nftPrompt + JSON.stringify(nftMetadataDocs[0], null, 2) ); console.log(`user > ${nftPrompt}`); console.log(`chatgpt > ${nftResponse}`); /* Load the latest transactions for a contract address from the Ethereum blockchain. |
4e9727215e95-994 | / const latestTransactionsLoader = new SortXYZBlockchainLoader({ apiKey, query: { type: "latestTransactions", blockchain: "ethereum", contractAddress, }, }); const latestTransactionsDocs = await latestTransactionsLoader.load(); const latestPrompt = "Describe the following json documents in only 4 sentences per document. Include as much detail as possible. "; const latestResponse = await model.call( latestPrompt + JSON.stringify(latestTransactionsDocs[0], null, 2) ); console.log(`\n\nuser > ${nftPrompt}`); console.log(`chatgpt > ${latestResponse}`); /* Load metadata for a specific NFT by using raw SQL and the NFT index. See https://docs.sort.xyz for forumulating SQL. */ const sqlQueryLoader = new SortXYZBlockchainLoader({ apiKey, query: `SELECT * FROM ethereum.nft_metadata WHERE contract_address = '${contractAddress}' AND token_id = 1 LIMIT 1`, }); const sqlDocs = await sqlQueryLoader.load(); const sqlPrompt = "Describe the character with the attributes from the following json document in an ad for a new coffee shop. "; const sqlResponse = await model.call( sqlPrompt + JSON.stringify(sqlDocs[0], null, 2) ); console.log(`\n\nuser > ${sqlPrompt}`); console.log(`chatgpt > ${sqlResponse}`);};
API Reference:SortXYZBlockchainLoader from langchain/document_loaders/web/sort_xyz_blockchainOpenAI from langchain/llms/openai
YouTube transcripts
Page Title: YouTube transcripts | 🦜️🔗 Langchain
Paragraphs: |
This covers how to load YouTube transcripts into LangChain documents.

Setup

You'll need to install the youtube-transcript package and youtubei.js to extract metadata:
npm install youtube-transcript youtubei.js
yarn add youtube-transcript youtubei.js
pnpm add youtube-transcript youtubei.js
Usage

You need to specify a link to the video in the url parameter. You can also specify the transcript language in ISO 639-1 format and set the addVideoInfo flag to fetch video metadata.
import { YoutubeLoader } from "langchain/document_loaders/web/youtube";const loader = YoutubeLoader.createFromUrl("https://youtu.be/bZQun8Y4L2A", { language: "en", addVideoInfo: true,});const docs = await loader.load();console.log(docs);
API Reference:YoutubeLoader from langchain/document_loaders/web/youtube
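If you plan to index the transcript, you can split it while loading. The sketch below assumes the generic loadAndSplit helper that LangChain document loaders provide; the chunk settings are illustrative, not required by the loader:

import { YoutubeLoader } from "langchain/document_loaders/web/youtube";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const loader = YoutubeLoader.createFromUrl("https://youtu.be/bZQun8Y4L2A", {
  language: "en",
  addVideoInfo: true,
});

// Illustrative chunk settings; tune them for your model's context window.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 100,
});

// loadAndSplit fetches the transcript, then applies the splitter to it.
const docs = await loader.loadAndSplit(splitter);
console.log(docs.length);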
Page Title: Document transformers | 🦜️🔗 Langchain
Paragraphs:
Document transformers

Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is splitting a long document into smaller chunks that fit into your model's context window. LangChain
has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.

Text splitters

When you want to deal with long pieces of text, it is necessary to split up that text into chunks.
4e9727215e95-999 | As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What "semantically related" means could depend on the type of text.
This section showcases several ways to do that.

At a high level, text splitters work as follows:

Split the text up into small, semantically meaningful chunks (often sentences).
Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).

That means there are two different axes along which you can customize your text splitter:

How the text is split
How the chunk size is measured

Get started with text splitters

The recommended TextSplitter is the RecursiveCharacterTextSplitter. This will split documents recursively by different characters - starting with "\n\n", then "\n", then " ". This is nice because it will try to keep all the semantically relevant content in the same place for as long as possible.

Important parameters to know here are chunkSize and chunkOverlap. chunkSize controls the max size (in terms of number of characters) of the final documents. chunkOverlap specifies how much overlap there should be between chunks. This is often helpful to make sure that the text isn't split weirdly. In the example below we set these values to be small (for illustration purposes), but in practice they default to 1000 and 200 respectively.

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";/* Sample text; the end of the string and the splitter calls are illustrative. */const text = `Hi.\n\nI'm Harrison.\n\nHow? Are? you?`;const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 10, chunkOverlap: 1,});const output = await splitter.createDocuments([text]);console.log(output);
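Both customization axes map onto constructor options. Here is a hedged sketch (assuming the separators option on RecursiveCharacterTextSplitter; the values shown mirror its documented defaults):

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Customize how the text is split: separators are tried in order, from
// coarsest to finest. chunkSize measures size in characters by default.
const splitter = new RecursiveCharacterTextSplitter({
  separators: ["\n\n", "\n", " ", ""],
  chunkSize: 1000,
  chunkOverlap: 200,
});

// splitText returns plain string chunks; use createDocuments for Documents.
const chunks = await splitter.splitText("Some long document text...");
console.log(chunks.length);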