ReadTheDocs Documentation

Read the Docs is an open-source, free software documentation hosting platform. It generates documentation written with the Sphinx documentation generator.

This notebook covers how to load content from HTML that was generated as part of a Read-The-Docs build. For an example of this in the wild, see here.

This assumes that the HTML has already been scraped into a folder. This can be done by uncommenting and running the following commands:

#!pip install beautifulsoup4
#!wget -r -A.html -P rtdocs https://python.langchain.com/en/latest/

from langchain.document_loaders import ReadTheDocsLoader

loader = ReadTheDocsLoader("rtdocs", features="html.parser")
docs = loader.load()
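A short sketch, not part of the original notebook, showing one way to inspect what the loader returned; it assumes the rtdocs folder produced by the wget command above.

print(len(docs))                   # number of documents extracted from the scraped HTML files
print(docs[0].metadata["source"])  # path of the HTML file the first document came from
print(docs[0].page_content[:200])  # first 200 characters of the extracted text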
Images | 🦜️🔗 Langchain
This covers how to load images such as JPG or PNG into a document format that we can use downstream.
Using Unstructured

#!pip install pdfminer

from langchain.document_loaders.image import UnstructuredImageLoader

loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg")
data = loader.load()
data[0]

    Document(page_content="LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n\n\n‘Zxjiang Shen' (F3}, Ruochen Zhang”, Melissa Dell*, Benjamin Charles Germain\nLeet, Jacob Carlson, and Weining LiF\n\n\nsugehen\n\nshangthrows, et\n\n“Abstract. Recent advanocs in document image analysis (DIA) have been\n‘pimarliy driven bythe application of neural networks dell roar\n{uteomer could be aly deployed in production and extended fo farther\n[nvetigtion. However, various factory ke lcely organize codebanee\nsnd sophisticated modal cnigurations compat the ey ree of\n‘erin! innovation by wide sence, Though there have been sng\n‘Hors to improve reuablty and simplify deep lees (DL) mode\n‘aon, sone of them ae optimized for challenge inthe demain of DIA,\nThis roprscte a major gap in the extng fol, sw DIA i eal to\nscademic research acon wie range of dpi in the social ssencee\n[rary for streamlining the sage of DL in DIA research and appicn\n‘tons The core LayoutFaraer brary comes with a sch of simple and\nIntative interfaee or applying and eutomiing DI. odel fr Inyo de\npltfom for sharing both protrined modes an fal document dist\n{ation pipeline We demonutate that LayootPareer shea fr both\nlightweight and lrgeseledgtieation pipelines in eal-word uae ces\nThe leary pblely smal at Btspe://layost-pareergsthab So\n\n\n\n‘Keywords: Document Image Analysis» Deep Learning Layout Analysis\n‘Character Renguition - Open Serres dary « Tol\n\n\nIntroduction\n\n\n‘Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndoctiment image analysis (DIA) tea including document image clasiffeation [I]\n", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0)

Note that the page_content above is raw OCR output from the image, so the extracted text itself is noisy.

Retain Elements

Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".

loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg", mode="elements")
data = loader.load()
data[0]

    Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg', 'filename': 'layout-parser-paper-fast.jpg', 'page_number': 1, 'category': 'Title'}, lookup_index=0)
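A small sketch, not from the original page: with mode="elements", each element's type is stored under the "category" key of the document metadata (as in the output above), so the loaded documents can be filtered by element type.

loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg", mode="elements")
data = loader.load()

# keep only the elements that Unstructured classified as titles
titles = [doc for doc in data if doc.metadata.get("category") == "Title"]
print([doc.page_content for doc in titles])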
Async Chromium | 🦜️🔗 Langchain
Chromium is one of the browsers supported by Playwright, a library used to control browser automation.
By running p.chromium.launch(headless=True), we are launching a headless instance of Chromium. Headless mode means that the browser is running without a graphical user interface.

AsyncChromiumLoader loads the page, and then we use Html2TextTransformer to transform it to text.

pip install -q playwright beautifulsoup4
playwright install

from langchain.document_loaders import AsyncChromiumLoader

urls = ["https://www.wsj.com"]
loader = AsyncChromiumLoader(urls)
docs = loader.load()
docs[0].page_content[0:100]

    '<!DOCTYPE html><html lang="en"><head><script src="https://s0.2mdn.net/instream/video/client.js" asyn'

from langchain.document_transformers import Html2TextTransformer

html2text = Html2TextTransformer()
docs_transformed = html2text.transform_documents(docs)
docs_transformed[0].page_content[0:500]

    "Skip to Main ContentSkip to SearchSkip to... Select * Top News * What's News *\nFeatured Stories * Retirement * Life & Arts * Hip-Hop * Sports * Video *\nEconomy * Real Estate * Sports * CMO * CIO * CFO * Risk & Compliance *\nLogistics Report * Sustainable Business * Heard on the Street * Barron’s *\nMarketWatch * Mansion Global * Penta * Opinion * Journal Reports * Sponsored\nOffers Explore Our Brands * WSJ * * * * * Barron's * * * * * MarketWatch * * *\n* * IBD # The Wall Street Journal SubscribeSig"
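A brief sketch, not from the original page, wrapping the same two steps into a helper that fetches several pages and returns plain-text documents; the second URL is only an illustration.

from langchain.document_loaders import AsyncChromiumLoader
from langchain.document_transformers import Html2TextTransformer

def fetch_as_text(urls):
    # render each URL in headless Chromium, then strip the HTML down to readable text
    docs = AsyncChromiumLoader(urls).load()
    return Html2TextTransformer().transform_documents(docs)

pages = fetch_as_text(["https://www.wsj.com", "https://python.langchain.com"])
print(pages[0].page_content[:200])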
Sitemap | 🦜️🔗 Langchain
Extending WebBaseLoader, SitemapLoader loads a sitemap from a given URL, then scrapes and loads all pages in the sitemap, returning each page as a Document.
The scraping is done concurrently. There are reasonable limits to concurrent requests, defaulting to 2 per second. If you aren't concerned about being a good citizen, or you control the server you are scraping and don't care about load, you can increase this limit. Note that while this will speed up the scraping process, it may cause the server to block you. Be careful!

pip install nest_asyncio

    Requirement already satisfied: nest_asyncio in /Users/tasp/Code/projects/langchain/.venv/lib/python3.10/site-packages (1.5.6)

    [notice] A new release of pip available: 22.3.1 -> 23.0.1
    [notice] To update, run: pip install --upgrade pip

# fixes a bug with asyncio and jupyter
import nest_asyncio

nest_asyncio.apply()

from langchain.document_loaders.sitemap import SitemapLoader

sitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")
docs = sitemap_loader.load()

You can change the requests_per_second parameter to increase the maximum number of concurrent requests, and use requests_kwargs to pass kwargs when sending requests.

sitemap_loader.requests_per_second = 2
# Optional: avoid `[SSL: CERTIFICATE_VERIFY_FAILED]` issue
sitemap_loader.requests_kwargs = {"verify": False}

docs[0]

    Document(page_content='\n\n\n\n\n\n\n\n\n\nLangChain Python API Reference Documentation.\n\n\n\n\n\n\n\n\n\nYou will be automatically redirected to the new location of this page.\n\n', metadata={'source': 'https://api.python.langchain.com/en/stable/', 'loc': 'https://api.python.langchain.com/en/stable/', 'lastmod': '2023-10-13T18:13:26.966937+00:00', 'changefreq': 'weekly', 'priority': '1'})

Filtering sitemap URLs

Sitemaps can be massive files, with thousands of URLs. Often you don't need every single one of them. You can filter the URLs by passing a list of strings or regex patterns to the filter_urls parameter. Only URLs that match one of the patterns will be loaded.
loader = SitemapLoader(
    web_path="https://langchain.readthedocs.io/sitemap.xml",
    filter_urls=["https://api.python.langchain.com/en/latest"],
)
documents = loader.load()

    Fetching pages: 100%|##########| 1/1 [00:00<00:00, 16.39it/s]

documents[0]

    Document(page_content='\n\n\n\n\n\n\n\n\n\nLangChain Python API Reference Documentation.\n\n\n\n\n\n\n\n\n\nYou will be automatically redirected to the new location of this page.\n\n', metadata={'source': 'https://api.python.langchain.com/en/latest/', 'loc': 'https://api.python.langchain.com/en/latest/', 'lastmod': '2023-10-13T18:09:58.478681+00:00', 'changefreq': 'daily', 'priority': '0.9'})

Add custom scraping rules

The SitemapLoader uses beautifulsoup4 for the scraping process, and it scrapes every element on the page by default. The SitemapLoader constructor accepts a custom scraping function. This feature can be helpful to tailor the scraping process to your specific needs; for example, you might want to avoid scraping headers or navigation elements. The following example shows how to develop and use a custom function to avoid navigation and header elements.

Import the beautifulsoup4 library and define the custom function.

pip install beautifulsoup4

from bs4 import BeautifulSoup

def remove_nav_and_header_elements(content: BeautifulSoup) -> str:
    # Find all 'nav' and 'header' elements in the BeautifulSoup object
    nav_elements = content.find_all("nav")
    header_elements = content.find_all("header")

    # Remove each 'nav' and 'header' element from the BeautifulSoup object
    for element in nav_elements + header_elements:
        element.decompose()

    return str(content.get_text())

Add your custom function to the SitemapLoader object.

loader = SitemapLoader(
    "https://langchain.readthedocs.io/sitemap.xml",
    filter_urls=["https://api.python.langchain.com/en/latest/"],
    parsing_function=remove_nav_and_header_elements,
)
Local Sitemap

The sitemap loader can also be used to load local files.

sitemap_loader = SitemapLoader(web_path="example_data/sitemap.xml", is_local=True)
docs = sitemap_loader.load()

    Fetching pages: 100%|##########| 3/3 [00:00<00:00, 12.46it/s]
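A brief sketch, not part of the original page, combining the pieces above: a regex entry in filter_urls, the custom parsing function, and a lower request rate. The regex pattern is only an illustration.

from langchain.document_loaders.sitemap import SitemapLoader

loader = SitemapLoader(
    web_path="https://langchain.readthedocs.io/sitemap.xml",
    # filter_urls also accepts regex patterns, not just string prefixes
    filter_urls=[r"https://api\.python\.langchain\.com/en/latest/.*"],
    parsing_function=remove_nav_and_header_elements,  # defined in "Add custom scraping rules" above
)
loader.requests_per_second = 1  # stay well under the default rate limit
docs = loader.load()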
Obsidian | 🦜️🔗 Langchain
Obsidian is a powerful and extensible knowledge base that works on top of your local folder of plain text files.
This notebook covers how to load documents from an Obsidian database.

Since Obsidian is just stored on disk as a folder of Markdown files, the loader simply takes a path to this directory.

Obsidian files also sometimes contain metadata, which is a YAML block at the top of the file. These values will be added to the document's metadata. (ObsidianLoader can also be passed a collect_metadata=False argument to disable this behavior.)

from langchain.document_loaders import ObsidianLoader

loader = ObsidianLoader("<path-to-obsidian>")
docs = loader.load()
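A minimal sketch, not part of the original page, comparing loading with and without front-matter metadata collection; "<path-to-obsidian>" is a placeholder for your own vault directory.

from langchain.document_loaders import ObsidianLoader

docs = ObsidianLoader("<path-to-obsidian>").load()
print(docs[0].metadata)  # includes any YAML front-matter keys found in the note

docs_plain = ObsidianLoader("<path-to-obsidian>", collect_metadata=False).load()
print(docs_plain[0].metadata)  # front-matter keys are not collected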
Alibaba Cloud MaxCompute | 🦜️🔗 Langchain
Alibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production costs, and ensure data security.
The MaxComputeLoader lets you execute a MaxCompute SQL query and load the results as one document per row.

pip install pyodps

    Collecting pyodps
      Downloading pyodps-0.11.4.post0-cp39-cp39-macosx_10_9_universal2.whl (2.0 MB)
         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 1.7 MB/s eta 0:00:00
    Requirement already satisfied: charset-normalizer>=2 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (3.1.0)
    Requirement already satisfied: urllib3<2.0,>=1.26.0 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (1.26.15)
    Requirement already satisfied: idna>=2.5 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (3.4)
    Requirement already satisfied: certifi>=2017.4.17 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (2023.5.7)
    Installing collected packages: pyodps
    Successfully installed pyodps-0.11.4.post0

Basic Usage

To instantiate the loader you'll need a SQL query to execute, your MaxCompute endpoint and project name, and your access ID and secret access key. The access ID and secret access key can either be passed in directly via the access_id and secret_access_key parameters, or they can be set as the environment variables MAX_COMPUTE_ACCESS_ID and MAX_COMPUTE_SECRET_ACCESS_KEY.

from langchain.document_loaders import MaxComputeLoader

base_query = """
SELECT *
FROM (
    SELECT 1 AS id, 'content1' AS content, 'meta_info1' AS meta_info
    UNION ALL
    SELECT 2 AS id, 'content2' AS content, 'meta_info2' AS meta_info
    UNION ALL
    SELECT 3 AS id, 'content3' AS content, 'meta_info3' AS meta_info
) mydata;
"""

endpoint = "<ENDPOINT>"
project = "<PROJECT>"
ACCESS_ID = "<ACCESS ID>"
SECRET_ACCESS_KEY = "<SECRET ACCESS KEY>"

loader = MaxComputeLoader.from_params(
    base_query,
    endpoint,
    project,
    access_id=ACCESS_ID,
    secret_access_key=SECRET_ACCESS_KEY,
)
data = loader.load()
print(data)

    [Document(page_content='id: 1\ncontent: content1\nmeta_info: meta_info1', metadata={}), Document(page_content='id: 2\ncontent: content2\nmeta_info: meta_info2', metadata={}), Document(page_content='id: 3\ncontent: content3\nmeta_info: meta_info3', metadata={})]

print(data[0].page_content)

    id: 1
    content: content1
    meta_info: meta_info1

print(data[0].metadata)

    {}

Specifying Which Columns are Content vs Metadata

You can configure which subset of columns should be loaded as the contents of the Document and which as the metadata using the page_content_columns and metadata_columns parameters.

loader = MaxComputeLoader.from_params(
    base_query,
    endpoint,
    project,
    page_content_columns=["content"],  # Specify Document page content
    metadata_columns=["id", "meta_info"],  # Specify Document metadata
    access_id=ACCESS_ID,
    secret_access_key=SECRET_ACCESS_KEY,
)
data = loader.load()
print(data[0].page_content)

    content: content1

print(data[0].metadata)

    {'id': 1, 'meta_info': 'meta_info1'}
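As noted above, the credentials can also come from environment variables instead of explicit parameters. A small sketch, not from the original page, assuming base_query, endpoint, and project as defined earlier; the credential values are placeholders.

import os

from langchain.document_loaders import MaxComputeLoader

os.environ["MAX_COMPUTE_ACCESS_ID"] = "<ACCESS ID>"                  # placeholder
os.environ["MAX_COMPUTE_SECRET_ACCESS_KEY"] = "<SECRET ACCESS KEY>"  # placeholder

# with the environment variables set, access_id and secret_access_key can be omitted
loader = MaxComputeLoader.from_params(base_query, endpoint, project)
data = loader.load()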
Airbyte Zendesk Support | 🦜️🔗 Langchain
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
Zendesk SupportAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.This loader exposes the Zendesk Support connector as a document loader, allowing you to load various objects as documents.Installation​First, you need to install the airbyte-source-zendesk-support python package.#!pip install airbyte-source-zendesk-supportExample​Check out the Airbyte documentation page for details about how to configure the reader.
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. ->: Zendesk SupportAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.This loader exposes the Zendesk Support connector as a document loader, allowing you to load various objects as documents.Installation‚ÄãFirst, you need to install the airbyte-source-zendesk-support python package.#!pip install airbyte-source-zendesk-supportExample‚ÄãCheck out the Airbyte documentation page for details about how to configure the reader.
223
The JSON schema the config object should adhere to can be found on GitHub: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-zendesk-support/source_zendesk_support/spec.json.The general shape looks like this:{ "subdomain": "<your zendesk subdomain>", "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>", "credentials": { "credentials": "api_token", "email": "<your email>", "api_token": "<your api token>" }}By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.from langchain.document_loaders.airbyte import AirbyteZendeskSupportLoaderconfig = { # your zendesk-support configuration}loader = AirbyteZendeskSupportLoader(config=config, stream_name="tickets") # check the documentation linked above for a list of all streamsNow you can load documents the usual way:docs = loader.load()As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method, which returns an iterator instead:docs_iterator = loader.lazy_load()Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:from langchain.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"], metadata=record.data)loader = AirbyteZendeskSupportLoader(config=config, record_handler=handle_record, stream_name="tickets")docs = loader.load()Incremental loads​Some streams allow incremental loading: the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.To take advantage of this,
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. ->: The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-zendesk-support/source_zendesk_support/spec.json.The general shape looks like this:{ "subdomain": "<your zendesk subdomain>", "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>", "credentials": { "credentials": "api_token", "email": "<your email>", "api_token": "<your api token>" }}By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.from langchain.document_loaders.airbyte import AirbyteZendeskSupportLoaderconfig = { # your zendesk-support configuration}loader = AirbyteZendeskSupportLoader(config=config, stream_name="tickets") # check the documentation linked above for a list of all streamsNow you can load documents the usual waydocs = loader.load()As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also you the lazy_load method which returns an iterator instead:docs_iterator = loader.lazy_load()Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different, pass in a record_handler function when creating the loader:from langchain.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"], metadata=record.data)loader = AirbyteZendeskSupportLoader(config=config, record_handler=handle_record, stream_name="tickets")docs = loader.load()Incremental loads‚ÄãSome streams allow incremental loading, this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.To take advantage of this,
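As a concrete illustration of the config shape above, here is a minimal sketch that fills in the JSON fields as a Python dict and iterates the stream lazily; the subdomain, email, and the "subject" metadata key are hypothetical placeholders, not values from the source.
from langchain.document_loaders.airbyte import AirbyteZendeskSupportLoader

# Hypothetical Zendesk credentials -- replace with your own values.
config = {
    "subdomain": "my-company",
    "start_date": "2023-01-01T00:00:00Z",
    "credentials": {
        "credentials": "api_token",
        "email": "agent@example.com",
        "api_token": "<your api token>",
    },
}

loader = AirbyteZendeskSupportLoader(config=config, stream_name="tickets")

# lazy_load returns an iterator, so large ticket volumes are not held in memory at once.
for doc in loader.lazy_load():
    print(doc.metadata.get("subject"))  # "subject" is an assumed ticket field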
224
are updated frequently.To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.last_state = loader.last_state # store safelyincremental_loader = AirbyteZendeskSupportLoader(config=config, stream_name="tickets", state=last_state)new_docs = incremental_loader.load()PreviousAirbyte TypeformNextAirtableInstallationExampleIncremental loadsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. ->: are updated frequently.To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.last_state = loader.last_state # store safelyincremental_loader = AirbyteZendeskSupportLoader(config=config, stream_name="tickets", state=last_state)new_docs = incremental_loader.load()PreviousAirbyte TypeformNextAirtableInstallationExampleIncremental loadsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
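To make the incremental pattern above survive across separate runs, the last_state value can be persisted between invocations. The sketch below assumes pickle is an acceptable serialization for the state object; that choice is an assumption, not part of the loader API.
import pickle
from pathlib import Path

from langchain.document_loaders.airbyte import AirbyteZendeskSupportLoader

STATE_FILE = Path("zendesk_state.pkl")  # hypothetical location for the saved state

# Restore the state saved by a previous run, if there is one.
previous_state = pickle.loads(STATE_FILE.read_bytes()) if STATE_FILE.exists() else None

incremental_loader = AirbyteZendeskSupportLoader(
    config=config,  # the same config dict shown earlier
    stream_name="tickets",
    state=previous_state,
)
new_docs = incremental_loader.load()

# Persist the updated state so the next run only fetches new records (format is an assumption).
STATE_FILE.write_bytes(pickle.dumps(incremental_loader.last_state))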
225
YouTube transcripts | 🦜️🔗 Langchain
YouTube is an online video sharing and social media platform created by Google.
YouTube is an online video sharing and social media platform created by Google. ->: YouTube transcripts | 🦜️🔗 Langchain
227
transcriptsYouTube is an online video sharing and social media platform created by Google.This notebook covers how to load documents from YouTube transcripts.from langchain.document_loaders import YoutubeLoader# !pip install youtube-transcript-apiloader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True)loader.load()Add video info​# ! pip install pytubeloader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True)loader.load()Add language preferences​language param: a list of language codes in descending priority, en by default.translation param: the language to translate the transcript into when YouTube doesn't offer your selected language, en by default.loader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True, language=["en", "id"], translation="en",)loader.load()YouTube loader from Google Cloud​Prerequisites​Create a Google Cloud project or use an existing projectEnable the YouTube APIAuthorize credentials for desktop apppip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib youtube-transcript-api🧑 Instructions for ingesting your Google Docs data​By default, the GoogleApiClient expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_path keyword argument. Same thing with token.json. Note that token.json will be created automatically the first time you use the loader.GoogleApiYoutubeLoader can load from a channel name or a list of video ids. You can obtain the video id from the video URL:
YouTube is an online video sharing and social media platform created by Google.
YouTube is an online video sharing and social media platform created by Google. ->: transcriptsYouTube is an online video sharing and social media platform created by Google.This notebook covers how to load documents from YouTube transcripts.from langchain.document_loaders import YoutubeLoader# !pip install youtube-transcript-apiloader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True)loader.load()Add video info‚Äã# ! pip install pytubeloader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True)loader.load()Add language preferences‚ÄãLanguage param : It's a list of language codes in a descending priority, en by default.translation param : It's a translate preference when the youtube does'nt have your select language, en by default.loader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True, language=["en", "id"], translation="en",)loader.load()YouTube loader from Google Cloud‚ÄãPrerequisites‚ÄãCreate a Google Cloud project or use an existing projectEnable the Youtube ApiAuthorize credentials for desktop apppip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib youtube-transcript-apiüßë Instructions for ingesting your Google Docs data‚ÄãBy default, the GoogleDriveLoader expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_file keyword argument. Same thing with token.json. Note that token.json will be created automatically the first time you use the loader.GoogleApiYoutubeLoader can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL:
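Since from_youtube_url takes a single URL, transcripts for several videos can be collected with a simple loop; the video URLs below are just examples reused from this page.
from langchain.document_loaders import YoutubeLoader

video_urls = [
    "https://www.youtube.com/watch?v=QsYGlZkevEg",
    "https://www.youtube.com/watch?v=TrdevFK_am4",
]

all_docs = []
for url in video_urls:
    # One loader per video; language falls back through the list in order.
    loader = YoutubeLoader.from_youtube_url(url, add_video_info=True, language=["en"])
    all_docs.extend(loader.load())

print(f"Loaded {len(all_docs)} transcript document(s)")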
228
Note: depending on your setup, the service_account_path may also need to be set. See here for more details.from langchain.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader# Init the GoogleApiClientfrom pathlib import Pathgoogle_api_client = GoogleApiClient(credentials_path=Path("your_path_creds.json"))# Use a Channelyoutube_loader_channel = GoogleApiYoutubeLoader( google_api_client=google_api_client, channel_name="Reducible", captions_language="en",)# Use Youtube Idsyoutube_loader_ids = GoogleApiYoutubeLoader( google_api_client=google_api_client, video_ids=["TrdevFK_am4"], add_video_info=True)# returns a list of Documentsyoutube_loader_channel.load()PreviousYouTube audioNextDocument transformersAdd video infoAdd language preferencesYouTube loader from Google CloudPrerequisites🧑 Instructions for ingesting your Google Docs dataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
YouTube is an online video sharing and social media platform created by Google.
YouTube is an online video sharing and social media platform created by Google. ->: Note depending on your set up, the service_account_path needs to be set up. See here for more details.from langchain.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader# Init the GoogleApiClientfrom pathlib import Pathgoogle_api_client = GoogleApiClient(credentials_path=Path("your_path_creds.json"))# Use a Channelyoutube_loader_channel = GoogleApiYoutubeLoader( google_api_client=google_api_client, channel_name="Reducible", captions_language="en",)# Use Youtube Idsyoutube_loader_ids = GoogleApiYoutubeLoader( google_api_client=google_api_client, video_ids=["TrdevFK_am4"], add_video_info=True)# returns a list of Documentsyoutube_loader_channel.load()PreviousYouTube audioNextDocument transformersAdd video infoAdd language preferencesYouTube loader from Google CloudPrerequisites🧑 Instructions for ingesting your Google Docs dataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
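If you authenticate with a service account rather than OAuth desktop credentials, the client can be pointed at the service-account key file instead; a sketch assuming the service_account_path argument mentioned above (the file path is a placeholder):
from pathlib import Path

from langchain.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader

# Placeholder path to a service-account JSON key file.
google_api_client = GoogleApiClient(
    service_account_path=Path("your_service_account_creds.json")
)

loader = GoogleApiYoutubeLoader(
    google_api_client=google_api_client,
    video_ids=["TrdevFK_am4"],
    captions_language="en",
)
docs = loader.load()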
229
Huawei OBS Directory | 🦜️🔗 Langchain
The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents.
The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents. ->: Huawei OBS Directory | 🦜️🔗 Langchain
231
DirectoryThe following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents.# Install the required package# pip install esdk-obs-pythonfrom langchain.document_loaders import OBSDirectoryLoaderendpoint = "your-endpoint"# Configure your access credentialsconfig = { "ak": "your-access-key", "sk": "your-secret-key"}loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint, config=config)loader.load()Specify a Prefix for Loading​If you want to load objects with a specific prefix from the bucket, you can use the following code:loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint, config=config, prefix="test_prefix")loader.load()Get Authentication Information from ECS​If your LangChain application is deployed on Huawei Cloud ECS and an Agency is set up, the loader can directly get the security token from ECS without needing an access key and secret key. config = {"get_token_from_ecs": True}loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint, config=config)loader.load()Use a Public Bucket​If your bucket policy allows anonymous access (anonymous users have listBucket and GetObject permissions), you can directly load the objects without configuring the config parameter.loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint)loader.load()PreviousHacker NewsNextHuawei OBS FileSpecify a Prefix for LoadingGet Authentication Information from ECSUse a Public BucketCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents.
The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents. ->: DirectoryThe following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents.# Install the required package# pip install esdk-obs-pythonfrom langchain.document_loaders import OBSDirectoryLoaderendpoint = "your-endpoint"# Configure your access credentials\nconfig = { "ak": "your-access-key", "sk": "your-secret-key"}loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint, config=config)loader.load()Specify a Prefix for Loading​If you want to load objects with a specific prefix from the bucket, you can use the following code:loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint, config=config, prefix="test_prefix")loader.load()Get Authentication Information from ECS​If your langchain is deployed on Huawei Cloud ECS and Agency is set up, the loader can directly get the security token from ECS without needing access key and secret key. config = {"get_token_from_ecs": True}loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint, config=config)loader.load()Use a Public Bucket​If your bucket's bucket policy allows anonymous access (anonymous users have listBucket and GetObject permissions), you can directly load the objects without configuring the config parameter.loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint)loader.load()PreviousHacker NewsNextHuawei OBS FileSpecify a Prefix for LoadingGet Authentication Information from ECSUse a Public BucketCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
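The options above can be combined, for instance anonymously loading only a prefixed subset of a public bucket. Bucket name, endpoint, and prefix below are placeholders, and it is an assumption that prefix can be passed without a config when the bucket allows anonymous access.
from langchain.document_loaders import OBSDirectoryLoader

# Anonymous access to a public bucket, restricted to one key prefix (all names are placeholders).
loader = OBSDirectoryLoader(
    "your-bucket-name",
    endpoint="your-endpoint",
    prefix="reports/2023/",
)
docs = loader.load()

for doc in docs:
    print(doc.metadata.get("source"))  # the "source" metadata key is an assumption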
232
Figma | 🦜️🔗 Langchain
Figma is a collaborative web application for interface design.
Figma is a collaborative web application for interface design. ->: Figma | 🦜️🔗 Langchain
234
for interface design.This notebook covers how to load data from the Figma REST API into a format that can be ingested into LangChain, along with example usage for code generation.import osfrom langchain.document_loaders.figma import FigmaFileLoaderfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.chat_models import ChatOpenAIfrom langchain.indexes import VectorstoreIndexCreatorfrom langchain.chains import ConversationChain, LLMChainfrom langchain.memory import ConversationBufferWindowMemoryfrom langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,)The Figma API Requires an access token, node_ids, and a file key.The file key can be pulled from the URL. https://www.figma.com/file/{filekey}/sampleFilenameNode IDs are also available in the URL. Click on anything and look for the '?node-id={node_id}' param.Access token instructions are in the Figma help center article: https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokensfigma_loader = FigmaFileLoader( os.environ.get("ACCESS_TOKEN"), os.environ.get("NODE_IDS"), os.environ.get("FILE_KEY"),)# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([figma_loader])figma_doc_retriever = index.vectorstore.as_retriever()def generate_code(human_input): # I have no idea if the Jon Carmack thing makes for better code. YMMV. # See https://python.langchain.com/en/latest/modules/models/chat/getting_started.html for chat info system_prompt_template = """You are expert coder Jon Carmack. Use the provided design context to create idiomatic HTML/CSS code as possible based on the user request. Everything must be inline in one file and your response must be directly renderable by the browser. Figma file nodes and metadata: {context}""" human_prompt_template = "Code the
Figma is a collaborative web application for interface design.
Figma is a collaborative web application for interface design. ->: for interface design.This notebook covers how to load data from the Figma REST API into a format that can be ingested into LangChain, along with example usage for code generation.import osfrom langchain.document_loaders.figma import FigmaFileLoaderfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.chat_models import ChatOpenAIfrom langchain.indexes import VectorstoreIndexCreatorfrom langchain.chains import ConversationChain, LLMChainfrom langchain.memory import ConversationBufferWindowMemoryfrom langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,)The Figma API Requires an access token, node_ids, and a file key.The file key can be pulled from the URL. https://www.figma.com/file/{filekey}/sampleFilenameNode IDs are also available in the URL. Click on anything and look for the '?node-id={node_id}' param.Access token instructions are in the Figma help center article: https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokensfigma_loader = FigmaFileLoader( os.environ.get("ACCESS_TOKEN"), os.environ.get("NODE_IDS"), os.environ.get("FILE_KEY"),)# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([figma_loader])figma_doc_retriever = index.vectorstore.as_retriever()def generate_code(human_input): # I have no idea if the Jon Carmack thing makes for better code. YMMV. # See https://python.langchain.com/en/latest/modules/models/chat/getting_started.html for chat info system_prompt_template = """You are expert coder Jon Carmack. Use the provided design context to create idiomatic HTML/CSS code as possible based on the user request. Everything must be inline in one file and your response must be directly renderable by the browser. Figma file nodes and metadata: {context}""" human_prompt_template = "Code the
235
{context}""" human_prompt_template = "Code the {text}. Ensure it's mobile responsive" system_message_prompt = SystemMessagePromptTemplate.from_template( system_prompt_template ) human_message_prompt = HumanMessagePromptTemplate.from_template( human_prompt_template ) # delete the gpt-4 model_name to use the default gpt-3.5 turbo for faster results gpt_4 = ChatOpenAI(temperature=0.02, model_name="gpt-4") # Use the retriever's 'get_relevant_documents' method if needed to filter down longer docs relevant_nodes = figma_doc_retriever.get_relevant_documents(human_input) conversation = [system_message_prompt, human_message_prompt] chat_prompt = ChatPromptTemplate.from_messages(conversation) response = gpt_4( chat_prompt.format_prompt( context=relevant_nodes, text=human_input ).to_messages() ) return responseresponse = generate_code("page top header")Returns the following in response.content:<!DOCTYPE html>\n<html lang="en">\n<head>\n <meta charset="UTF-8">\n <meta name="viewport" content="width=device-width, initial-scale=1.0">\n <style>\n @import url(\'https://fonts.googleapis.com/css2?family=DM+Sans:wght@500;700&family=Inter:wght@600&display=swap\');\n\n body {\n margin: 0;\n font-family: \'DM Sans\', sans-serif;\n }\n\n .header {\n display: flex;\n justify-content: space-between;\n align-items: center;\n padding: 20px;\n background-color: #fff;\n box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);\n }\n\n .header h1 {\n font-size: 16px;\n font-weight: 700;\n margin: 0;\n }\n\n .header nav {\n display: flex;\n align-items: center;\n }\n\n .header nav a {\n font-size: 14px;\n font-weight: 500;\n text-decoration: none;\n color: #000;\n
Figma is a collaborative web application for interface design.
Figma is a collaborative web application for interface design. ->: {context}""" human_prompt_template = "Code the {text}. Ensure it's mobile responsive" system_message_prompt = SystemMessagePromptTemplate.from_template( system_prompt_template ) human_message_prompt = HumanMessagePromptTemplate.from_template( human_prompt_template ) # delete the gpt-4 model_name to use the default gpt-3.5 turbo for faster results gpt_4 = ChatOpenAI(temperature=0.02, model_name="gpt-4") # Use the retriever's 'get_relevant_documents' method if needed to filter down longer docs relevant_nodes = figma_doc_retriever.get_relevant_documents(human_input) conversation = [system_message_prompt, human_message_prompt] chat_prompt = ChatPromptTemplate.from_messages(conversation) response = gpt_4( chat_prompt.format_prompt( context=relevant_nodes, text=human_input ).to_messages() ) return responseresponse = generate_code("page top header")Returns the following in response.content:<!DOCTYPE html>\n<html lang="en">\n<head>\n <meta charset="UTF-8">\n <meta name="viewport" content="width=device-width, initial-scale=1.0">\n <style>\n @import url(\'https://fonts.googleapis.com/css2?family=DM+Sans:wght@500;700&family=Inter:wght@600&display=swap\');\n\n body {\n margin: 0;\n font-family: \'DM Sans\', sans-serif;\n }\n\n .header {\n display: flex;\n justify-content: space-between;\n align-items: center;\n padding: 20px;\n background-color: #fff;\n box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);\n }\n\n .header h1 {\n font-size: 16px;\n font-weight: 700;\n margin: 0;\n }\n\n .header nav {\n display: flex;\n align-items: center;\n }\n\n .header nav a {\n font-size: 14px;\n font-weight: 500;\n text-decoration: none;\n color: #000;\n
236
none;\n color: #000;\n margin-left: 20px;\n }\n\n @media (max-width: 768px) {\n .header nav {\n display: none;\n }\n }\n </style>\n</head>\n<body>\n <header class="header">\n <h1>Company Contact</h1>\n <nav>\n <a href="#">Lorem Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n </nav>\n </header>\n</body>\n</html>PreviousFaunaNextGeopandasCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Figma is a collaborative web application for interface design.
Figma is a collaborative web application for interface design. ->: none;\n color: #000;\n margin-left: 20px;\n }\n\n @media (max-width: 768px) {\n .header nav {\n display: none;\n }\n }\n </style>\n</head>\n<body>\n <header class="header">\n <h1>Company Contact</h1>\n <nav>\n <a href="#">Lorem Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n </nav>\n </header>\n</body>\n</html>PreviousFaunaNextGeopandasCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
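Because generate_code returns a chat message whose content is a complete HTML document, a natural follow-up is to write it to disk and open it in a browser; a short sketch (the file name is arbitrary):
from pathlib import Path

# Persist the generated markup so it can be previewed directly in a browser.
output_path = Path("generated_header.html")  # arbitrary file name
output_path.write_text(response.content, encoding="utf-8")
print(f"Wrote {output_path.resolve()}")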
237
AssemblyAI Audio Transcripts | 🦜️🔗 Langchain
The AssemblyAIAudioTranscriptLoader allows to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents.
The AssemblyAIAudioTranscriptLoader allows to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. ->: AssemblyAI Audio Transcripts | 🦜️🔗 Langchain
239
this pageAssemblyAI Audio TranscriptsThe AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and load the transcribed text into documents.To use it, you should have the assemblyai python package installed, and the
The AssemblyAIAudioTranscriptLoader allows to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents.
The AssemblyAIAudioTranscriptLoader allows to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. ->: this pageAssemblyAI Audio TranscriptsThe AssemblyAIAudioTranscriptLoader allows to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents.To use it, you should have the assemblyai python package installed, and the
240
environment variable ASSEMBLYAI_API_KEY set with your API key. Alternatively, the API key can also be passed as an argument.More info about AssemblyAI:WebsiteGet a Free API keyAssemblyAI API DocsInstallation​First, you need to install the assemblyai python package.You can find more info about it inside the assemblyai-python-sdk GitHub repo.#!pip install assemblyaiExample​The AssemblyAIAudioTranscriptLoader needs at least the file_path argument. Audio files can be specified as a URL or a local file path.from langchain.document_loaders import AssemblyAIAudioTranscriptLoaderaudio_file = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"# or a local file path: audio_file = "./nbc.mp3"loader = AssemblyAIAudioTranscriptLoader(file_path=audio_file)docs = loader.load()Note: Calling loader.load() blocks until the transcription is finished.The transcribed text is available in the page_content:docs[0].page_content"Load time, a new president and new congressional makeup. Same old ..."The metadata contains the full JSON response with more meta information:docs[0].metadata{'language_code': <LanguageCode.en_us: 'en_us'>, 'audio_url': 'https://storage.googleapis.com/aai-docs-samples/nbc.mp3', 'punctuate': True, 'format_text': True, ...}Transcript Formats​You can specify the transcript_format argument for different formats.Depending on the format, one or more documents are returned. These are the different TranscriptFormat options:TEXT: One document with the transcription textSENTENCES: Multiple documents, splits the transcription by each sentencePARAGRAPHS: Multiple documents, splits the transcription by each paragraphSUBTITLES_SRT: One document with the transcript exported in SRT subtitles formatSUBTITLES_VTT: One document with the transcript exported in VTT subtitles formatfrom langchain.document_loaders.assemblyai import TranscriptFormatloader = AssemblyAIAudioTranscriptLoader( file_path="./your_file.mp3", transcript_format=TranscriptFormat.SENTENCES,)docs
The AssemblyAIAudioTranscriptLoader allows to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents.
The AssemblyAIAudioTranscriptLoader allows to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. ->: environment variable ASSEMBLYAI_API_KEY set with your API key. Alternatively, the API key can also be passed as an argument.More info about AssemblyAI:WebsiteGet a Free API keyAssemblyAI API DocsInstallation‚ÄãFirst, you need to install the assemblyai python package.You can find more info about it inside the assemblyai-python-sdk GitHub repo.#!pip install assemblyaiExample‚ÄãThe AssemblyAIAudioTranscriptLoader needs at least the file_path argument. Audio files can be specified as an URL or a local file path.from langchain.document_loaders import AssemblyAIAudioTranscriptLoaderaudio_file = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"# or a local file path: audio_file = "./nbc.mp3"loader = AssemblyAIAudioTranscriptLoader(file_path=audio_file)docs = loader.load()Note: Calling loader.load() blocks until the transcription is finished.The transcribed text is available in the page_content:docs[0].page_content"Load time, a new president and new congressional makeup. Same old ..."The metadata contains the full JSON response with more meta information:docs[0].metadata{'language_code': <LanguageCode.en_us: 'en_us'>, 'audio_url': 'https://storage.googleapis.com/aai-docs-samples/nbc.mp3', 'punctuate': True, 'format_text': True, ...}Transcript Formats‚ÄãYou can specify the transcript_format argument for different formats.Depending on the format, one or more documents are returned. These are the different TranscriptFormat options:TEXT: One document with the transcription textSENTENCES: Multiple documents, splits the transcription by each sentencePARAGRAPHS: Multiple documents, splits the transcription by each paragraphSUBTITLES_SRT: One document with the transcript exported in SRT subtitles formatSUBTITLES_VTT: One document with the transcript exported in VTT subtitles formatfrom langchain.document_loaders.assemblyai import TranscriptFormatloader = AssemblyAIAudioTranscriptLoader( file_path="./your_file.mp3", transcript_format=TranscriptFormat.SENTENCES,)docs
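As an example of the format options listed above, the transcript can also be exported as SRT subtitles and written straight to a file; the file names below are placeholders.
from langchain.document_loaders import AssemblyAIAudioTranscriptLoader
from langchain.document_loaders.assemblyai import TranscriptFormat

loader = AssemblyAIAudioTranscriptLoader(
    file_path="./your_file.mp3",  # placeholder audio file
    transcript_format=TranscriptFormat.SUBTITLES_SRT,
)
docs = loader.load()  # one document containing the SRT-formatted transcript

with open("your_file.srt", "w", encoding="utf-8") as f:
    f.write(docs[0].page_content)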
241
= loader.load()Transcription Config​You can also specify the config argument to use different audio intelligence models.Visit the AssemblyAI API Documentation to get an overview of all available models!import assemblyai as aaiconfig = aai.TranscriptionConfig(speaker_labels=True, auto_chapters=True, entity_detection=True)loader = AssemblyAIAudioTranscriptLoader( file_path="./your_file.mp3", config=config)Pass the API Key as argument​Besides setting the API key as the environment variable ASSEMBLYAI_API_KEY, it is also possible to pass it as an argument.loader = AssemblyAIAudioTranscriptLoader( file_path="./your_file.mp3", api_key="YOUR_KEY")PreviousArxivNextAsync ChromiumInstallationExampleTranscript FormatsTranscription ConfigPass the API Key as argumentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
The AssemblyAIAudioTranscriptLoader allows to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents.
The AssemblyAIAudioTranscriptLoader allows to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. ->: = loader.load()Transcription Config​You can also specify the config argument to use different audio intelligence models.Visit the AssemblyAI API Documentation to get an overview of all available models!import assemblyai as aaiconfig = aai.TranscriptionConfig(speaker_labels=True, auto_chapters=True, entity_detection=True)loader = AssemblyAIAudioTranscriptLoader( file_path="./your_file.mp3", config=config)Pass the API Key as argument​Next to setting the API key as environment variable ASSEMBLYAI_API_KEY, it is also possible to pass it as argument.loader = AssemblyAIAudioTranscriptLoader( file_path="./your_file.mp3", api_key="YOUR_KEY")PreviousArxivNextAsync ChromiumInstallationExampleTranscript FormatsTranscription ConfigPass the API Key as argumentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
242
Diffbot | 🦜️🔗 Langchain
Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page.
Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. ->: Diffbot | 🦜️🔗 Langchain
244
tools, Diffbot doesn't require any rules to read the content on a page.
Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page.
Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. ->: tools, Diffbot doesn't require any rules to read the content on a page.
245
It starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type.
Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page.
Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. ->: It starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type.
246
The result is a website transformed into clean structured data (like JSON or CSV), ready for your application.This covers how to extract HTML documents from a list of URLs using the Diffbot Extract API into a document format that we can use downstream.urls = [ "https://python.langchain.com/en/latest/index.html",]The Diffbot Extract API requires an API token. Once you have it, you can extract the data.Read the instructions on how to get the Diffbot API Token.import osfrom langchain.document_loaders import DiffbotLoaderloader = DiffbotLoader(urls=urls, api_token=os.environ.get("DIFFBOT_API_TOKEN"))With the .load() method, you can see the documents loadedloader.load() [Document(page_content='LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\nBe data-aware: connect a language model to other sources of data\nBe agentic: allow a language model to interact with its environment\nThe LangChain framework is designed with the above principles in mind.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\nGetting Started\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\nGetting Started Documentation\nModules\nThere are several main modules that LangChain provides support for. For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides. These modules are, in increasing order of complexity:\nModels: The various model types and model integrations LangChain supports.\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a
Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page.
Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. ->: The result is a website transformed into clean structured data (like JSON or CSV), ready for your application.This covers how to extract HTML documents from a list of URLs using the Diffbot extract API, into a document format that we can use downstream.urls = [ "https://python.langchain.com/en/latest/index.html",]The Diffbot Extract API Requires an API token. Once you have it, you can extract the data.Read instructions how to get the Diffbot API Token.import osfrom langchain.document_loaders import DiffbotLoaderloader = DiffbotLoader(urls=urls, api_token=os.environ.get("DIFFBOT_API_TOKEN"))With the .load() method, you can see the documents loadedloader.load() [Document(page_content='LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\nBe data-aware: connect a language model to other sources of data\nBe agentic: allow a language model to interact with its environment\nThe LangChain framework is designed with the above principles in mind.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\nGetting Started\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\nGetting Started Documentation\nModules\nThere are several main modules that LangChain provides support for. For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides. These modules are, in increasing order of complexity:\nModels: The various model types and model integrations LangChain supports.\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a
247
provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\nUse Cases\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\nExtraction: Extract structured information from text.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nEvaluation: Generative models are notoriously
Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page.
Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. ->: provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\nUse Cases\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\nExtraction: Extract structured information from text.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nEvaluation: Generative models are notoriously
248
Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\nReference Docs\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\nReference Documentation\nLangChain Ecosystem\nGuides for how other companies/products can be used with LangChain\nLangChain Ecosystem\nAdditional Resources\nAdditional collection of resources we think may be useful as you develop your application!\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.', metadata={'source': 'https://python.langchain.com/en/latest/index.html'})]PreviousDatadog LogsNextDiscordCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page.
Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. ->: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\nReference Docs\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\nReference Documentation\nLangChain Ecosystem\nGuides for how other companies/products can be used with LangChain\nLangChain Ecosystem\nAdditional Resources\nAdditional collection of resources we think may be useful as you develop your application!\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.', metadata={'source': 'https://python.langchain.com/en/latest/index.html'})]PreviousDatadog LogsNextDiscordCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
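The snippet above only extracts and prints the loaded documents. As a rough sketch of one way the Diffbot output might be used downstream (assuming DIFFBOT_API_TOKEN is set in the environment; the chunk sizes are illustrative), the extracted pages can be split into smaller pieces before indexing:

import os
from langchain.document_loaders import DiffbotLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Extract each URL with the Diffbot Extract API (token read from the environment).
urls = ["https://python.langchain.com/en/latest/index.html"]
loader = DiffbotLoader(urls=urls, api_token=os.environ.get("DIFFBOT_API_TOKEN"))
docs = loader.load()

# Split the extracted pages into smaller chunks for downstream indexing or retrieval.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)
print(len(chunks), chunks[0].metadata["source"])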
249
Notebook | 🦜️🔗 Langchain
This notebook covers how to load data from an .ipynb notebook into a format suitable for use with LangChain.
This notebook covers how to load data from an .ipynb notebook into a format suitable for use with LangChain. ->: Notebook | 🦜️🔗 Langchain
250
NotebookThis notebook covers how to load data from an .ipynb
This notebook covers how to load data from an .ipynb notebook into a format suitable for use with LangChain.
This notebook covers how to load data from an .ipynb notebook into a format suitable by LangChain. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataNotebookMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersexample_dataNotebookNotebookThis notebook covers how to load data from an .ipynb
251
notebook covers how to load data from an .ipynb notebook into a format suitable for use with LangChain.from langchain.document_loaders import NotebookLoaderloader = NotebookLoader("example_data/notebook.ipynb")NotebookLoader.load() loads the .ipynb notebook file into a Document object.Parameters:include_outputs (bool): whether to include cell outputs in the resulting document (default is False).max_output_length (int): the maximum number of characters to include from each cell output (default is 10).remove_newline (bool): whether to remove newline characters from the cell sources and outputs (default is False).traceback (bool): whether to include full traceback (default is False).loader.load(include_outputs=True, max_output_length=20, remove_newline=True)PreviousEverNoteNextMicrosoft ExcelCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook covers how to load data from an .ipynb notebook into a format suitable for use with LangChain.
This notebook covers how to load data from an .ipynb notebook into a format suitable by LangChain. ->: notebook covers how to load data from an .ipynb notebook into a format suitable by LangChain.from langchain.document_loaders import NotebookLoaderloader = NotebookLoader("example_data/notebook.ipynb")NotebookLoader.load() loads the .ipynb notebook file into a Document object.Parameters:include_outputs (bool): whether to include cell outputs in the resulting document (default is False).max_output_length (int): the maximum number of characters to include from each cell output (default is 10).remove_newline (bool): whether to remove newline characters from the cell sources and outputs (default is False).traceback (bool): whether to include full traceback (default is False).loader.load(include_outputs=True, max_output_length=20, remove_newline=True)PreviousEverNoteNextMicrosoft ExcelCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
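A minimal sketch tying the parameters above together. The call mirrors the page's own loader.load(...) usage, though in some langchain versions these options are passed to the NotebookLoader constructor instead:

from langchain.document_loaders import NotebookLoader

loader = NotebookLoader("example_data/notebook.ipynb")
# Include (truncated) cell outputs and strip newlines, as described above.
docs = loader.load(include_outputs=True, max_output_length=20, remove_newline=True)
print(docs[0].metadata)            # e.g. {'source': 'example_data/notebook.ipynb'}
print(docs[0].page_content[:200])  # first part of the concatenated notebook cells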
252
Modern Treasury | 🦜️🔗 Langchain
Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.
Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money. ->: Modern Treasury | 🦜️🔗 Langchain
253
Modern TreasuryModern Treasury simplifies complex payment operations. It is a
Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.
Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersModern TreasuryModern TreasuryModern Treasury simplifies complex payment operations. It is a
254
simplifies complex payment operations. It is a unified platform to power products and processes that move money.Connect to banks and payment systemsTrack transactions and balances in real-timeAutomate payment operations for scaleThis notebook covers how to load data from the Modern Treasury REST API into a format that can be ingested into LangChain, along with example usage for vectorization.import osfrom langchain.document_loaders import ModernTreasuryLoaderfrom langchain.indexes import VectorstoreIndexCreatorThe Modern Treasury API requires an organization ID and API key, which can be found in the Modern Treasury dashboard within developer settings.This document loader also requires a resource option which defines what data you want to load.Following resources are available:payment_orders Documentationexpected_payments Documentationreturns Documentationincoming_payment_details Documentationcounterparties Documentationinternal_accounts Documentationexternal_accounts Documentationtransactions Documentationledgers Documentationledger_accounts Documentationledger_transactions Documentationevents Documentationinvoices Documentationmodern_treasury_loader = ModernTreasuryLoader("payment_orders")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([modern_treasury_loader])modern_treasury_doc_retriever = index.vectorstore.as_retriever()PreviousMicrosoft WordNextMongoDBCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.
Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money. ->: simplifies complex payment operations. It is a unified platform to power products and processes that move money.Connect to banks and payment systemsTrack transactions and balances in real-timeAutomate payment operations for scaleThis notebook covers how to load data from the Modern Treasury REST API into a format that can be ingested into LangChain, along with example usage for vectorization.import osfrom langchain.document_loaders import ModernTreasuryLoaderfrom langchain.indexes import VectorstoreIndexCreatorThe Modern Treasury API requires an organization ID and API key, which can be found in the Modern Treasury dashboard within developer settings.This document loader also requires a resource option which defines what data you want to load.Following resources are available:payment_orders Documentationexpected_payments Documentationreturns Documentationincoming_payment_details Documentationcounterparties Documentationinternal_accounts Documentationexternal_accounts Documentationtransactions Documentationledgers Documentationledger_accounts Documentationledger_transactions Documentationevents Documentationinvoices Documentationmodern_treasury_loader = ModernTreasuryLoader("payment_orders")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([modern_treasury_loader])modern_treasury_doc_retriever = index.vectorstore.as_retriever()PreviousMicrosoft WordNextMongoDBCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
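As an illustrative follow-on (not part of the original page), the retriever created above can be queried directly. This sketch assumes Modern Treasury credentials are configured as described, and that an embeddings backend (for example an OpenAI API key) is available to VectorstoreIndexCreator; the question text is hypothetical:

from langchain.document_loaders import ModernTreasuryLoader
from langchain.indexes import VectorstoreIndexCreator

# Build a vectorstore index over payment orders, as in the snippet above.
modern_treasury_loader = ModernTreasuryLoader("payment_orders")
index = VectorstoreIndexCreator().from_loaders([modern_treasury_loader])
modern_treasury_doc_retriever = index.vectorstore.as_retriever()

# Retrieve documents relevant to an illustrative question.
results = modern_treasury_doc_retriever.get_relevant_documents("recent payment orders")
for doc in results:
    print(doc.page_content[:120])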
255
Rockset | 🦜️🔗 Langchain
Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups).
Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups). ->: Rockset | 🦜️🔗 Langchain
256
RocksetRockset is a real-time analytics database which enables queries on
Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups).
Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups). ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersRocksetOn this pageRocksetRockset is a real-time analytics database which enables queries on
257
analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups).This notebook demonstrates how to use Rockset as a document loader in LangChain. To get started, make sure you have a Rockset account and an API key available.Setting up the environmentGo to the Rockset console and get an API key. Find your API region from the API reference. For the purpose of this notebook, we will assume you're using Rockset from Oregon (us-west-2).Set the environment variable ROCKSET_API_KEY.Install the Rockset Python client, which will be used by LangChain to interact with the Rockset database.$ pip3 install rocksetLoading DocumentsThe Rockset integration with LangChain allows you to load documents from Rockset collections with SQL queries. In order to do this, you must construct a RocksetLoader object. Here is an example snippet that initializes a RocksetLoader.from langchain.document_loaders import RocksetLoaderfrom rockset import RocksetClient, Regions, modelsloader = RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 3"), # SQL query ["text"], # content columns metadata_keys=["id", "date"], # metadata columns)Here, you can see that the following query is run:SELECT * FROM langchain_demo LIMIT 3The text column in the collection is used as the page content, and the record's id and date columns are used as metadata (if you do not pass anything into metadata_keys, the whole Rockset document will be used as metadata). To execute the query and access an iterator over the resulting Documents, run:loader.lazy_load()To execute the query and access all resulting Documents
Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups).
Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups). ->: analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups).This notebook demonstrates how to use Rockset as a document loader in langchain. To get started, make sure you have a Rockset account and an API key available.Setting up the environment‚ÄãGo to the Rockset console and get an API key. Find your API region from the API reference. For the purpose of this notebook, we will assume you're using Rockset from Oregon(us-west-2).Set your the environment variable ROCKSET_API_KEY.Install the Rockset python client, which will be used by langchain to interact with the Rockset database.$ pip3 install rocksetLoading DocumentsThe Rockset integration with LangChain allows you to load documents from Rockset collections with SQL queries. In order to do this you must construct a RocksetLoader object. Here is an example snippet that initializes a RocksetLoader.from langchain.document_loaders import RocksetLoaderfrom rockset import RocksetClient, Regions, modelsloader = RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 3"), # SQL query ["text"], # content columns metadata_keys=["id", "date"], # metadata columns)Here, you can see that the following query is run:SELECT * FROM langchain_demo LIMIT 3The text column in the collection is used as the page content, and the record's id and date columns are used as metadata (if you do not pass anything into metadata_keys, the whole Rockset document will be used as metadata). To execute the query and access an iterator over the resulting Documents, run:loader.lazy_load()To execute the query and access all resulting Documents
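The same initialization can read the API key from the ROCKSET_API_KEY environment variable set earlier instead of hard-coding it; a minimal sketch using the collection and columns from this example:

import os
from langchain.document_loaders import RocksetLoader
from rockset import RocksetClient, Regions, models

loader = RocksetLoader(
    RocksetClient(Regions.usw2a1, os.environ["ROCKSET_API_KEY"]),  # Oregon (us-west-2)
    models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 3"),  # SQL query
    ["text"],                      # content column(s)
    metadata_keys=["id", "date"],  # metadata columns
)
docs = loader.load()
print(len(docs), docs[0].metadata)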
258
the query and access all resulting Documents at once, run:loader.load()Here is an example response of loader.load():[ Document( page_content="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas a libero porta, dictum ipsum eget, hendrerit neque. Morbi blandit, ex ut suscipit viverra, enim velit tincidunt tellus, a tempor velit nunc et ex. Proin hendrerit odio nec convallis lobortis. Aenean in purus dolor. Vestibulum orci orci, laoreet eget magna in, commodo euismod justo.", metadata={"id": 83209, "date": "2022-11-13T18:26:45.000000Z"} ), Document( page_content="Integer at finibus odio. Nam sit amet enim cursus lacus gravida feugiat vestibulum sed libero. Aenean eleifend est quis elementum tincidunt. Curabitur sit amet ornare erat. Nulla id dolor ut magna volutpat sodales fringilla vel ipsum. Donec ultricies, lacus sed fermentum dignissim, lorem elit aliquam ligula, sed suscipit sapien purus nec ligula.", metadata={"id": 89313, "date": "2022-11-13T18:28:53.000000Z"} ), Document( page_content="Morbi tortor enim, commodo id efficitur vitae, fringilla nec mi. Nullam molestie faucibus aliquet. Praesent a est facilisis, condimentum justo sit amet, viverra erat. Fusce volutpat nisi vel purus blandit, et facilisis felis accumsan. Phasellus luctus ligula ultrices tellus tempor hendrerit. Donec at ultricies leo.", metadata={"id": 87732, "date": "2022-11-13T18:49:04.000000Z"} )]Using multiple columns as contentYou can choose to use multiple columns as content:from langchain.document_loaders import RocksetLoaderfrom rockset import RocksetClient, Regions, modelsloader = RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo WHERE id=38 LIMIT 1"), ["sentence1", "sentence2"], # TWO content columns)Assuming the "sentence1" field is "This is the first sentence." and the "sentence2" field is "This is the second sentence.", the
Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups).
Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups). ->: the query and access all resulting Documents at once, run:loader.load()Here is an example response of loader.load():[ Document( page_content="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas a libero porta, dictum ipsum eget, hendrerit neque. Morbi blandit, ex ut suscipit viverra, enim velit tincidunt tellus, a tempor velit nunc et ex. Proin hendrerit odio nec convallis lobortis. Aenean in purus dolor. Vestibulum orci orci, laoreet eget magna in, commodo euismod justo.", metadata={"id": 83209, "date": "2022-11-13T18:26:45.000000Z"} ), Document( page_content="Integer at finibus odio. Nam sit amet enim cursus lacus gravida feugiat vestibulum sed libero. Aenean eleifend est quis elementum tincidunt. Curabitur sit amet ornare erat. Nulla id dolor ut magna volutpat sodales fringilla vel ipsum. Donec ultricies, lacus sed fermentum dignissim, lorem elit aliquam ligula, sed suscipit sapien purus nec ligula.", metadata={"id": 89313, "date": "2022-11-13T18:28:53.000000Z"} ), Document( page_content="Morbi tortor enim, commodo id efficitur vitae, fringilla nec mi. Nullam molestie faucibus aliquet. Praesent a est facilisis, condimentum justo sit amet, viverra erat. Fusce volutpat nisi vel purus blandit, et facilisis felis accumsan. Phasellus luctus ligula ultrices tellus tempor hendrerit. Donec at ultricies leo.", metadata={"id": 87732, "date": "2022-11-13T18:49:04.000000Z"} )]Using multiple columns as content‚ÄãYou can choose to use multiple columns as content:from langchain.document_loaders import RocksetLoaderfrom rockset import RocksetClient, Regions, modelsloader = RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 1 WHERE id=38"), ["sentence1", "sentence2"], # TWO content columns)Assuming the "sentence1" field is "This is the first sentence." and the "sentence2" field is "This is the second sentence.", the
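For larger result sets, the iterator returned by lazy_load() can be consumed incrementally instead of materializing everything with load(); a short sketch reusing the loader defined above:

# Stream documents one at a time rather than loading the whole result set into memory.
for doc in loader.lazy_load():
    print(doc.metadata.get("id"), doc.page_content[:80])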
259
field is "This is the second sentence.", the page_content of the resulting Document would be:This is the first sentence.This is the second sentence.You can define you own function to join content columns by setting the content_columns_joiner argument in the RocksetLoader constructor. content_columns_joiner is a method that takes in a List[Tuple[str, Any]]] as an argument, representing a list of tuples of (column name, column value). By default, this is a method that joins each column value with a new line.For example, if you wanted to join sentence1 and sentence2 with a space instead of a new line, you could set content_columns_joiner like so:RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 1 WHERE id=38"), ["sentence1", "sentence2"], content_columns_joiner=lambda docs: " ".join( [doc[1] for doc in docs] ), # join with space instead of /n)The page_content of the resulting Document would be:This is the first sentence. This is the second sentence.Oftentimes you want to include the column name in the page_content. You can do that like this:RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 1 WHERE id=38"), ["sentence1", "sentence2"], content_columns_joiner=lambda docs: "\n".join( [f"{doc[0]}: {doc[1]}" for doc in docs] ),)This would result in the following page_content:sentence1: This is the first sentence.sentence2: This is the second sentence.PreviousRoamNextrspaceSetting up the environmentUsing multiple columns as contentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups).
Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups). ->: field is "This is the second sentence.", the page_content of the resulting Document would be:This is the first sentence.This is the second sentence.You can define you own function to join content columns by setting the content_columns_joiner argument in the RocksetLoader constructor. content_columns_joiner is a method that takes in a List[Tuple[str, Any]]] as an argument, representing a list of tuples of (column name, column value). By default, this is a method that joins each column value with a new line.For example, if you wanted to join sentence1 and sentence2 with a space instead of a new line, you could set content_columns_joiner like so:RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 1 WHERE id=38"), ["sentence1", "sentence2"], content_columns_joiner=lambda docs: " ".join( [doc[1] for doc in docs] ), # join with space instead of /n)The page_content of the resulting Document would be:This is the first sentence. This is the second sentence.Oftentimes you want to include the column name in the page_content. You can do that like this:RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 1 WHERE id=38"), ["sentence1", "sentence2"], content_columns_joiner=lambda docs: "\n".join( [f"{doc[0]}: {doc[1]}" for doc in docs] ),)This would result in the following page_content:sentence1: This is the first sentence.sentence2: This is the second sentence.PreviousRoamNextrspaceSetting up the environmentUsing multiple columns as contentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
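As one further illustrative variant (not from the original page), a custom joiner can skip empty column values while still labelling each value with its column name; a sketch under the same assumptions as the snippets above:

RocksetLoader(
    RocksetClient(Regions.usw2a1, "<api key>"),
    models.QueryRequestSql(query="SELECT * FROM langchain_demo WHERE id=38 LIMIT 1"),
    ["sentence1", "sentence2"],
    # Keep only non-empty values, each prefixed with its column name.
    content_columns_joiner=lambda docs: "\n".join(
        f"{name}: {value}" for name, value in docs if value
    ),
)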
260
PubMed | 🦜️🔗 Langchain
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. ->: PubMed | 🦜️🔗 Langchain
261
PubMedPubMed® by The National Center for Biotechnology Information, National Library of
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersPubMedPubMedPubMed® by The National Center for Biotechnology Information, National Library of
262
Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.from langchain.document_loaders import PubMedLoaderloader = PubMedLoader("chatgpt")docs = loader.load()len(docs) 3docs[1].metadata {'uid': '37548997', 'Title': 'Performance of ChatGPT on the Situational Judgement Test-A Professional Dilemmas-Based Examination for Doctors in the United Kingdom.', 'Published': '2023-08-07', 'Copyright Information': '©Robin J Borchert, Charlotte R Hickman, Jack Pepys, Timothy J Sadler. Originally published in JMIR Medical Education (https://mededu.jmir.org), 07.08.2023.'}docs[1].page_content "BACKGROUND: ChatGPT is a large language model that has performed well on professional examinations in the fields of medicine, law, and business. However, it is unclear how ChatGPT would perform on an examination assessing professionalism and situational judgement for doctors.\nOBJECTIVE: We evaluated the performance of ChatGPT on the Situational Judgement Test (SJT): a national examination taken by all final-year medical students in the United Kingdom. This examination is designed to assess attributes such as communication, teamwork, patient safety, prioritization skills, professionalism, and ethics.\nMETHODS: All questions from the UK Foundation Programme Office's (UKFPO's) 2023 SJT practice examination were inputted into ChatGPT. For each question, ChatGPT's answers and rationales were recorded and assessed on the basis of the official UK Foundation Programme Office scoring template. Questions were categorized into domains of Good Medical Practice on the basis of the domains referenced in the rationales provided in the scoring sheet. Questions without clear domain links were screened by reviewers and assigned one or multiple domains. ChatGPT's overall
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. ->: Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.from langchain.document_loaders import PubMedLoaderloader = PubMedLoader("chatgpt")docs = loader.load()len(docs) 3docs[1].metadata {'uid': '37548997', 'Title': 'Performance of ChatGPT on the Situational Judgement Test-A Professional Dilemmas-Based Examination for Doctors in the United Kingdom.', 'Published': '2023-08-07', 'Copyright Information': '©Robin J Borchert, Charlotte R Hickman, Jack Pepys, Timothy J Sadler. Originally published in JMIR Medical Education (https://mededu.jmir.org), 07.08.2023.'}docs[1].page_content "BACKGROUND: ChatGPT is a large language model that has performed well on professional examinations in the fields of medicine, law, and business. However, it is unclear how ChatGPT would perform on an examination assessing professionalism and situational judgement for doctors.\nOBJECTIVE: We evaluated the performance of ChatGPT on the Situational Judgement Test (SJT): a national examination taken by all final-year medical students in the United Kingdom. This examination is designed to assess attributes such as communication, teamwork, patient safety, prioritization skills, professionalism, and ethics.\nMETHODS: All questions from the UK Foundation Programme Office's (UKFPO's) 2023 SJT practice examination were inputted into ChatGPT. For each question, ChatGPT's answers and rationales were recorded and assessed on the basis of the official UK Foundation Programme Office scoring template. Questions were categorized into domains of Good Medical Practice on the basis of the domains referenced in the rationales provided in the scoring sheet. Questions without clear domain links were screened by reviewers and assigned one or multiple domains. ChatGPT's overall
263
one or multiple domains. ChatGPT's overall performance, as well as its performance across the domains of Good Medical Practice, was evaluated.\nRESULTS: Overall, ChatGPT performed well, scoring 76% on the SJT but scoring full marks on only a few questions (9%), which may reflect possible flaws in ChatGPT's situational judgement or inconsistencies in the reasoning across questions (or both) in the examination itself. ChatGPT demonstrated consistent performance across the 4 outlined domains in Good Medical Practice for doctors.\nCONCLUSIONS: Further research is needed to understand the potential applications of large language models, such as ChatGPT, in medical education for standardizing questions and providing consistent rationales for examinations assessing professionalism and ethics."PreviousPsychicNextPySparkCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. ->: one or multiple domains. ChatGPT's overall performance, as well as its performance across the domains of Good Medical Practice, was evaluated.\nRESULTS: Overall, ChatGPT performed well, scoring 76% on the SJT but scoring full marks on only a few questions (9%), which may reflect possible flaws in ChatGPT's situational judgement or inconsistencies in the reasoning across questions (or both) in the examination itself. ChatGPT demonstrated consistent performance across the 4 outlined domains in Good Medical Practice for doctors.\nCONCLUSIONS: Further research is needed to understand the potential applications of large language models, such as ChatGPT, in medical education for standardizing questions and providing consistent rationales for examinations assessing professionalism and ethics."PreviousPsychicNextPySparkCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
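A hedged sketch of filtering the PubMedLoader results shown above. It relies only on the interface demonstrated in this section (PubMedLoader(query) returning Documents whose metadata contains Title and Published); the query string and the keyword used for filtering are illustrative choices, not part of the original notebook.

from langchain.document_loaders import PubMedLoader

# Query PubMed; each citation becomes one Document.
loader = PubMedLoader("large language models medical education")
docs = loader.load()

# Keep only citations whose title mentions an examination,
# using the Title metadata key shown in the output above.
exam_docs = [d for d in docs if "examination" in d.metadata["Title"].lower()]
for d in exam_docs:
    print(d.metadata["Published"], "-", d.metadata["Title"])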
264
Recursive URL | 🦜️🔗 Langchain
We may want to load all URLs under a root directory.
We may want to load all URLs under a root directory. ->: Recursive URL | 🦜️🔗 Langchain
265
Recursive URLWe may want to load all URLs under a root directory.For
We may want to load all URLs under a root directory.
We may want to load all URLs under a root directory. ->: Recursive URLWe may want to load all URLs under a root directory.For
266
load all URLs under a root directory.For example, let's look at the Python 3.9 Document.This has many interesting child pages that we may want to read in bulk.Of course, the WebBaseLoader can load a list of pages. But the challenge is traversing the tree of child pages and actually assembling that list!We do this using the RecursiveUrlLoader.This also gives us the flexibility to exclude some children, customize the extractor, and more.Parametersurl: str, the target URL to crawl.exclude_dirs: Optional[str], webpage directories to exclude.use_async: Optional[bool], whether to use async requests; async requests are usually faster for large tasks. However, async disables the lazy loading feature (the function still works, but it is not lazy). By default, it is set to False.extractor: Optional[Callable[[str], str]], a function to extract the text of the document from the webpage. It is recommended to use tools like goose3 and beautifulsoup to extract the text. By default, it returns the page as is.max_depth: Optional[int] = None, the maximum depth to crawl. By default, it is set to 2. If you need to crawl the whole website, set it to a sufficiently large number.timeout: Optional[int] = None, the timeout for each request, in seconds. By default, it is set to 10.prevent_outside: Optional[bool] = None, whether to prevent crawling outside the root URL. By default, it is set to True.from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoaderLet's try a simple example.from bs4 import BeautifulSoup as Soupurl = "https://docs.python.org/3.9/"loader = RecursiveUrlLoader(url=url, max_depth=2, extractor=lambda x: Soup(x, "html.parser").text)docs = loader.load()docs[0].page_content[:50] '\n\n\n\n\nPython Frequently Asked Questions — Python 3.'docs[-1].metadata {'source': 'https://docs.python.org/3.9/library/index.html', 'title': 'The Python Standard
We may want to load all URLs under a root directory.
We may want to load all URLs under a root directory. ->: load all URLs under a root directory.For example, let's look at the Python 3.9 Document.This has many interesting child pages that we may want to read in bulk.Of course, the WebBaseLoader can load a list of pages. But the challenge is traversing the tree of child pages and actually assembling that list!We do this using the RecursiveUrlLoader.This also gives us the flexibility to exclude some children, customize the extractor, and more.Parametersurl: str, the target URL to crawl.exclude_dirs: Optional[str], webpage directories to exclude.use_async: Optional[bool], whether to use async requests; async requests are usually faster for large tasks. However, async disables the lazy loading feature (the function still works, but it is not lazy). By default, it is set to False.extractor: Optional[Callable[[str], str]], a function to extract the text of the document from the webpage. It is recommended to use tools like goose3 and beautifulsoup to extract the text. By default, it returns the page as is.max_depth: Optional[int] = None, the maximum depth to crawl. By default, it is set to 2. If you need to crawl the whole website, set it to a sufficiently large number.timeout: Optional[int] = None, the timeout for each request, in seconds. By default, it is set to 10.prevent_outside: Optional[bool] = None, whether to prevent crawling outside the root URL. By default, it is set to True.from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoaderLet's try a simple example.from bs4 import BeautifulSoup as Soupurl = "https://docs.python.org/3.9/"loader = RecursiveUrlLoader(url=url, max_depth=2, extractor=lambda x: Soup(x, "html.parser").text)docs = loader.load()docs[0].page_content[:50] '\n\n\n\n\nPython Frequently Asked Questions — Python 3.'docs[-1].metadata {'source': 'https://docs.python.org/3.9/library/index.html', 'title': 'The Python Standard
267
'title': 'The Python Standard Library — Python 3.9.17 documentation', 'language': None}However, since it's hard to perform a perfect filter, you may still see some irrelevant results. You can filter the returned documents yourself if needed; most of the time, the returned results are good enough.Testing on LangChain docs.url = "https://js.langchain.com/docs/modules/memory/integrations/"loader = RecursiveUrlLoader(url=url)docs = loader.load()len(docs) 8PreviousReadTheDocs DocumentationNextRedditCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
We may want to load all URLs under a root directory.
We may want to load all URLs under a root directory. ->: 'title': 'The Python Standard Library — Python 3.9.17 documentation', 'language': None}However, since it's hard to perform a perfect filter, you may still see some irrelevant results. You can filter the returned documents yourself if needed; most of the time, the returned results are good enough.Testing on LangChain docs.url = "https://js.langchain.com/docs/modules/memory/integrations/"loader = RecursiveUrlLoader(url=url)docs = loader.load()len(docs) 8PreviousReadTheDocs DocumentationNextRedditCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
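A short sketch that combines the parameters documented above (max_depth, timeout, prevent_outside, exclude_dirs) with the BeautifulSoup extractor from the example. The excluded directory and the depth are arbitrary example values, and whether exclude_dirs expects a single string or a sequence may vary by version, so treat the list below as an assumption.

from bs4 import BeautifulSoup as Soup
from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoader

url = "https://docs.python.org/3.9/"
loader = RecursiveUrlLoader(
    url=url,
    max_depth=2,           # how deep to follow child pages
    timeout=10,            # per-request timeout, in seconds
    prevent_outside=True,  # do not follow links outside the root URL
    exclude_dirs=["https://docs.python.org/3.9/faq/"],  # example subtree to skip (assumed to accept a list)
    extractor=lambda x: Soup(x, "html.parser").text,    # strip HTML down to plain text
)
docs = loader.load()
print(len(docs), docs[0].metadata["source"])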
268
Source Code | 🦜️🔗 Langchain
This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into a separate document. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document.
This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into a separate document. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document. ->: Source Code | 🦜️🔗 Langchain
269
Source CodeThis notebook covers how to load source code files using a
This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into a separate document. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document.
This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into a separate document. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document. ->: Source CodeThis notebook covers how to load source code files using a
270
covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into a separate document. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document.This approach can potentially improve the accuracy of QA models over source code. Currently, the supported languages for code parsing are Python and JavaScript. The language used for parsing can be configured, along with the minimum number of lines required to activate the splitting based on syntax.pip install esprimaimport warningswarnings.filterwarnings("ignore")from pprint import pprintfrom langchain.text_splitter import Languagefrom langchain.document_loaders.generic import GenericLoaderfrom langchain.document_loaders.parsers import LanguageParserloader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".py", ".js"], parser=LanguageParser(),)docs = loader.load()len(docs) 6for document in docs: pprint(document.metadata) {'content_type': 'functions_classes', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'} {'content_type': 'functions_classes', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'} {'content_type': 'simplified_code', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'} {'content_type': 'functions_classes', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'} {'content_type': 'functions_classes', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'} {'content_type': 'simplified_code', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'}print("\n\n--8<--\n\n".join([document.page_content for document in docs])) class MyClass: def __init__(self,
This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into a separate document. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document.
This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into a separate document. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document. ->: covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into a separate document. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document.This approach can potentially improve the accuracy of QA models over source code. Currently, the supported languages for code parsing are Python and JavaScript. The language used for parsing can be configured, along with the minimum number of lines required to activate the splitting based on syntax.pip install esprimaimport warningswarnings.filterwarnings("ignore")from pprint import pprintfrom langchain.text_splitter import Languagefrom langchain.document_loaders.generic import GenericLoaderfrom langchain.document_loaders.parsers import LanguageParserloader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".py", ".js"], parser=LanguageParser(),)docs = loader.load()len(docs) 6for document in docs: pprint(document.metadata) {'content_type': 'functions_classes', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'} {'content_type': 'functions_classes', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'} {'content_type': 'simplified_code', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'} {'content_type': 'functions_classes', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'} {'content_type': 'functions_classes', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'} {'content_type': 'simplified_code', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'}print("\n\n--8<--\n\n".join([document.page_content for document in docs])) class MyClass: def __init__(self,
271
class MyClass: def __init__(self, name): self.name = name def greet(self): print(f"Hello, {self.name}!") --8<-- def main(): name = input("Enter your name: ") obj = MyClass(name) obj.greet() --8<-- # Code for: class MyClass: # Code for: def main(): if __name__ == "__main__": main() --8<-- class MyClass { constructor(name) { this.name = name; } greet() { console.log(`Hello, ${this.name}!`); } } --8<-- function main() { const name = prompt("Enter your name:"); const obj = new MyClass(name); obj.greet(); } --8<-- // Code for: class MyClass { // Code for: function main() { main();The parser can be disabled for small files. The parameter parser_threshold indicates the minimum number of lines that the source code file must have to be segmented using the parser.loader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".py"], parser=LanguageParser(language=Language.PYTHON, parser_threshold=1000),)docs = loader.load()len(docs) 1print(docs[0].page_content) class MyClass: def __init__(self, name): self.name = name def greet(self): print(f"Hello, {self.name}!") def main(): name = input("Enter your name: ") obj = MyClass(name) obj.greet() if __name__ == "__main__": main() Splitting‚ÄãAdditional splitting could be needed for those functions, classes, or scripts that are too big.loader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".js"], parser=LanguageParser(language=Language.JS),)docs = loader.load()from langchain.text_splitter import ( RecursiveCharacterTextSplitter, Language,)js_splitter = RecursiveCharacterTextSplitter.from_language(
This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into a separate document. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document.
This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining code top-level code outside the already loaded functions and classes will be loaded into a separate document. ->: class MyClass: def __init__(self, name): self.name = name def greet(self): print(f"Hello, {self.name}!") --8<-- def main(): name = input("Enter your name: ") obj = MyClass(name) obj.greet() --8<-- # Code for: class MyClass: # Code for: def main(): if __name__ == "__main__": main() --8<-- class MyClass { constructor(name) { this.name = name; } greet() { console.log(`Hello, ${this.name}!`); } } --8<-- function main() { const name = prompt("Enter your name:"); const obj = new MyClass(name); obj.greet(); } --8<-- // Code for: class MyClass { // Code for: function main() { main();The parser can be disabled for small files. The parameter parser_threshold indicates the minimum number of lines that the source code file must have to be segmented using the parser.loader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".py"], parser=LanguageParser(language=Language.PYTHON, parser_threshold=1000),)docs = loader.load()len(docs) 1print(docs[0].page_content) class MyClass: def __init__(self, name): self.name = name def greet(self): print(f"Hello, {self.name}!") def main(): name = input("Enter your name: ") obj = MyClass(name) obj.greet() if __name__ == "__main__": main() Splitting‚ÄãAdditional splitting could be needed for those functions, classes, or scripts that are too big.loader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".js"], parser=LanguageParser(language=Language.JS),)docs = loader.load()from langchain.text_splitter import ( RecursiveCharacterTextSplitter, Language,)js_splitter = RecursiveCharacterTextSplitter.from_language(
272
RecursiveCharacterTextSplitter.from_language( language=Language.JS, chunk_size=60, chunk_overlap=0)result = js_splitter.split_documents(docs)len(result) 7print("\n\n--8<--\n\n".join([document.page_content for document in result])) class MyClass { constructor(name) { this.name = name; --8<-- } --8<-- greet() { console.log(`Hello, ${this.name}!`); } } --8<-- function main() { const name = prompt("Enter your name:"); --8<-- const obj = new MyClass(name); obj.greet(); } --8<-- // Code for: class MyClass { // Code for: function main() { --8<-- main();PreviousSnowflakeNextSpreedlySplittingCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into a separate document. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document.
This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into a separate document. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document. ->: RecursiveCharacterTextSplitter.from_language( language=Language.JS, chunk_size=60, chunk_overlap=0)result = js_splitter.split_documents(docs)len(result) 7print("\n\n--8<--\n\n".join([document.page_content for document in result])) class MyClass { constructor(name) { this.name = name; --8<-- } --8<-- greet() { console.log(`Hello, ${this.name}!`); } } --8<-- function main() { const name = prompt("Enter your name:"); --8<-- const obj = new MyClass(name); obj.greet(); } --8<-- // Code for: class MyClass { // Code for: function main() { --8<-- main();PreviousSnowflakeNextSpreedlySplittingCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
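The page splits the JavaScript documents; below is a parallel sketch for the Python documents, built only from the pieces already shown above (GenericLoader.from_filesystem, LanguageParser, and RecursiveCharacterTextSplitter.from_language). The chunk_size of 60 simply mirrors the JS example and is not a recommendation.

from langchain.text_splitter import Language, RecursiveCharacterTextSplitter
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import LanguageParser

# Load only the Python sources, parsed into functions/classes plus simplified code.
loader = GenericLoader.from_filesystem(
    "./example_data/source_code",
    glob="*",
    suffixes=[".py"],
    parser=LanguageParser(language=Language.PYTHON),
)
docs = loader.load()

# Further split any document that is still too large, respecting Python syntax.
py_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=60, chunk_overlap=0
)
result = py_splitter.split_documents(docs)
print(len(result))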
273
Geopandas | 🦜️🔗 Langchain
Geopandas is an open-source project to make working with geospatial data in python easier.
Geopandas is an open-source project to make working with geospatial data in python easier. ->: Geopandas | 🦜️🔗 Langchain
274
GeopandasGeopandas is an open-source project to make working with geospatial data in
Geopandas is an open-source project to make working with geospatial data in python easier.
Geopandas is an open-source project to make working with geospatial data in python easier. ->: GeopandasGeopandas is an open-source project to make working with geospatial data in
275
project to make working with geospatial data in python easier. GeoPandas extends the datatypes used by pandas to allow spatial operations on geometric types. Geometric operations are performed by shapely. Geopandas further depends on fiona for file access and matplotlib for plotting.LLM applications (chat, QA) that utilize geospatial data are an interesting area for exploration.pip install sodapy pip install pandas pip install geopandasimport astimport pandas as pdimport geopandas as gpdfrom langchain.document_loaders import OpenCityDataLoaderCreate a GeoPandas dataframe from Open City Data as an example input.# Load Open City Datadataset = "tmnf-yvry" # San Francisco crime dataloader = OpenCityDataLoader(city_id="data.sfgov.org", dataset_id=dataset, limit=5000)docs = loader.load()# Convert list of dictionaries to DataFramedf = pd.DataFrame([ast.literal_eval(d.page_content) for d in docs])# Extract latitude and longitudedf["Latitude"] = df["location"].apply(lambda loc: loc["coordinates"][1])df["Longitude"] = df["location"].apply(lambda loc: loc["coordinates"][0])# Create geopandas DFgdf = gpd.GeoDataFrame( df, geometry=gpd.points_from_xy(df.Longitude, df.Latitude), crs="EPSG:4326")# Only keep valid longitudes and latitudes for San Franciscogdf = gdf[ (gdf["Longitude"] >= -123.173825) & (gdf["Longitude"] <= -122.281780) & (gdf["Latitude"] >= 37.623983) & (gdf["Latitude"] <= 37.929824)]Visualization of the sample of SF crime data. import matplotlib.pyplot as plt# Load San Francisco map datasf = gpd.read_file("https://data.sfgov.org/resource/3psu-pn9h.geojson")# Plot the San Francisco map and the pointsfig, ax = plt.subplots(figsize=(10, 10))sf.plot(ax=ax, color="white", edgecolor="black")gdf.plot(ax=ax, color="red", markersize=5)plt.show()Load GeoPandas dataframe as a Document for downstream processing (embedding, chat, etc). The geometry will be the default page_content columns, and all other columns are placed in metadata.But, we can specify the
Geopandas is an open-source project to make working with geospatial data in python easier.
Geopandas is an open-source project to make working with geospatial data in python easier. ->: project to make working with geospatial data in python easier. GeoPandas extends the datatypes used by pandas to allow spatial operations on geometric types. Geometric operations are performed by shapely. Geopandas further depends on fiona for file access and matplotlib for plotting.LLM applications (chat, QA) that utilize geospatial data are an interesting area for exploration.pip install sodapy pip install pandas pip install geopandasimport astimport pandas as pdimport geopandas as gpdfrom langchain.document_loaders import OpenCityDataLoaderCreate a GeoPandas dataframe from Open City Data as an example input.# Load Open City Datadataset = "tmnf-yvry" # San Francisco crime dataloader = OpenCityDataLoader(city_id="data.sfgov.org", dataset_id=dataset, limit=5000)docs = loader.load()# Convert list of dictionaries to DataFramedf = pd.DataFrame([ast.literal_eval(d.page_content) for d in docs])# Extract latitude and longitudedf["Latitude"] = df["location"].apply(lambda loc: loc["coordinates"][1])df["Longitude"] = df["location"].apply(lambda loc: loc["coordinates"][0])# Create geopandas DFgdf = gpd.GeoDataFrame( df, geometry=gpd.points_from_xy(df.Longitude, df.Latitude), crs="EPSG:4326")# Only keep valid longitudes and latitudes for San Franciscogdf = gdf[ (gdf["Longitude"] >= -123.173825) & (gdf["Longitude"] <= -122.281780) & (gdf["Latitude"] >= 37.623983) & (gdf["Latitude"] <= 37.929824)]Visualization of the sample of SF crime data. import matplotlib.pyplot as plt# Load San Francisco map datasf = gpd.read_file("https://data.sfgov.org/resource/3psu-pn9h.geojson")# Plot the San Francisco map and the pointsfig, ax = plt.subplots(figsize=(10, 10))sf.plot(ax=ax, color="white", edgecolor="black")gdf.plot(ax=ax, color="red", markersize=5)plt.show()Load GeoPandas dataframe as a Document for downstream processing (embedding, chat, etc). The geometry will be the default page_content columns, and all other columns are placed in metadata.But, we can specify the
276
are placed in metadata.But, we can specify the page_content_column.from langchain.document_loaders import GeoDataFrameLoaderloader = GeoDataFrameLoader(data_frame=gdf, page_content_column="geometry")docs = loader.load()docs[0] Document(page_content='POINT (-122.420084075249 37.7083109744362)', metadata={'pdid': '4133422003074', 'incidntnum': '041334220', 'incident_code': '03074', 'category': 'ROBBERY', 'descript': 'ROBBERY, BODILY FORCE', 'dayofweek': 'Monday', 'date': '2004-11-22T00:00:00.000', 'time': '17:50', 'pddistrict': 'INGLESIDE', 'resolution': 'NONE', 'address': 'GENEVA AV / SANTOS ST', 'x': '-122.420084075249', 'y': '37.7083109744362', 'location': {'type': 'Point', 'coordinates': [-122.420084075249, 37.7083109744362]}, ':@computed_region_26cr_cadq': '9', ':@computed_region_rxqg_mtj9': '8', ':@computed_region_bh8s_q3mv': '309', ':@computed_region_6qbp_sg9q': nan, ':@computed_region_qgnn_b9vv': nan, ':@computed_region_ajp5_b2md': nan, ':@computed_region_yftq_j783': nan, ':@computed_region_p5aj_wyqh': nan, ':@computed_region_fyvs_ahh9': nan, ':@computed_region_6pnf_4xz7': nan, ':@computed_region_jwn9_ihcz': nan, ':@computed_region_9dfj_4gjx': nan, ':@computed_region_4isq_27mq': nan, ':@computed_region_pigm_ib2e': nan, ':@computed_region_9jxd_iqea': nan, ':@computed_region_6ezc_tdp2': nan, ':@computed_region_h4ep_8xdi': nan, ':@computed_region_n4xg_c4py': nan, ':@computed_region_fcz8_est8': nan, ':@computed_region_nqbw_i6c3': nan, ':@computed_region_2dwj_jsy4': nan, 'Latitude': 37.7083109744362, 'Longitude': -122.420084075249})PreviousFigmaNextGitCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Geopandas is an open-source project to make working with geospatial data in python easier.
Geopandas is an open-source project to make working with geospatial data in python easier. ->: are placed in metadata.But, we can specify the page_content_column.from langchain.document_loaders import GeoDataFrameLoaderloader = GeoDataFrameLoader(data_frame=gdf, page_content_column="geometry")docs = loader.load()docs[0] Document(page_content='POINT (-122.420084075249 37.7083109744362)', metadata={'pdid': '4133422003074', 'incidntnum': '041334220', 'incident_code': '03074', 'category': 'ROBBERY', 'descript': 'ROBBERY, BODILY FORCE', 'dayofweek': 'Monday', 'date': '2004-11-22T00:00:00.000', 'time': '17:50', 'pddistrict': 'INGLESIDE', 'resolution': 'NONE', 'address': 'GENEVA AV / SANTOS ST', 'x': '-122.420084075249', 'y': '37.7083109744362', 'location': {'type': 'Point', 'coordinates': [-122.420084075249, 37.7083109744362]}, ':@computed_region_26cr_cadq': '9', ':@computed_region_rxqg_mtj9': '8', ':@computed_region_bh8s_q3mv': '309', ':@computed_region_6qbp_sg9q': nan, ':@computed_region_qgnn_b9vv': nan, ':@computed_region_ajp5_b2md': nan, ':@computed_region_yftq_j783': nan, ':@computed_region_p5aj_wyqh': nan, ':@computed_region_fyvs_ahh9': nan, ':@computed_region_6pnf_4xz7': nan, ':@computed_region_jwn9_ihcz': nan, ':@computed_region_9dfj_4gjx': nan, ':@computed_region_4isq_27mq': nan, ':@computed_region_pigm_ib2e': nan, ':@computed_region_9jxd_iqea': nan, ':@computed_region_6ezc_tdp2': nan, ':@computed_region_h4ep_8xdi': nan, ':@computed_region_n4xg_c4py': nan, ':@computed_region_fcz8_est8': nan, ':@computed_region_nqbw_i6c3': nan, ':@computed_region_2dwj_jsy4': nan, 'Latitude': 37.7083109744362, 'Longitude': -122.420084075249})PreviousFigmaNextGitCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
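Once the GeoDataFrame has been loaded into Documents as above, downstream steps can operate on the Document objects alone. A minimal sketch, continuing from the docs list produced by GeoDataFrameLoader and assuming only the metadata keys visible in the example output (category, Latitude, Longitude); the chosen category value is illustrative.

# Filter the loaded Documents by the crime category stored in metadata,
# then collect coordinates for further processing or plotting.
robbery_docs = [d for d in docs if d.metadata.get("category") == "ROBBERY"]
points = [(d.metadata["Latitude"], d.metadata["Longitude"]) for d in robbery_docs]
print(len(robbery_docs), points[:3])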
277
AWS S3 File | 🦜️🔗 Langchain
Amazon Simple Storage Service (Amazon S3) is an object storage service.
Amazon Simple Storage Service (Amazon S3) is an object storage service. ->: AWS S3 File | 🦜️🔗 Langchain
278
AWS S3 FileAmazon Simple Storage Service (Amazon S3) is an object
Amazon Simple Storage Service (Amazon S3) is an object storage service.
Amazon Simple Storage Service (Amazon S3) is an object storage service. ->: AWS S3 FileAmazon Simple Storage Service (Amazon S3) is an object
279
Simple Storage Service (Amazon S3) is an object storage service.AWS S3 BucketsThis covers how to load document objects from an AWS S3 File object.from langchain.document_loaders import S3FileLoader#!pip install boto3loader = S3FileLoader("testing-hwc", "fake.docx")loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 's3://testing-hwc/fake.docx'}, lookup_index=0)]Configuring the AWS Boto3 clientYou can configure the AWS Boto3 client by passing
Amazon Simple Storage Service (Amazon S3) is an object storage service.
Amazon Simple Storage Service (Amazon S3) is an object storage service. ->: Simple Storage Service (Amazon S3) is an object storage service.AWS S3 BucketsThis covers how to load document objects from an AWS S3 File object.from langchain.document_loaders import S3FileLoader#!pip install boto3loader = S3FileLoader("testing-hwc", "fake.docx")loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 's3://testing-hwc/fake.docx'}, lookup_index=0)]Configuring the AWS Boto3 clientYou can configure the AWS Boto3 client by passing
280
named arguments when creating the S3FileLoader. This is useful for instance when AWS credentials can't be set as environment variables. See the list of parameters that can be configured.loader = S3FileLoader("testing-hwc", "fake.docx", aws_access_key_id="xxxx", aws_secret_access_key="yyyy")loader.load()PreviousAWS S3 DirectoryNextAZLyricsConfiguring the AWS Boto3 clientCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Amazon Simple Storage Service (Amazon S3) is an object storage service.
Amazon Simple Storage Service (Amazon S3) is an object storage service. ->: named arguments when creating the S3FileLoader. This is useful for instance when AWS credentials can't be set as environment variables. See the list of parameters that can be configured.loader = S3FileLoader("testing-hwc", "fake.docx", aws_access_key_id="xxxx", aws_secret_access_key="yyyy")loader.load()PreviousAWS S3 DirectoryNextAZLyricsConfiguring the AWS Boto3 clientCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
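A sketch of loading several objects from the same bucket by creating one S3FileLoader per key, reusing only the constructor arguments shown on this page; the bucket name, key list, and credential strings are placeholders.

from langchain.document_loaders import S3FileLoader

bucket = "testing-hwc"                # placeholder bucket name
keys = ["fake.docx", "report.docx"]   # placeholder object keys

docs = []
for key in keys:
    loader = S3FileLoader(
        bucket,
        key,
        aws_access_key_id="xxxx",      # optional explicit credentials,
        aws_secret_access_key="yyyy",  # as in the example above
    )
    docs.extend(loader.load())
print(len(docs))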
281
College Confidential | 🦜️🔗 Langchain
College Confidential gives information on 3,800+ colleges and universities.
College Confidential gives information on 3,800+ colleges and universities. ->: College Confidential | 🦜️🔗 Langchain
282
College ConfidentialCollege Confidential gives information on 3,800+
College Confidential gives information on 3,800+ colleges and universities.
College Confidential gives information on 3,800+ colleges and universities. ->: College ConfidentialCollege Confidential gives information on 3,800+
283
Confidential gives information on 3,800+ colleges and universities.This covers how to load College Confidential webpages into a document format that we can use downstream.from langchain.document_loaders import CollegeConfidentialLoaderloader = CollegeConfidentialLoader( "https://www.collegeconfidential.com/colleges/brown-university/")data = loader.load()data [Document(page_content='\n\n\n\n\n\n\n\nA68FEB02-9D19-447C-B8BC-818149FD6EAF\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Media (2)\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAbout Brown\n\n\n\n\n\n\nBrown University Overview\nBrown University is a private, nonprofit school in the urban setting of Providence, Rhode Island. Brown was founded in 1764 and the school currently enrolls around 10,696 students a year, including 7,349 undergraduates. Brown provides on-campus housing for students. Most students live in off campus housing.\n📆 Mark your calendar! January 5, 2023 is the final deadline to submit an application for the Fall 2023 semester. \nThere are many ways for students to get involved at Brown! \nLove music or performing? Join a campus band, sing in a chorus, or perform with one of the school\'s theater groups.\nInterested in journalism or communications? Brown students can write for the campus newspaper, host a radio show or be a producer for the student-run television channel.\nInterested in joining a fraternity or sorority? Brown has fraternities and sororities.\nPlanning to play sports? Brown has many options for athletes. See them all and learn more about life at Brown on the Student Life page.\n\n\n\n2022 Brown Facts At-A-Glance\n\n\n\n\n\nAcademic Calendar\nOther\n\n\nOverall Acceptance Rate\n6%\n\n\nEarly Decision Acceptance Rate\n16%\n\n\nEarly Action Acceptance Rate\nEA not offered\n\n\nApplicants Submitting SAT
College Confidential gives information on 3,800+ colleges and universities.
College Confidential gives information on 3,800+ colleges and universities. ->: Confidential gives information on 3,800+ colleges and universities.This covers how to load College Confidential webpages into a document format that we can use downstream.from langchain.document_loaders import CollegeConfidentialLoaderloader = CollegeConfidentialLoader( "https://www.collegeconfidential.com/colleges/brown-university/")data = loader.load()data [Document(page_content='\n\n\n\n\n\n\n\nA68FEB02-9D19-447C-B8BC-818149FD6EAF\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Media (2)\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAbout Brown\n\n\n\n\n\n\nBrown University Overview\nBrown University is a private, nonprofit school in the urban setting of Providence, Rhode Island. Brown was founded in 1764 and the school currently enrolls around 10,696 students a year, including 7,349 undergraduates. Brown provides on-campus housing for students. Most students live in off campus housing.\n📆 Mark your calendar! January 5, 2023 is the final deadline to submit an application for the Fall 2023 semester. \nThere are many ways for students to get involved at Brown! \nLove music or performing? Join a campus band, sing in a chorus, or perform with one of the school\'s theater groups.\nInterested in journalism or communications? Brown students can write for the campus newspaper, host a radio show or be a producer for the student-run television channel.\nInterested in joining a fraternity or sorority? Brown has fraternities and sororities.\nPlanning to play sports? Brown has many options for athletes. See them all and learn more about life at Brown on the Student Life page.\n\n\n\n2022 Brown Facts At-A-Glance\n\n\n\n\n\nAcademic Calendar\nOther\n\n\nOverall Acceptance Rate\n6%\n\n\nEarly Decision Acceptance Rate\n16%\n\n\nEarly Action Acceptance Rate\nEA not offered\n\n\nApplicants Submitting SAT
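A brief sketch of loading more than one college profile with the same loader. Only the Brown URL comes from this page; the commented-out entry is a hypothetical example of the site's URL pattern and would need to be verified.

from langchain.document_loaders import CollegeConfidentialLoader

urls = [
    "https://www.collegeconfidential.com/colleges/brown-university/",
    # "https://www.collegeconfidential.com/colleges/<another-college>/",  # hypothetical second page
]

docs = []
for url in urls:
    docs.extend(CollegeConfidentialLoader(url).load())
print(len(docs))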
284
not offered\n\n\nApplicants Submitting SAT scores\n51%\n\n\nTuition\n$62,680\n\n\nPercent of Need Met\n100%\n\n\nAverage First-Year Financial Aid Package\n$59,749\n\n\n\n\nIs Brown a Good School?\n\nDifferent people have different ideas about what makes a "good" school. Some factors that can help you determine what a good school for you might be include admissions criteria, acceptance rate, tuition costs, and more.\nLet\'s take a look at these factors to get a clearer sense of what Brown offers and if it could be the right college for you.\nBrown Acceptance Rate 2022\nIt is extremely difficult to get into Brown. Around 6% of applicants get into Brown each year. In 2022, just 2,568 out of the 46,568 students who applied were accepted.\nRetention and Graduation Rates at Brown\nRetention refers to the number of students that stay enrolled at a school over time. This is a way to get a sense of how satisfied students are with their school experience, and if they have the support necessary to succeed in college. \nApproximately 98% of first-year, full-time undergrads who start at Browncome back their sophomore year. 95% of Brown undergrads graduate within six years. The average six-year graduation rate for U.S. colleges and universities is 61% for public schools, and 67% for private, non-profit schools.\nJob Outcomes for Brown Grads\nJob placement stats are a good resource for understanding the value of a degree from Brown by providing a look on how job placement has gone for other grads. \nCheck with Brown directly, for information on any information on starting salaries for recent grads.\nBrown\'s Endowment\nAn endowment is the total value of a school\'s investments, donations, and assets. Endowment is not necessarily an indicator of the quality of a school, but it can give you a sense of how much money a college can afford to invest in expanding programs, improving facilities, and support students. \nAs of 2022, the total market value of Brown University\'s endowment
College Confidential gives information on 3,800+ colleges and universities.
College Confidential gives information on 3,800+ colleges and universities. ->: not offered\n\n\nApplicants Submitting SAT scores\n51%\n\n\nTuition\n$62,680\n\n\nPercent of Need Met\n100%\n\n\nAverage First-Year Financial Aid Package\n$59,749\n\n\n\n\nIs Brown a Good School?\n\nDifferent people have different ideas about what makes a "good" school. Some factors that can help you determine what a good school for you might be include admissions criteria, acceptance rate, tuition costs, and more.\nLet\'s take a look at these factors to get a clearer sense of what Brown offers and if it could be the right college for you.\nBrown Acceptance Rate 2022\nIt is extremely difficult to get into Brown. Around 6% of applicants get into Brown each year. In 2022, just 2,568 out of the 46,568 students who applied were accepted.\nRetention and Graduation Rates at Brown\nRetention refers to the number of students that stay enrolled at a school over time. This is a way to get a sense of how satisfied students are with their school experience, and if they have the support necessary to succeed in college. \nApproximately 98% of first-year, full-time undergrads who start at Browncome back their sophomore year. 95% of Brown undergrads graduate within six years. The average six-year graduation rate for U.S. colleges and universities is 61% for public schools, and 67% for private, non-profit schools.\nJob Outcomes for Brown Grads\nJob placement stats are a good resource for understanding the value of a degree from Brown by providing a look on how job placement has gone for other grads. \nCheck with Brown directly, for information on any information on starting salaries for recent grads.\nBrown\'s Endowment\nAn endowment is the total value of a school\'s investments, donations, and assets. Endowment is not necessarily an indicator of the quality of a school, but it can give you a sense of how much money a college can afford to invest in expanding programs, improving facilities, and support students. \nAs of 2022, the total market value of Brown University\'s endowment
285
market value of Brown University\'s endowment was $4.7 billion. The average college endowment was $905 million in 2021. The school spends $34,086 for each full-time student enrolled. \nTuition and Financial Aid at Brown\nTuition is another important factor when choose a college. Some colleges may have high tuition, but do a better job at meeting students\' financial need.\nBrown meets 100% of the demonstrated financial need for undergraduates. The average financial aid package for a full-time, first-year student is around $59,749 a year. \nThe average student debt for graduates in the class of 2022 was around $24,102 per student, not including those with no debt. For context, compare this number with the average national debt, which is around $36,000 per borrower. \nThe 2023-2024 FAFSA Opened on October 1st, 2022\nSome financial aid is awarded on a first-come, first-served basis, so fill out the FAFSA as soon as you can. Visit the FAFSA website to apply for student aid. Remember, the first F in FAFSA stands for FREE! You should never have to pay to submit the Free Application for Federal Student Aid (FAFSA), so be very wary of anyone asking you for money.\nLearn more about Tuition and Financial Aid at Brown.\nBased on this information, does Brown seem like a good fit? Remember, a school that is perfect for one person may be a terrible fit for someone else! So ask yourself: Is Brown a good school for you?\nIf Brown University seems like a school you want to apply to, click the heart button to save it to your college list.\n\nStill Exploring Schools?\nChoose one of the options below to learn more about Brown:\nAdmissions\nStudent Life\nAcademics\nTuition & Aid\nBrown Community Forums\nThen use the college admissions predictor to take a data science look at your chances of getting into some of the best colleges and universities in the U.S.\nWhere is Brown?\nBrown is located in the urban setting of Providence, Rhode Island, less than an hour from Boston. \nIf you
College Confidential gives information on 3,800+ colleges and universities.
College Confidential gives information on 3,800+ colleges and universities. ->: market value of Brown University\'s endowment was $4.7 billion. The average college endowment was $905 million in 2021. The school spends $34,086 for each full-time student enrolled. \nTuition and Financial Aid at Brown\nTuition is another important factor when choose a college. Some colleges may have high tuition, but do a better job at meeting students\' financial need.\nBrown meets 100% of the demonstrated financial need for undergraduates. The average financial aid package for a full-time, first-year student is around $59,749 a year. \nThe average student debt for graduates in the class of 2022 was around $24,102 per student, not including those with no debt. For context, compare this number with the average national debt, which is around $36,000 per borrower. \nThe 2023-2024 FAFSA Opened on October 1st, 2022\nSome financial aid is awarded on a first-come, first-served basis, so fill out the FAFSA as soon as you can. Visit the FAFSA website to apply for student aid. Remember, the first F in FAFSA stands for FREE! You should never have to pay to submit the Free Application for Federal Student Aid (FAFSA), so be very wary of anyone asking you for money.\nLearn more about Tuition and Financial Aid at Brown.\nBased on this information, does Brown seem like a good fit? Remember, a school that is perfect for one person may be a terrible fit for someone else! So ask yourself: Is Brown a good school for you?\nIf Brown University seems like a school you want to apply to, click the heart button to save it to your college list.\n\nStill Exploring Schools?\nChoose one of the options below to learn more about Brown:\nAdmissions\nStudent Life\nAcademics\nTuition & Aid\nBrown Community Forums\nThen use the college admissions predictor to take a data science look at your chances of getting into some of the best colleges and universities in the U.S.\nWhere is Brown?\nBrown is located in the urban setting of Providence, Rhode Island, less than an hour from Boston. \nIf you
286
Island, less than an hour from Boston. \nIf you would like to see Brown for yourself, plan a visit. The best way to reach campus is to take Interstate 95 to Providence, or book a flight to the nearest airport, T.F. Green.\nYou can also take a virtual campus tour to get a sense of what Brown and Providence are like without leaving home.\nConsidering Going to School in Rhode Island?\nSee a full list of colleges in Rhode Island and save your favorites to your college list.\n\n\n\nCollege Info\n\n\n\n\n\n\n\n\n\n Providence, RI 02912\n \n\n\n\n Campus Setting: Urban\n \n\n\n\n\n\n\n\n (401) 863-2378\n \n\n Website\n \n\n Virtual Tour\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nBrown Application Deadline\n\n\n\nFirst-Year Applications are Due\n\nJan 5\n\nTransfer Applications are Due\n\nMar 1\n\n\n\n \n The deadline for Fall first-year applications to Brown is \n Jan 5. \n \n \n \n\n \n The deadline for Fall transfer applications to Brown is \n Mar 1. \n \n \n \n\n \n Check the school website \n for more information about deadlines for specific programs or special admissions programs\n \n \n\n\n\n\n\n\nBrown ACT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nACT Range\n\n\n \n 33 - 35\n \n \n\n\n\nEstimated Chance of Acceptance by ACT Score\n\n\nACT Score\nEstimated Chance\n\n\n35 and Above\nGood\n\n\n33 to 35\nAvg\n\n\n33 and Less\nLow\n\n\n\n\n\n\nStand out on your college application\n\n• Qualify for scholarships\n• Most students who retest improve their score\n\nSponsored by ACT\n\n\n Take the
College Confidential gives information on 3,800+ colleges and universities.
College Confidential gives information on 3,800+ colleges and universities. ->: Island, less than an hour from Boston. \nIf you would like to see Brown for yourself, plan a visit. The best way to reach campus is to take Interstate 95 to Providence, or book a flight to the nearest airport, T.F. Green.\nYou can also take a virtual campus tour to get a sense of what Brown and Providence are like without leaving home.\nConsidering Going to School in Rhode Island?\nSee a full list of colleges in Rhode Island and save your favorites to your college list.\n\n\n\nCollege Info\n\n\n\n\n\n\n\n\n\n Providence, RI 02912\n \n\n\n\n Campus Setting: Urban\n \n\n\n\n\n\n\n\n (401) 863-2378\n \n\n Website\n \n\n Virtual Tour\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nBrown Application Deadline\n\n\n\nFirst-Year Applications are Due\n\nJan 5\n\nTransfer Applications are Due\n\nMar 1\n\n\n\n \n The deadline for Fall first-year applications to Brown is \n Jan 5. \n \n \n \n\n \n The deadline for Fall transfer applications to Brown is \n Mar 1. \n \n \n \n\n \n Check the school website \n for more information about deadlines for specific programs or special admissions programs\n \n \n\n\n\n\n\n\nBrown ACT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nACT Range\n\n\n \n 33 - 35\n \n \n\n\n\nEstimated Chance of Acceptance by ACT Score\n\n\nACT Score\nEstimated Chance\n\n\n35 and Above\nGood\n\n\n33 to 35\nAvg\n\n\n33 and Less\nLow\n\n\n\n\n\n\nStand out on your college application\n\n• Qualify for scholarships\n• Most students who retest improve their score\n\nSponsored by ACT\n\n\n Take the
287
by ACT\n\n\n Take the Next ACT Test\n \n\n\n\n\n\nBrown SAT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nComposite SAT Range\n\n\n \n 720 - 770\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nMath SAT Range\n\n\n \n Not available\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nReading SAT Range\n\n\n \n 740 - 800\n \n \n\n\n\n\n\n\n Brown Tuition & Fees\n \n\n\n\nTuition & Fees\n\n\n\n $82,286\n \nIn State\n\n\n\n\n $82,286\n \nOut-of-State\n\n\n\n\n\n\n\nCost Breakdown\n\n\nIn State\n\n\nOut-of-State\n\n\n\n\nState Tuition\n\n\n\n $62,680\n \n\n\n\n $62,680\n \n\n\n\n\nFees\n\n\n\n $2,466\n \n\n\n\n $2,466\n \n\n\n\n\nHousing\n\n\n\n $15,840\n \n\n\n\n $15,840\n \n\n\n\n\nBooks\n\n\n\n $1,300\n \n\n\n\n $1,300\n \n\n\n\n\n\n Total (Before Financial Aid):\n \n\n\n\n $82,286\n \n\n\n\n $82,286\n \n\n\n\n\n\n\n\n\n\n\n\nStudent Life\n\n Wondering what life at Brown is like? There are approximately \n 10,696 students enrolled at \n Brown, \n including 7,349 undergraduate students and \n 3,347 graduate students.\n 96% percent of students attend school \n
College Confidential gives information on 3,800+ colleges and universities.
College Confidential gives information on 3,800+ colleges and universities. ->: by ACT\n\n\n Take the Next ACT Test\n \n\n\n\n\n\nBrown SAT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nComposite SAT Range\n\n\n \n 720 - 770\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nMath SAT Range\n\n\n \n Not available\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nReading SAT Range\n\n\n \n 740 - 800\n \n \n\n\n\n\n\n\n Brown Tuition & Fees\n \n\n\n\nTuition & Fees\n\n\n\n $82,286\n \nIn State\n\n\n\n\n $82,286\n \nOut-of-State\n\n\n\n\n\n\n\nCost Breakdown\n\n\nIn State\n\n\nOut-of-State\n\n\n\n\nState Tuition\n\n\n\n $62,680\n \n\n\n\n $62,680\n \n\n\n\n\nFees\n\n\n\n $2,466\n \n\n\n\n $2,466\n \n\n\n\n\nHousing\n\n\n\n $15,840\n \n\n\n\n $15,840\n \n\n\n\n\nBooks\n\n\n\n $1,300\n \n\n\n\n $1,300\n \n\n\n\n\n\n Total (Before Financial Aid):\n \n\n\n\n $82,286\n \n\n\n\n $82,286\n \n\n\n\n\n\n\n\n\n\n\n\nStudent Life\n\n Wondering what life at Brown is like? There are approximately \n 10,696 students enrolled at \n Brown, \n including 7,349 undergraduate students and \n 3,347 graduate students.\n 96% percent of students attend school \n
288
96% percent of students attend school \n full-time, \n 6% percent are from RI and \n 94% percent of students are from other states.\n \n\n\n\n\n\n None\n \n\n\n\n\nUndergraduate Enrollment\n\n\n\n 96%\n \nFull Time\n\n\n\n\n 4%\n \nPart Time\n\n\n\n\n\n\n\n 94%\n \n\n\n\n\nResidency\n\n\n\n 6%\n \nIn State\n\n\n\n\n 94%\n \nOut-of-State\n\n\n\n\n\n\n\n Data Source: IPEDs and Peterson\'s Databases © 2022 Peterson\'s LLC All rights reserved\n \n', lookup_str='', metadata={'source': 'https://www.collegeconfidential.com/colleges/brown-university/'}, lookup_index=0)]PreviousChatGPT DataNextConcurrent LoaderCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
College Confidential gives information on 3,800+ colleges and universities.
College Confidential gives information on 3,800+ colleges and universities. ->: 96% percent of students attend school \n full-time, \n 6% percent are from RI and \n 94% percent of students are from other states.\n \n\n\n\n\n\n None\n \n\n\n\n\nUndergraduate Enrollment\n\n\n\n 96%\n \nFull Time\n\n\n\n\n 4%\n \nPart Time\n\n\n\n\n\n\n\n 94%\n \n\n\n\n\nResidency\n\n\n\n 6%\n \nIn State\n\n\n\n\n 94%\n \nOut-of-State\n\n\n\n\n\n\n\n Data Source: IPEDs and Peterson\'s Databases © 2022 Peterson\'s LLC All rights reserved\n \n', lookup_str='', metadata={'source': 'https://www.collegeconfidential.com/colleges/brown-university/'}, lookup_index=0)]PreviousChatGPT DataNextConcurrent LoaderCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
289
Iugu | 🦜️🔗 Langchain
Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.
Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications. ->: Iugu | 🦜️🔗 Langchain
290
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersIuguIuguIugu is a Brazilian services and software as a service (SaaS) company. It offers
Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.
Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersIuguIuguIugu is a Brazilian services and software as a service (SaaS) company. It offers
291
software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.This notebook covers how to load data from the Iugu REST API into a format that can be ingested into LangChain, along with example usage for vectorization.import osfrom langchain.document_loaders import IuguLoaderfrom langchain.indexes import VectorstoreIndexCreatorThe Iugu API requires an access token, which can be found inside of the Iugu dashboard.This document loader also requires a resource option which defines what data you want to load.Following resources are available:Documentation Documentationiugu_loader = IuguLoader("charges")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([iugu_loader])iugu_doc_retriever = index.vectorstore.as_retriever()PreviousIMSDbNextJoplinCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.
Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications. ->: software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.This notebook covers how to load data from the Iugu REST API into a format that can be ingested into LangChain, along with example usage for vectorization.import osfrom langchain.document_loaders import IuguLoaderfrom langchain.indexes import VectorstoreIndexCreatorThe Iugu API requires an access token, which can be found inside of the Iugu dashboard.This document loader also requires a resource option which defines what data you want to load.Following resources are available:Documentation Documentationiugu_loader = IuguLoader("charges")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([iugu_loader])iugu_doc_retriever = index.vectorstore.as_retriever()PreviousIMSDbNextJoplinCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
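The Iugu notebook above stops at building a retriever; as a rough illustration of where that retriever might go next, the sketch below wires it into a RetrievalQA chain. Everything beyond the loader and index lines is an assumption for illustration: the OpenAI LLM, the IUGU_API_TOKEN / OPENAI_API_KEY environment variable names, and the example question are not taken from the original page, so check the loader's signature for the exact way it expects credentials.

import os
from langchain.chains import RetrievalQA
from langchain.document_loaders import IuguLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.llms import OpenAI

# Placeholder credentials -- the exact environment variable the loader reads is an
# assumption here; consult the IuguLoader signature if it differs.
os.environ["IUGU_API_TOKEN"] = "<your-iugu-api-token>"
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"

iugu_loader = IuguLoader("charges")
index = VectorstoreIndexCreator().from_loaders([iugu_loader])
iugu_doc_retriever = index.vectorstore.as_retriever()

# Wire the retriever into a question-answering chain over the loaded charges.
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=iugu_doc_retriever,
)
print(qa.run("Which charges are still unpaid?"))  # example question, not from the original page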
292
Copy Paste | 🦜️🔗 Langchain
This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly.
This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly. ->: Copy Paste | 🦜️🔗 Langchain
293
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersCopy PasteOn this pageCopy PasteThis notebook covers how to load a document object from
This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly.
This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersCopy PasteOn this pageCopy PasteThis notebook covers how to load a document object from
294
covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly.from langchain.docstore.document import Documenttext = "..... put the text you copy pasted here......"doc = Document(page_content=text)MetadataIf you want to add metadata about where you got this piece of text, you can easily do so with the metadata key.metadata = {"source": "internet", "date": "Friday"}doc = Document(page_content=text, metadata=metadata)PreviousCoNLL-UNextCSVMetadataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly.
This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly. ->: covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly.from langchain.docstore.document import Documenttext = "..... put the text you copy pasted here......"doc = Document(page_content=text)MetadataIf you want to add metadata about where you got this piece of text, you can easily do so with the metadata key.metadata = {"source": "internet", "date": "Friday"}doc = Document(page_content=text, metadata=metadata)PreviousCoNLL-UNextCSVMetadataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
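Because hand-built Document objects behave exactly like loader output, a small sketch follows showing how they might be used downstream. The sample texts, the metadata keys, and the choice of RecursiveCharacterTextSplitter are illustrative assumptions rather than part of the original notebook.

from langchain.docstore.document import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Two pasted snippets standing in for whatever text you copied; both strings are
# illustrative placeholders.
pasted_texts = [
    "First snippet copied from a web page ...",
    "Second snippet copied from an email ...",
]

# Hand-built Documents carry metadata exactly like loader output does.
docs = [
    Document(page_content=text, metadata={"source": "clipboard", "index": i})
    for i, text in enumerate(pasted_texts)
]

# They can then be chunked with any text splitter before indexing or retrieval.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)
print(len(chunks), chunks[0].metadata)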
295
CoNLL-U | 🦜️🔗 Langchain
CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:
CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines: ->: CoNLL-U | 🦜️🔗 Langchain
296
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersCoNLL-UCoNLL-UCoNLL-U is revised version of the CoNLL-X format. Annotations are encoded in
CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:
CoNLL-U is revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines: ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersCoNLL-UCoNLL-UCoNLL-U is revised version of the CoNLL-X format. Annotations are encoded in
297
of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:Word lines containing the annotation of a word/token in 10 fields separated by single tab characters; see below.Blank lines marking sentence boundaries.Comment lines starting with hash (#).This is an example of how to load a file in CoNLL-U format. The whole file is treated as one document. The example data (conllu.conllu) is based on one of the standard UD/CoNLL-U examples.from langchain.document_loaders import CoNLLULoaderloader = CoNLLULoader("example_data/conllu.conllu")document = loader.load()document [Document(page_content='They buy and sell books.', metadata={'source': 'example_data/conllu.conllu'})]PreviousConfluenceNextCopy PasteCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:
CoNLL-U is revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines: ->: of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:Word lines containing the annotation of a word/token in 10 fields separated by single tab characters; see below.Blank lines marking sentence boundaries.Comment lines starting with hash (#).This is an example of how to load a file in CoNLL-U format. The whole file is treated as one document. The example data (conllu.conllu) is based on one of the standard UD/CoNLL-U examples.from langchain.document_loaders import CoNLLULoaderloader = CoNLLULoader("example_data/conllu.conllu")document = loader.load()document [Document(page_content='They buy and sell books.', metadata={'source': 'example_data/conllu.conllu'})]PreviousConfluenceNextCopy PasteCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
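To make the CoNLL-U example slightly more concrete, a short sketch follows that loads the same example_data/conllu.conllu file used in the notebook and inspects the resulting Document; the extra "format" metadata key added at the end is purely an illustrative assumption, and the sketch assumes the example file is present locally.

from langchain.document_loaders import CoNLLULoader

loader = CoNLLULoader("example_data/conllu.conllu")
docs = loader.load()

for doc in docs:
    # The whole CoNLL-U file is returned as a single Document whose text is the
    # reconstructed sentence(s).
    print(doc.page_content)
    doc.metadata["format"] = "conllu"  # illustrative extra metadata field
    print(doc.metadata)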
298
Slack | 🦜️🔗 Langchain
Slack is an instant messaging program.
Slack is an instant messaging program. ->: Slack | 🦜️🔗 Langchain
299
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersSlackOn this pageSlackSlack is an instant messaging program.This notebook covers how to load
Slack is an instant messaging program.
Slack is an instant messaging program. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersSlackOn this pageSlackSlack is an instant messaging program.This notebook covers how to load