Hugging Face Hub

Let's load the Hugging Face embeddings class.

from langchain.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings()

text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
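A quick way to sanity-check the results is to compare the query embedding against the document embedding with cosine similarity, mirroring the check used on the MosaicML page below. A minimal sketch with numpy, reusing the results from above:

import numpy as np

query_vec = np.array(query_result)
doc_vec = np.array(doc_result[0])

# cosine similarity; since embed_query and embed_documents use the same model,
# embedding the same string through both paths should score at or near 1.0
similarity = np.dot(query_vec, doc_vec) / (np.linalg.norm(query_vec) * np.linalg.norm(doc_vec))
print(f"Cosine similarity: {similarity}")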
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/huggingfacehub.html
SageMaker Endpoint Embeddings

Let's load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g., your own Hugging Face model on SageMaker. For instructions on how to do this, please see here.

Note: in order to handle batched requests, you will need to adjust the return line in the predict_fn() function within the custom inference.py script.

Change from

return {"vectors": sentence_embeddings[0].tolist()}

to:

return {"vectors": sentence_embeddings.tolist()}

!pip3 install langchain boto3

import json
from typing import Dict, List

from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.llms.sagemaker_endpoint import ContentHandlerBase


class ContentHandler(ContentHandlerBase):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs: List[str], model_kwargs: Dict) -> bytes:
        input_str = json.dumps({"inputs": inputs, **model_kwargs})
        return input_str.encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["vectors"]


content_handler = ContentHandler()

embeddings = SagemakerEndpointEmbeddings(
    # endpoint_name="endpoint-name",
    # credentials_profile_name="credentials-profile-name",
    endpoint_name="huggingface-pytorch-inference-2023-03-21-16-14-03-834",
    region_name="us-east-1",
    content_handler=content_handler,
)

query_result = embeddings.embed_query("foo")
doc_results = embeddings.embed_documents(["foo"])
doc_results
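With the predict_fn change above in place, the endpoint returns one vector per input, so a single call can embed a whole batch. A minimal sketch reusing the embeddings object defined above; the texts are placeholders:

# embed several documents in one request; the adjusted predict_fn
# returns one embedding vector per input text
texts = ["first document", "second document", "third document"]
batch_results = embeddings.embed_documents(texts)
print(len(batch_results), len(batch_results[0]))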
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/sagemaker-endpoint.html
Elasticsearch

Walkthrough of how to generate embeddings using a hosted embedding model in Elasticsearch.

The easiest way to instantiate the ElasticsearchEmbeddings class is either by using the from_credentials constructor, if you are using Elastic Cloud, or by using the from_es_connection constructor with any Elasticsearch cluster.

!pip -q install elasticsearch langchain

from elasticsearch import Elasticsearch

from langchain.embeddings.elasticsearch import ElasticsearchEmbeddings

# Define the model ID
model_id = 'your_model_id'

Testing with from_credentials

This requires an Elastic Cloud cloud_id.

# Instantiate ElasticsearchEmbeddings using credentials
embeddings = ElasticsearchEmbeddings.from_credentials(
    model_id,
    es_cloud_id='your_cloud_id',
    es_user='your_user',
    es_password='your_password'
)

# Create embeddings for multiple documents
documents = [
    'This is an example document.',
    'Another example document to generate embeddings for.'
]
document_embeddings = embeddings.embed_documents(documents)

# Print document embeddings
for i, embedding in enumerate(document_embeddings):
    print(f"Embedding for document {i+1}: {embedding}")

# Create an embedding for a single query
query = 'This is a single query.'
query_embedding = embeddings.embed_query(query)

# Print query embedding
print(f"Embedding for query: {query_embedding}")

Testing with an existing Elasticsearch client connection

This can be used with any Elasticsearch deployment.

# Create Elasticsearch connection
es_connection = Elasticsearch(
    hosts=['https://es_cluster_url:port'],
    basic_auth=('user', 'password')
)

# Instantiate ElasticsearchEmbeddings using es_connection
embeddings = ElasticsearchEmbeddings.from_es_connection(
    model_id,
    es_connection,
)
# Create embeddings for multiple documents
documents = [
    'This is an example document.',
    'Another example document to generate embeddings for.'
]
document_embeddings = embeddings.embed_documents(documents)

# Print document embeddings
for i, embedding in enumerate(document_embeddings):
    print(f"Embedding for document {i+1}: {embedding}")

# Create an embedding for a single query
query = 'This is a single query.'
query_embedding = embeddings.embed_query(query)

# Print query embedding
print(f"Embedding for query: {query_embedding}")
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/elasticsearch.html
Fake Embeddings

LangChain also provides a fake embedding class. You can use this to test your pipelines.

from langchain.embeddings import FakeEmbeddings

embeddings = FakeEmbeddings(size=1352)

query_result = embeddings.embed_query("foo")
doc_results = embeddings.embed_documents(["foo"])
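Because the vectors are random but correctly shaped, fake embeddings let you exercise a whole retrieval pipeline without any model calls. A minimal sketch, assuming the chromadb package used elsewhere in these docs is installed; the retrieved results are meaningless by design:

from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores import Chroma

embeddings = FakeEmbeddings(size=1352)

# index a few texts with random vectors, then run the search path end to end
db = Chroma.from_texts(["foo", "bar", "baz"], embeddings)
docs = db.similarity_search("foo", k=2)
print(docs)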
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/fake.html
MosaicML embeddings

MosaicML offers a managed inference service. You can either use a variety of open-source models, or deploy your own. This example goes over how to use LangChain to interact with MosaicML Inference for text embedding.

# sign up for an account: https://forms.mosaicml.com/demo?utm_source=langchain
from getpass import getpass

MOSAICML_API_TOKEN = getpass()

import os

os.environ["MOSAICML_API_TOKEN"] = MOSAICML_API_TOKEN

from langchain.embeddings import MosaicMLInstructorEmbeddings

embeddings = MosaicMLInstructorEmbeddings(
    query_instruction="Represent the query for retrieval: "
)

query_text = "This is a test query."
query_result = embeddings.embed_query(query_text)

document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])

import numpy as np

query_numpy = np.array(query_result)
document_numpy = np.array(document_result[0])
similarity = np.dot(query_numpy, document_numpy) / (
    np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy)
)
print(f"Cosine similarity between document and query: {similarity}")
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/mosaicml.html
InstructEmbeddings

Let's load the HuggingFace instruct embeddings class.

from langchain.embeddings import HuggingFaceInstructEmbeddings

embeddings = HuggingFaceInstructEmbeddings(
    query_instruction="Represent the query for retrieval: "
)

load INSTRUCTOR_Transformer
max_seq_length  512

text = "This is a test document."
query_result = embeddings.embed_query(text)
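Instruction-tuned embedders can apply a different instruction to documents than to queries. A minimal sketch of the document side; the embed_instruction parameter is an assumption based on the INSTRUCTOR model's usage, so check the class signature in your version:

from langchain.embeddings import HuggingFaceInstructEmbeddings

# embed_instruction (assumed) mirrors query_instruction but is applied
# to documents rather than queries
embeddings = HuggingFaceInstructEmbeddings(
    embed_instruction="Represent the document for retrieval: ",
    query_instruction="Represent the query for retrieval: ",
)

doc_result = embeddings.embed_documents(["This is a test document."])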
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/instruct_embeddings.html
Jina

Let's load the Jina embedding class.

from getpass import getpass

from langchain.embeddings import JinaEmbeddings

# your Jina auth token
jina_auth_token = getpass()

embeddings = JinaEmbeddings(jina_auth_token=jina_auth_token, model_name="ViT-B-32::openai")

text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])

In the above example, ViT-B-32::openai, OpenAI's pretrained ViT-B-32 model, is used. For a full list of models, see here.
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/jina.html
Retrievers

Note: Conceptual Guide

The retriever interface is a generic interface that makes it easy to combine documents with language models. This interface exposes a get_relevant_documents method, which takes in a query (a string) and returns a list of documents. A minimal custom-retriever sketch follows the list below.

Please see below for a list of all the retrievers supported.

Arxiv
Azure Cognitive Search Retriever
ChatGPT Plugin
Self-querying with Chroma
Cohere Reranker
Contextual Compression
Stringing compressors and document transformers together
Databerry
ElasticSearch BM25
kNN
Metal
Pinecone Hybrid Search
Self-querying with Qdrant
Self-querying
SVM
TF-IDF
Time Weighted VectorStore
VectorStore
Vespa
Weaviate Hybrid Search
Self-querying with Weaviate
Wikipedia
Zep Memory
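As promised above, since get_relevant_documents is the entire contract, a custom retriever only needs to implement that one method. A minimal keyword-match sketch; the class and sample texts are made up, and some LangChain versions also expect the async variant included here:

from typing import List

from langchain.schema import BaseRetriever, Document


class KeywordRetriever(BaseRetriever):
    """Toy retriever that returns every document containing the query string."""

    def __init__(self, texts: List[str]):
        self.texts = texts

    def get_relevant_documents(self, query: str) -> List[Document]:
        return [
            Document(page_content=t)
            for t in self.texts
            if query.lower() in t.lower()
        ]

    # some versions of the interface also require an async variant
    async def aget_relevant_documents(self, query: str) -> List[Document]:
        return self.get_relevant_documents(query)


retriever = KeywordRetriever(["LangChain ships many retrievers", "unrelated text"])
print(retriever.get_relevant_documents("langchain"))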
https://python.langchain.com/en/latest/modules/indexes/retrievers.html
Vectorstores

Note: Conceptual Guide

Vectorstores are one of the most important components of building indexes. For an introduction to vectorstores and generic functionality see: Getting Started.

We also have documentation for all the types of vectorstores that are supported. Please see below for that list.

AnalyticDB
Annoy
Atlas
Chroma
Deep Lake
DocArrayHnswSearch
DocArrayInMemorySearch
ElasticSearch
FAISS
LanceDB
MatchingEngine
Milvus
MongoDB Atlas Vector Search
MyScale
OpenSearch
PGVector
Pinecone
Qdrant
Redis
SKLearnVectorStore
Supabase (Postgres)
Tair
Typesense
Vectara
Weaviate
Persistence
Retriever options
Zilliz
https://python.langchain.com/en/latest/modules/indexes/vectorstores.html
Text Splitters

Note: Conceptual Guide

When you want to deal with long pieces of text, it is necessary to split up that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What "semantically related" means could depend on the type of text. This notebook showcases several ways to do that.

At a high level, text splitters work as follows:

Split the text up into small, semantically meaningful chunks (often sentences).
Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).

That means there are two different axes along which you can customize your text splitter:

How the text is split
How the chunk size is measured

For an introduction to the default text splitter and generic functionality see: Getting Started.

Usage examples for the text splitters:

Character
Code (including HTML, Markdown, Latex, Python, etc.)
NLTK
Recursive Character
spaCy
tiktoken (OpenAI)

Most LLMs are constrained by the number of tokens that you can pass in, which is not the same as the number of characters. In order to get a more accurate estimate, we can use tokenizers to count the number of tokens in the text. We use this number inside the TextSplitter classes. This is implemented as the from_<tokenizer> methods of the TextSplitter classes (a short sketch follows the list below):

Hugging Face tokenizer
tiktoken (OpenAI) tokenizer
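For example, here is a minimal sketch of a character splitter that measures chunk size in OpenAI tokens via the from_tiktoken_encoder constructor; it requires the tiktoken package, and the sample text is made up:

# pip install tiktoken
from langchain.text_splitter import CharacterTextSplitter

# chunk_size is counted in tokens, not characters
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text("Some long document text to split into token-sized chunks.")
print(len(texts))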
https://python.langchain.com/en/latest/modules/indexes/text_splitters.html
Document Loaders

Note: Conceptual Guide

Combining language models with your own text data is a powerful way to differentiate them. The first step in doing this is to load the data into "Documents" - a fancy way of saying some pieces of text. The document loader is aimed at making this easy.

The following document loaders are provided:

Transform loaders

These transform loaders transform data from a specific format into the Document format. For example, there are transformers for CSV and SQL. Mostly, these loaders input data from files but sometimes from URLs. A primary driver of a lot of these transformers is the Unstructured python package. This package transforms many types of files - text, PowerPoint, images, HTML, PDF, etc. - into text data. For detailed instructions on how to get set up with Unstructured, see the installation guidelines here. (A minimal Unstructured loader sketch appears at the end of this page.)

CoNLL-U
Copy Paste
CSV
Email
EPub
EverNote
Facebook Chat
File Directory
HTML
Images
Jupyter Notebook
JSON
Markdown
Microsoft PowerPoint
Microsoft Word
Open Document Format (ODT)
Pandas DataFrame
PDF
Sitemap
Subtitle
Telegram
TOML
Unstructured File
URL
Selenium URL Loader
Playwright URL Loader
WebBaseLoader
Weather
WhatsApp Chat

Public dataset or service loaders

These loaders pull from public-domain datasets and services; we use queries to search them and download the necessary documents. For example, the Hacker News service. We don't need any access permissions for these datasets and services.

Arxiv
AZLyrics
BiliBili
College Confidential
Gutenberg
Hacker News
HuggingFace dataset
iFixit
IMSDb
MediaWikiDump
Wikipedia
YouTube transcripts
Proprietary dataset or service loaders

These datasets and services are not in the public domain. These loaders mostly transform data from specific formats of applications or cloud services, for example Google Drive. We need access tokens and sometimes other parameters to get access to these datasets and services.

Airbyte JSON
Apify Dataset
AWS S3 Directory
AWS S3 File
Azure Blob Storage Container
Azure Blob Storage File
Blackboard
Blockchain
ChatGPT Data
Confluence
Diffbot
Docugami
DuckDB
Figma
GitBook
Git
Google BigQuery
Google Cloud Storage Directory
Google Cloud Storage File
Google Drive
Image captions
Iugu
Joplin
Microsoft OneDrive
Modern Treasury
Notion DB 2/2
Notion DB 1/2
Obsidian
Psychic
PySpark DataFrame Loader
ReadTheDocs Documentation
Reddit
Roam
Slack
Spreedly
Stripe
2Markdown
Twitter
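As promised above, here is a minimal sketch of a transform loader backed by the Unstructured package. The file path is made up for illustration; point it at any local file Unstructured supports:

# pip install unstructured
from langchain.document_loaders import UnstructuredFileLoader

# hypothetical local file; Unstructured partitions text, PDF, HTML, etc.
loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt")
docs = loader.load()
print(docs[0].page_content[:100])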
https://python.langchain.com/en/latest/modules/indexes/document_loaders.html
Getting Started

LangChain primarily focuses on constructing indexes with the goal of using them as a Retriever. In order to best understand what this means, it's worth highlighting what the base Retriever interface is. The BaseRetriever class in LangChain is as follows:

from abc import ABC, abstractmethod
from typing import List

from langchain.schema import Document


class BaseRetriever(ABC):
    @abstractmethod
    def get_relevant_documents(self, query: str) -> List[Document]:
        """Get texts relevant for a query.

        Args:
            query: string to find relevant texts for

        Returns:
            List of relevant documents
        """

It's that simple! The get_relevant_documents method can be implemented however you see fit.

Of course, we also help construct what we think useful Retrievers are. The main type of Retriever that we focus on is a Vectorstore retriever. We will focus on that for the rest of this guide.

In order to understand what a vectorstore retriever is, it's important to understand what a Vectorstore is. So let's look at that.

By default, LangChain uses Chroma as the vectorstore to index and search embeddings. To walk through this tutorial, we'll first need to install chromadb.

pip install chromadb

This example showcases question answering over documents. We have chosen this as the example for getting started because it nicely combines a lot of different elements (text splitters, embeddings, vectorstores) and then also shows how to use them in a chain.

Question answering over documents consists of four steps:

Create an index
Create a Retriever from that index
Create a question answering chain
Ask questions!
Each of the steps has multiple sub-steps and potential configurations. In this notebook we will primarily focus on (1). We will start by showing the one-liner for doing so, but then break down what is actually going on.

First, let's import some common classes we'll use no matter what.

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

Next in the generic setup, let's specify the document loader we want to use. You can download the state_of_the_union.txt file here.

from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt', encoding='utf8')

One Line Index Creation

To get started as quickly as possible, we can use the VectorstoreIndexCreator.

from langchain.indexes import VectorstoreIndexCreator

index = VectorstoreIndexCreator().from_loaders([loader])

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

Now that the index is created, we can use it to ask questions of the data! Note that under the hood this is actually doing a few steps as well, which we will cover later in this guide.

query = "What did the president say about Ketanji Brown Jackson"
index.query(query)

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)
{'question': 'What did the president say about Ketanji Brown Jackson',
 'answer': " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n",
 'sources': '../state_of_the_union.txt'}

What is returned from the VectorstoreIndexCreator is a VectorStoreIndexWrapper, which provides this nice query and query_with_sources functionality. If we just wanted to access the vectorstore directly, we can also do that.

index.vectorstore

<langchain.vectorstores.chroma.Chroma at 0x119aa5940>

If we then want to access the VectorstoreRetriever, we can do that with:

index.vectorstore.as_retriever()

VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x119aa5940>, search_kwargs={})

Walkthrough

Okay, so what's actually going on? How is this index getting created?

A lot of the magic is being hidden in this VectorstoreIndexCreator. What is it doing? There are three main steps going on after the documents are loaded:

Splitting documents into chunks
Creating embeddings for each document
Storing documents and embeddings in a vectorstore

Let's walk through this in code.

documents = loader.load()

Next, we will split the documents into chunks.

from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

We will then select which embeddings we want to use.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()

We now create the vectorstore to use as the index.

from langchain.vectorstores import Chroma
db = Chroma.from_documents(texts, embeddings)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

So that's creating the index. Then, we expose this index in a retriever interface.

retriever = db.as_retriever()

Then, as before, we create a chain and use it to answer questions!

qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)

query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

" The President said that Judge Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He said she is a consensus builder and has received a broad range of support from organizations such as the Fraternal Order of Police and former judges appointed by Democrats and Republicans."

VectorstoreIndexCreator is just a wrapper around all this logic. It is configurable in the text splitter it uses, the embeddings it uses, and the vectorstore it uses. For example, you can configure it as below:

index_creator = VectorstoreIndexCreator(
    vectorstore_cls=Chroma,
    embedding=OpenAIEmbeddings(),
    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
)
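The retriever itself is configurable in the same spirit. A minimal sketch reusing the db vectorstore built above; search_kwargs is forwarded to the underlying similarity search, and k is assumed here to control how many documents come back:

# ask the retriever for only the top 2 matching chunks
retriever = db.as_retriever(search_kwargs={"k": 2})
docs = retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)
print(len(docs))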
Hopefully this highlights what is going on under the hood of VectorstoreIndexCreator. While we think it's important to have a simple way to create indexes, we also think it's important to understand what's going on under the hood.
https://python.langchain.com/en/latest/modules/indexes/getting_started.html
GitBook

GitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.

This notebook shows how to pull page data from any GitBook.

from langchain.document_loaders import GitbookLoader

Load from single GitBook page

loader = GitbookLoader("https://docs.gitbook.com")
page_data = loader.load()
page_data

[Document(page_content='Introduction to GitBook\nGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.\nWe want to help \nteams to work more efficiently\n by creating a simple yet powerful platform for them to \nshare their knowledge\n.\nOur mission is to make a \nuser-friendly\n and \ncollaborative\n product for everyone to create, edit and share knowledge through documentation.\nPublish your documentation in 5 easy steps\nImport\n\nMove your existing content to GitBook with ease.\nGit Sync\n\nBenefit from our bi-directional synchronisation with GitHub and GitLab.\nOrganise your content\n\nCreate pages and spaces and organize them into collections\nCollaborate\n\nInvite other users and collaborate asynchronously with ease.\nPublish your docs\n\nShare your documentation with selected users or with everyone.\nNext\n - Getting started\nOverview\nLast modified \n3mo ago', lookup_str='', metadata={'source': 'https://docs.gitbook.com', 'title': 'Introduction to GitBook'}, lookup_index=0)]

Load from all paths in a given GitBook

For this to work, the GitbookLoader needs to be initialized with the root path (https://docs.gitbook.com in this example) and have load_all_paths set to True.
loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
all_pages_data = loader.load()

Fetching text from https://docs.gitbook.com/
Fetching text from https://docs.gitbook.com/getting-started/overview
Fetching text from https://docs.gitbook.com/getting-started/import
Fetching text from https://docs.gitbook.com/getting-started/git-sync
Fetching text from https://docs.gitbook.com/getting-started/content-structure
Fetching text from https://docs.gitbook.com/getting-started/collaboration
Fetching text from https://docs.gitbook.com/getting-started/publishing
Fetching text from https://docs.gitbook.com/tour/quick-find
Fetching text from https://docs.gitbook.com/tour/editor
Fetching text from https://docs.gitbook.com/tour/customization
Fetching text from https://docs.gitbook.com/tour/member-management
Fetching text from https://docs.gitbook.com/tour/pdf-export
Fetching text from https://docs.gitbook.com/tour/activity-history
Fetching text from https://docs.gitbook.com/tour/insights
Fetching text from https://docs.gitbook.com/tour/notifications
Fetching text from https://docs.gitbook.com/tour/internationalization
Fetching text from https://docs.gitbook.com/tour/keyboard-shortcuts
Fetching text from https://docs.gitbook.com/tour/seo
Fetching text from https://docs.gitbook.com/advanced-guides/custom-domain
Fetching text from https://docs.gitbook.com/advanced-guides/advanced-sharing-and-security
Fetching text from https://docs.gitbook.com/advanced-guides/integrations
Fetching text from https://docs.gitbook.com/billing-and-admin/account-settings
Fetching text from https://docs.gitbook.com/billing-and-admin/plans
Fetching text from https://docs.gitbook.com/troubleshooting/faqs
Fetching text from https://docs.gitbook.com/troubleshooting/hard-refresh
Fetching text from https://docs.gitbook.com/troubleshooting/report-bugs
Fetching text from https://docs.gitbook.com/troubleshooting/connectivity-issues
Fetching text from https://docs.gitbook.com/troubleshooting/support

print(f"fetched {len(all_pages_data)} documents.")

# show second document
all_pages_data[2]

fetched 28 documents.
Document(page_content="Import\nFind out how to easily migrate your existing documentation and which formats are supported.\nThe import function allows you to migrate and unify existing documentation in GitBook. You can choose to import single or multiple pages although limits apply. \nPermissions\nAll members with editor permission or above can use the import feature.\nSupported formats\nGitBook supports imports from websites or files that are:\nMarkdown (.md or .markdown)\nHTML (.html)\nMicrosoft Word (.docx).\nWe also support import from:\nConfluence\nNotion\nGitHub Wiki\nQuip\nDropbox Paper\nGoogle Docs\nYou can also upload a ZIP\n \ncontaining HTML or Markdown files when \nimporting multiple pages.\nNote: this feature is in beta.\nFeel free to suggest import sources we don't support yet and \nlet us know\n if you have any issues.\nImport panel\nWhen you create a new space, you'll have the option to import content straight away:\nThe new page menu\nImport a page or subpage by selecting \nImport Page\n from the New Page menu, or \nImport Subpage\n in the page action menu, found in the table of contents:\nImport from the page action menu\nWhen you choose your input source, instructions will explain how to proceed.\nAlthough GitBook supports importing content from different kinds of sources, the end result might be different from your source due to differences in product features and document format.\nLimits\nGitBook currently has the following limits for imported content:\nThe maximum number of pages that can be uploaded in a single import is \n20.\nThe maximum number of files (images etc.) that can be uploaded in a single import is \n20.\nGetting started - \nPrevious\nOverview\nNext\n - Getting started\nGit Sync\nLast modified \n4mo ago", lookup_str='', metadata={'source': 'https://docs.gitbook.com/getting-started/import', 'title': 'Import'}, lookup_index=0)
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html
BiliBili

Bilibili is one of the most beloved long-form video sites in China.

This loader utilizes the bilibili-api to fetch the text transcript from Bilibili. With this BiliBiliLoader, users can easily obtain the transcript of their desired video content on the platform.

#!pip install bilibili-api-python

from langchain.document_loaders import BiliBiliLoader

loader = BiliBiliLoader(
    ["https://www.bilibili.com/video/BV1xt411o7Xu/"]
)
loader.load()
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/bilibili.html
Hacker News

Hacker News (sometimes abbreviated as HN) is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. In general, content that can be submitted is defined as "anything that gratifies one's intellectual curiosity."

This notebook covers how to pull page data and comments from Hacker News.

from langchain.document_loaders import HNLoader

loader = HNLoader("https://news.ycombinator.com/item?id=34817881")
data = loader.load()

data[0].page_content[:300]

"delta_p_delta_x 73 days ago \n | next [–] \n\nAstrophysical and cosmological simulations are often insightful. They're also very cross-disciplinary; besides the obvious astrophysics, there's networking and sysadmin, parallel computing and algorithm theory (so that the simulation programs a"

data[0].metadata

{'source': 'https://news.ycombinator.com/item?id=34817881', 'title': 'What Lights the Universe’s Standard Candles?'}
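Comment threads can get long, so it is common to split the loaded page before indexing it. A minimal sketch reusing the data loaded above together with the CharacterTextSplitter covered in the Text Splitters section:

from langchain.text_splitter import CharacterTextSplitter

# break the thread into ~1000-character chunks for downstream indexing
splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
split_docs = splitter.split_documents(data)
print(f"{len(data)} documents split into {len(split_docs)} chunks")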
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hacker_news.html
Twitter

Twitter is an online social media and social networking service.

This loader fetches the text from the Tweets of a list of Twitter users, using the tweepy Python package. You must initialize the loader with your Twitter API token, and you need to pass in the Twitter usernames you want to extract.

from langchain.document_loaders import TwitterTweetLoader

#!pip install tweepy

loader = TwitterTweetLoader.from_bearer_token(
    oauth2_bearer_token="YOUR BEARER TOKEN",
    twitter_users=['elonmusk'],
    number_tweets=50,  # Default value is 100
)

# Or load from access token and consumer keys
# loader = TwitterTweetLoader.from_secrets(
#     access_token='YOUR ACCESS TOKEN',
#     access_token_secret='YOUR ACCESS TOKEN SECRET',
#     consumer_key='YOUR CONSUMER KEY',
#     consumer_secret='YOUR CONSUMER SECRET',
#     twitter_users=['elonmusk'],
#     number_tweets=50,
# )

documents = loader.load()
documents[:5]
[Document(page_content='@MrAndyNgo @REI One store after another shutting down', metadata={'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}),
 Document(page_content='@KanekoaTheGreat @joshrogin @glennbeck Large ships are fundamentally vulnerable to ballistic (hypersonic) missiles', metadata={'created_at': 'Tue Apr 18 03:43:25 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}),
 Document(page_content='@KanekoaTheGreat The Golden Rule', metadata={'created_at': 'Tue Apr 18 03:37:17 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}),
 Document(page_content='@KanekoaTheGreat 🧐', metadata={'created_at': 'Tue Apr 18 03:35:48 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}),
 Document(page_content='@TRHLofficial What’s he talking about and why is it sponsored by Erik’s son?', metadata={'created_at': 'Tue Apr 18 03:32:17 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}})]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html
Notion DB 2/2

Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.

NotionDBLoader is a Python class for loading content from a Notion database. It retrieves pages from the database, reads their content, and returns a list of Document objects.

Requirements

A Notion Database
A Notion Integration Token

Setup

1. Create a Notion Table Database

Create a new table database in Notion. You can add any columns to the database and they will be treated as metadata. For example, you can add the following columns:

Title: set Title as the default property.
Categories: a multi-select property to store categories associated with the page.
Keywords: a multi-select property to store keywords associated with the page.

Add your content to the body of each page in the database. The NotionDBLoader will extract the content and metadata from these pages.

2. Create a Notion Integration

To create a Notion Integration, follow these steps:

Visit the Notion Developers page and log in with your Notion account.
Click on the "+ New integration" button.
Give your integration a name and choose the workspace where your database is located.
Select the required capabilities; this extension only needs the Read content capability.
Click the "Submit" button to create the integration.
Once the integration is created, you'll be provided with an Integration Token (API key). Copy this token and keep it safe, as you'll need it to use the NotionDBLoader.

3. Connect the Integration to the Database

To connect your integration to the database, follow these steps:

Open your database in Notion.
Click on the three-dot menu icon in the top right corner of the database view.
Click on the "+ New integration" button.
Find your integration; you may need to start typing its name in the search box.
Click on the "Connect" button to connect the integration to the database.

4. Get the Database ID

To get the database ID, follow these steps:

Open your database in Notion.
Click on the three-dot menu icon in the top right corner of the database view.
Select "Copy link" from the menu to copy the database URL to your clipboard.

The database ID is the long string of alphanumeric characters found in the URL. It typically looks like this: https://www.notion.so/username/8935f9d140a04f95a872520c4f123456?v=…. In this example, the database ID is 8935f9d140a04f95a872520c4f123456.

With the database properly set up and the integration token and database ID in hand, you can now use the NotionDBLoader code to load content and metadata from your Notion database.

Usage

NotionDBLoader is part of the langchain package's document loaders. You can use it as follows:

from getpass import getpass

NOTION_TOKEN = getpass()
DATABASE_ID = getpass()

········
········

from langchain.document_loaders import NotionDBLoader
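Before running the loader below, you can sanity-check the database ID you copied, since it is just the last path segment of the URL before the query string. A small sketch with a made-up URL following the format described in step 4:

# hypothetical URL in the format shown above
url = "https://www.notion.so/username/8935f9d140a04f95a872520c4f123456?v=12345"
database_id = url.split("/")[-1].split("?")[0]
print(database_id)  # 8935f9d140a04f95a872520c4f123456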
loader = NotionDBLoader(
    integration_token=NOTION_TOKEN,
    database_id=DATABASE_ID,
    request_timeout_sec=30  # optional, defaults to 10
)

docs = loader.load()

print(docs)
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notiondb.html
Airbyte JSON

Airbyte is a data integration platform for ELT pipelines from APIs, databases and files to warehouses and lakes. It has the largest catalog of ELT connectors to data warehouses and databases.

This covers how to load any source from Airbyte into a local JSON file that can be read in as a document.

Prerequisites: have Docker Desktop installed.

Steps:

1. Clone Airbyte from GitHub: git clone https://github.com/airbytehq/airbyte.git
2. Switch into the Airbyte directory: cd airbyte
3. Start Airbyte: docker compose up
4. In your browser, visit http://localhost:8000. You will be asked for a username and password. By default, that's username airbyte and password password.
5. Set up any source you wish.
6. Set the destination as Local JSON, with a specified destination path, let's say /json_data. Set up a manual sync.
7. Run the connection.
8. To see what files were created, navigate to: file:///tmp/airbyte_local
9. Find your data and copy the path. That path should be saved in the file variable below. It should start with /tmp/airbyte_local.

from langchain.document_loaders import AirbyteJSONLoader

!ls /tmp/airbyte_local/json_data/

_airbyte_raw_pokemon.jsonl

loader = AirbyteJSONLoader('/tmp/airbyte_local/json_data/_airbyte_raw_pokemon.jsonl')
data = loader.load()
print(data[0].page_content[:500])

abilities:
ability:
name: blaze
url: https://pokeapi.co/api/v2/ability/66/
is_hidden: False
slot: 1
ability:
name: solar-power
url: https://pokeapi.co/api/v2/ability/94/
is_hidden: True
slot: 3
base_experience: 267
forms:
name: charizard
url: https://pokeapi.co/api/v2/pokemon-form/6/
game_indices:
game_index: 180
version:
name: red
url: https://pokeapi.co/api/v2/version/1/
game_index: 180
version:
name: blue
url: https://pokeapi.co/api/v2/version/2/
game_index: 180
version: n
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/airbyte_json.html
IMSDb

IMSDb is the Internet Movie Script Database. This covers how to load IMSDb webpages into a document format that we can use downstream.

from langchain.document_loaders import IMSDbLoader

loader = IMSDbLoader("https://imsdb.com/scripts/BlacKkKlansman.html")
data = loader.load()

data[0].page_content[:500]

'\n\r\n\r\n\r\n\r\n BLACKKKLANSMAN\r\n \r\n \r\n \r\n \r\n Written by\r\n\r\n Charlie Wachtel & David Rabinowitz\r\n\r\n and\r\n\r\n Kevin Willmott & Spike Lee\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n FADE IN:\r\n \r\n SCENE FROM "GONE WITH'

data[0].metadata

{'source': 'https://imsdb.com/scripts/BlacKkKlansman.html'}
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/imsdb.html
Jupyter Notebook

Jupyter Notebook (formerly IPython Notebook) is a web-based interactive computational environment for creating notebook documents. This notebook covers how to load data from a Jupyter notebook (.ipynb) into a format suitable for LangChain.

from langchain.document_loaders import NotebookLoader

loader = NotebookLoader(
    "example_data/notebook.ipynb",
    include_outputs=True,
    max_output_length=20,
    remove_newline=True,
)

NotebookLoader.load() loads the .ipynb notebook file into a Document object.

Parameters:

include_outputs (bool): whether to include cell outputs in the resulting document (default is False).
max_output_length (int): the maximum number of characters to include from each cell output (default is 10).
remove_newline (bool): whether to remove newline characters from the cell sources and outputs (default is False).
traceback (bool): whether to include full traceback (default is False).

loader.load()
[Document(page_content='\'markdown\' cell: \'[\'# Notebook\', \'\', \'This notebook covers how to load data from an .ipynb notebook into a format suitable by LangChain.\']\'\n\n \'code\' cell: \'[\'from langchain.document_loaders import NotebookLoader\']\'\n\n \'code\' cell: \'[\'loader = NotebookLoader("example_data/notebook.ipynb")\']\'\n\n \'markdown\' cell: \'[\'`NotebookLoader.load()` loads the `.ipynb` notebook file into a `Document` object.\', \'\', \'**Parameters**:\', \'\', \'* `include_outputs` (bool): whether to include cell outputs in the resulting document (default is False).\', \'* `max_output_length` (int): the maximum number of characters to include from each cell output (default is 10).\', \'* `remove_newline` (bool): whether to remove newline characters from the cell sources and outputs (default is False).\', \'* `traceback` (bool): whether to include full traceback (default is False).\']\'\n\n \'code\' cell: \'[\'loader.load(include_outputs=True, max_output_length=20, remove_newline=True)\']\'\n\n', metadata={'source': 'example_data/notebook.ipynb'})]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/jupyter_notebook.html
21b312da3a6b-0
.ipynb .pdf Image captions Contents Prepare a list of image urls from Wikimedia Create the loader Create the index Query Image captions# By default, the loader utilizes the pre-trained Salesforce BLIP image captioning model. This notebook shows how to use the ImageCaptionLoader to generate a queryable index of image captions. #!pip install transformers from langchain.document_loaders import ImageCaptionLoader Prepare a list of image urls from Wikimedia# list_image_urls = [ 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg', 'https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg', 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg', 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg',
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html
21b312da3a6b-1
'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg', 'https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg', 'https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg', ] Create the loader# loader = ImageCaptionLoader(path_images=list_image_urls) list_docs = loader.load() list_docs /Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/transformers/generation/utils.py:1313: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation. warnings.warn(
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html
21b312da3a6b-2
warnings.warn( [Document(page_content='an image of a frog on a flower [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg'}), Document(page_content='an image of a shark swimming in the ocean [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg'}), Document(page_content='an image of a painting of a battle scene [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg'}), Document(page_content='an image of a passion fruit and a half cut passion [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg'}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html
21b312da3a6b-3
Document(page_content='an image of the spiral galaxy [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg'}), Document(page_content='an image of a man on skis in the snow [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg'}), Document(page_content='an image of a flower in the dark [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg'})] from PIL import Image import requests Image.open(requests.get(list_image_urls[0], stream=True).raw).convert('RGB') Create the index# from langchain.indexes import VectorstoreIndexCreator index = VectorstoreIndexCreator().from_loaders([loader])
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html
21b312da3a6b-4
index = VectorstoreIndexCreator().from_loaders([loader]) /Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm /Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/transformers/generation/utils.py:1313: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation. warnings.warn( Using embedded DuckDB without persistence: data will be transient Query# query = "What's the painting about?" index.query(query) ' The painting is about a battle scene.' query = "What kind of images are there?" index.query(query) ' There are images of a spiral galaxy, a painting of a battle scene, a flower in the dark, and a frog on a flower.' previous Google Drive next Iugu Contents Prepare a list of image urls from Wikimedia Create the loader Create the index Query By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
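If you also want to know which image produced an answer, the index wrapper exposes query_with_sources. A quick sketch against the index built above — the question is illustrative:
# Return both an answer and the source image paths it was drawn from
query = "Which image shows an animal?"
index.query_with_sources(query)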
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html
9cc014eaf58e-0
.ipynb .pdf AWS S3 File AWS S3 File# Amazon Simple Storage Service (Amazon S3) is an object storage service. This covers how to load document objects from an AWS S3 File object. from langchain.document_loaders import S3FileLoader #!pip install boto3 loader = S3FileLoader("testing-hwc", "fake.docx") loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)] previous AWS S3 Directory next Azure Blob Storage Container By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
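S3FileLoader uses boto3 under the hood, so credentials resolve through the standard boto3 chain (environment variables, the shared credentials file, or an attached IAM role). A minimal sketch using environment variables — the bucket, key, and credential values are placeholders, not real resources:
import os
# Placeholder credentials; in practice prefer a credentials file or IAM role
os.environ["AWS_ACCESS_KEY_ID"] = "..."
os.environ["AWS_SECRET_ACCESS_KEY"] = "..."
from langchain.document_loaders import S3FileLoader
loader = S3FileLoader("my-bucket", "docs/report.docx")  # hypothetical bucket and key
docs = loader.load()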
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/aws_s3_file.html
6b4b01271522-0
.ipynb .pdf Obsidian Obsidian# Obsidian is a powerful and extensible knowledge base that works on top of your local folder of plain text files. This notebook covers how to load documents from an Obsidian database. Since an Obsidian vault is stored on disk as a folder of Markdown files, the loader simply takes a path to this directory. Obsidian files also sometimes contain metadata, which is a YAML block at the top of the file. These values will be added to the document’s metadata. (ObsidianLoader can also be passed a collect_metadata=False argument to disable this behavior.) from langchain.document_loaders import ObsidianLoader loader = ObsidianLoader("<path-to-obsidian>") docs = loader.load() previous Notion DB 1/2 next Psychic By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
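To skip the YAML front-matter parsing described above, pass collect_metadata=False — a minimal sketch:
# Disable YAML front-matter collection; metadata will contain only file info
loader = ObsidianLoader("<path-to-obsidian>", collect_metadata=False)
docs = loader.load()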
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/obsidian.html
bdf636a2307f-0
.ipynb .pdf Notion DB 1/2 Contents 🧑 Instructions for ingesting your own dataset Notion DB 1/2# Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management. This notebook covers how to load documents from a Notion database dump. To get this Notion dump, follow these instructions: 🧑 Instructions for ingesting your own dataset# Export your dataset from Notion. You can do this by clicking on the three dots in the upper right-hand corner and then clicking Export. When exporting, make sure to select the Markdown & CSV format option. This will produce a .zip file in your Downloads folder. Move the .zip file into this repository. Run the following command to unzip the zip file (replace the Export... with your own file name as needed). unzip Export-d3adfe0f-3131-4bf3-8987-a52017fc1bae.zip -d Notion_DB Run the following command to ingest the data. from langchain.document_loaders import NotionDirectoryLoader loader = NotionDirectoryLoader("Notion_DB") docs = loader.load() previous Notion DB 2/2 next Obsidian Contents 🧑 Instructions for ingesting your own dataset By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
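A quick sanity check after loading — each Markdown file in the export becomes one Document, so the count depends on your workspace:
docs = loader.load()
print(f"Loaded {len(docs)} documents from the Notion export")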
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notion.html
5957676b2e9b-0
.ipynb .pdf Facebook Chat Facebook Chat# Messenger is an American proprietary instant messaging app and platform developed by Meta Platforms. Originally launched as Facebook Chat in 2008, the service was revamped in 2010. This notebook covers how to load data from Facebook Chat into a format that can be ingested into LangChain. #pip install pandas from langchain.document_loaders import FacebookChatLoader loader = FacebookChatLoader("example_data/facebook_chat.json") loader.load()
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/facebook_chat.html
5957676b2e9b-1
loader = FacebookChatLoader("example_data/facebook_chat.json") loader.load() [Document(page_content='User 2 on 2023-02-05 03:46:11: Bye!\n\nUser 1 on 2023-02-05 03:43:55: Oh no worries! Bye\n\nUser 2 on 2023-02-05 03:24:37: No Im sorry it was my mistake, the blue one is not for sale\n\nUser 1 on 2023-02-05 03:05:40: I thought you were selling the blue one!\n\nUser 1 on 2023-02-05 03:05:09: Im not interested in this bag. Im interested in the blue one!\n\nUser 2 on 2023-02-05 03:04:28: Here is $129\n\nUser 2 on 2023-02-05 03:04:05: Online is at least $100\n\nUser 1 on 2023-02-05 02:59:59: How much do you want?\n\nUser 2 on 2023-02-04 22:17:56: Goodmorning! $50 is too low.\n\nUser 1 on 2023-02-04 14:17:02: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!\n\n', metadata={'source': 'example_data/facebook_chat.json'})] previous EverNote next File Directory By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
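As the output above shows, the whole chat comes back as a single Document with messages separated by blank lines. A small sketch that splits it back into individual messages for inspection:
docs = loader.load()
# Each message is separated by a blank line in the loader's output
messages = docs[0].page_content.strip().split("\n\n")
print(f"{len(messages)} messages; first: {messages[0]}")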
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/facebook_chat.html
3c2481bc8e45-0
.ipynb .pdf Sitemap Contents Filtering sitemap URLs Local Sitemap Sitemap# Extending WebBaseLoader, SitemapLoader loads a sitemap from a given URL, then scrapes and loads every page in the sitemap, returning each page as a Document. The scraping is done concurrently, with a reasonable default limit of 2 requests per second. You can raise this limit if you aren’t concerned about being a good citizen, you control the server being scraped, or you don’t care about the load. Note that while this will speed up the scraping process, it may cause the server to block you. Be careful! !pip install nest_asyncio Requirement already satisfied: nest_asyncio in /Users/tasp/Code/projects/langchain/.venv/lib/python3.10/site-packages (1.5.6) [notice] A new release of pip available: 22.3.1 -> 23.0.1 [notice] To update, run: pip install --upgrade pip # fixes a bug with asyncio and jupyter import nest_asyncio nest_asyncio.apply() from langchain.document_loaders.sitemap import SitemapLoader sitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml") docs = sitemap_loader.load() You can change the requests_per_second parameter to increase the max concurrent requests, and use requests_kwargs to pass keyword arguments when sending requests. sitemap_loader.requests_per_second = 2 # Optional: avoid `[SSL: CERTIFICATE_VERIFY_FAILED]` issue sitemap_loader.requests_kwargs = {"verify": False} docs[0]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-1
Document(page_content='\n\n\n\n\n\nWelcome to LangChain — 🦜🔗 LangChain 0.0.123\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSkip to main content\n\n\n\n\n\n\n\n\n\n\nCtrl+K\n\n\n\n\n\n\n\n\n\n\n\n\n🦜🔗 LangChain 0.0.123\n\n\n\nGetting Started\n\nQuickstart Guide\n\nModules\n\nPrompt Templates\nGetting Started\nKey Concepts\nHow-To Guides\nCreate a custom prompt template\nCreate a custom example selector\nProvide few shot examples to a prompt\nPrompt Serialization\nExample Selectors\nOutput Parsers\n\n\nReference\nPromptTemplates\nExample Selector\n\n\n\n\nLLMs\nGetting Started\nKey Concepts\nHow-To Guides\nGeneric Functionality\nCustom LLM\nFake LLM\nLLM Caching\nLLM Serialization\nToken Usage Tracking\n\n\nIntegrations\nAI21\nAleph Alpha\nAnthropic\nAzure OpenAI LLM Example\nBanana\nCerebriumAI LLM Example\nCohere\nDeepInfra LLM Example\nForefrontAI LLM Example\nGooseAI LLM Example\nHugging Face Hub\nManifest\nModal\nOpenAI\nPetals LLM Example\nPromptLayer OpenAI\nSageMakerEndpoint\nSelf-Hosted Models via Runhouse\nStochasticAI\nWriter\n\n\nAsync API for LLM\nStreaming with LLMs\n\n\nReference\n\n\nDocument Loaders\nKey Concepts\nHow To Guides\nCoNLL-U\nAirbyte JSON\nAZLyrics\nBlackboard\nCollege Confidential\nCopy Paste\nCSV Loader\nDirectory
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-2
JSON\nAZLyrics\nBlackboard\nCollege Confidential\nCopy Paste\nCSV Loader\nDirectory Loader\nEmail\nEverNote\nFacebook Chat\nFigma\nGCS Directory\nGCS File Storage\nGitBook\nGoogle Drive\nGutenberg\nHacker News\nHTML\niFixit\nImages\nIMSDb\nMarkdown\nNotebook\nNotion\nObsidian\nPDF\nPowerPoint\nReadTheDocs Documentation\nRoam\ns3 Directory\ns3 File\nSubtitle Files\nTelegram\nUnstructured File Loader\nURL\nWeb Base\nWord Documents\nYouTube\n\n\n\n\nUtils\nKey Concepts\nGeneric Utilities\nBash\nBing Search\nGoogle Search\nGoogle Serper API\nIFTTT WebHooks\nPython REPL\nRequests\nSearxNG Search API\nSerpAPI\nWolfram Alpha\nZapier Natural Language Actions API\n\n\nReference\nPython REPL\nSerpAPI\nSearxNG Search\nDocstore\nText Splitter\nEmbeddings\nVectorStores\n\n\n\n\nIndexes\nGetting Started\nKey Concepts\nHow To Guides\nEmbeddings\nHypothetical Document Embeddings\nText Splitter\nVectorStores\nAtlasDB\nChroma\nDeep Lake\nElasticSearch\nFAISS\nMilvus\nOpenSearch\nPGVector\nPinecone\nQdrant\nRedis\nWeaviate\nChatGPT Plugin Retriever\nVectorStore Retriever\nAnalyze Document\nChat Index\nGraph QA\nQuestion Answering with Sources\nQuestion Answering\nSummarization\nRetrieval Question/Answering\nRetrieval Question Answering with Sources\nVector DB Text Generation\n\n\n\n\nChains\nGetting Started\nHow-To Guides\nGeneric Chains\nLoading from LangChainHub\nLLM Chain\nSequential Chains\nSerialization\nTransformation Chain\n\n\nUtility Chains\nAPI Chains\nSelf-Critique Chain with Constitutional
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-3
Chain\n\n\nUtility Chains\nAPI Chains\nSelf-Critique Chain with Constitutional AI\nBashChain\nLLMCheckerChain\nLLM Math\nLLMRequestsChain\nLLMSummarizationCheckerChain\nModeration\nPAL\nSQLite example\n\n\nAsync API for Chain\n\n\nKey Concepts\nReference\n\n\nAgents\nGetting Started\nKey Concepts\nHow-To Guides\nAgents and Vectorstores\nAsync API for Agent\nConversation Agent (for Chat Models)\nChatGPT Plugins\nCustom Agent\nDefining Custom Tools\nHuman as a tool\nIntermediate Steps\nLoading from LangChainHub\nMax Iterations\nMulti Input Tools\nSearch Tools\nSerialization\nAdding SharedMemory to an Agent and its Tools\nCSV Agent\nJSON Agent\nOpenAPI Agent\nPandas Dataframe Agent\nPython Agent\nSQL Database Agent\nVectorstore Agent\nMRKL\nMRKL Chat\nReAct\nSelf Ask With Search\n\n\nReference\n\n\nMemory\nGetting Started\nKey Concepts\nHow-To Guides\nConversationBufferMemory\nConversationBufferWindowMemory\nEntity Memory\nConversation Knowledge Graph Memory\nConversationSummaryMemory\nConversationSummaryBufferMemory\nConversationTokenBufferMemory\nAdding Memory To an LLMChain\nAdding Memory to a Multi-Input Chain\nAdding Memory to an Agent\nChatGPT Clone\nConversation Agent\nConversational Memory Customization\nCustom Memory\nMultiple Memory\n\n\n\n\nChat\nGetting Started\nKey Concepts\nHow-To Guides\nAgent\nChat Vector DB\nFew Shot Examples\nMemory\nPromptLayer ChatOpenAI\nStreaming\nRetrieval Question/Answering\nRetrieval Question Answering with Sources\n\n\n\n\n\nUse Cases\n\nAgents\nChatbots\nGenerate Examples\nData Augmented Generation\nQuestion Answering\nSummarization\nQuerying Tabular Data\nExtraction\nEvaluation\nAgent Benchmarking: Search + Calculator\nAgent VectorDB Question Answering Benchmarking\nBenchmarking
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-4
Benchmarking: Search + Calculator\nAgent VectorDB Question Answering Benchmarking\nBenchmarking Template\nData Augmented Question Answering\nUsing Hugging Face Datasets\nLLM Math\nQuestion Answering Benchmarking: Paul Graham Essay\nQuestion Answering Benchmarking: State of the Union Address\nQA Generation\nQuestion Answering\nSQL Question Answering Benchmarking: Chinook\n\n\nModel Comparison\n\nReference\n\nInstallation\nIntegrations\nAPI References\nPrompts\nPromptTemplates\nExample Selector\n\n\nUtilities\nPython REPL\nSerpAPI\nSearxNG Search\nDocstore\nText Splitter\nEmbeddings\nVectorStores\n\n\nChains\nAgents\n\n\n\nEcosystem\n\nLangChain Ecosystem\nAI21 Labs\nAtlasDB\nBanana\nCerebriumAI\nChroma\nCohere\nDeepInfra\nDeep Lake\nForefrontAI\nGoogle Search Wrapper\nGoogle Serper Wrapper\nGooseAI\nGraphsignal\nHazy Research\nHelicone\nHugging Face\nMilvus\nModal\nNLPCloud\nOpenAI\nOpenSearch\nPetals\nPGVector\nPinecone\nPromptLayer\nQdrant\nRunhouse\nSearxNG Search API\nSerpAPI\nStochasticAI\nUnstructured\nWeights & Biases\nWeaviate\nWolfram Alpha Wrapper\nWriter\n\n\n\nAdditional Resources\n\nLangChainHub\nGlossary\nLangChain Gallery\nDeployments\nTracing\nDiscord\nProduction Support\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n.rst\n\n\n\n\n\n\n\n.pdf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nWelcome to LangChain\n\n\n\n\n Contents
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-5
to LangChain\n\n\n\n\n Contents \n\n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\nWelcome to LangChain#\nLarge language models (LLMs) are emerging as a transformative technology, enabling\ndevelopers to build applications that they previously could not.\nBut using these LLMs in isolation is often not enough to\ncreate a truly powerful app - the real power comes when you are able to\ncombine them with other sources of computation or knowledge.\nThis library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:\n❓ Question Answering over specific documents\n\nDocumentation\nEnd-to-end Example: Question Answering over Notion Database\n\n💬 Chatbots\n\nDocumentation\nEnd-to-end Example: Chat-LangChain\n\n🤖 Agents\n\nDocumentation\nEnd-to-end Example: GPT+WolframAlpha\n\n\nGetting Started#\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\n\nGetting Started Documentation\n\n\n\n\n\nModules#\nThere are several main modules that LangChain provides support for.\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\nThese modules are, in increasing order of complexity:\n\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nLLMs: This includes a generic interface for all LLMs, and common utilities for working with LLMs.\nDocument Loaders: This includes a standard interface for loading documents, as well as specific integrations to all types of text data sources.\nUtils: Language models are often more powerful when interacting with other sources of knowledge or computation. This can include Python REPLs, embeddings, search engines, and more.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-6
of knowledge or computation. This can include Python REPLs, embeddings, search engines, and more. LangChain provides a large collection of common utils to use in your application.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nChat: Chat models are a variation on Language Models that expose a different API - rather than working with raw text, they work with messages. LangChain provides a standard interface for working with them and doing all the same things as above.\n\n\n\n\n\nUse Cases#\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\n\nAgents: Agents are systems that use a language model to interact with other tools. These can be used to do more grounded question/answering, interact with APIs, or even take actions.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nData Augmented Generation: Data Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-7
Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.\nQuestion Answering: Answering questions over specific documents, only utilizing the information in those documents to construct an answer. A type of Data Augmented Generation.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\nGenerate similar examples: Generating similar examples to a given input. This is a common use case for many applications, and LangChain provides some prompts/chains for assisting in this.\nCompare models: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\n\n\n\n\n\nReference Docs#\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\n\nReference Documentation\n\n\n\n\n\nLangChain Ecosystem#\nGuides for how other companies/products can be used with LangChain\n\nLangChain Ecosystem\n\n\n\n\n\nAdditional Resources#\nAdditional collection of resources we think may be useful as you develop your application!\n\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-8
and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nDiscord: Join us on our Discord to discuss all things LangChain!\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.\n\n\n\n\n\n\n\n\n\n\n\nnext\nQuickstart Guide\n\n\n\n\n\n\n\n\n\n Contents\n \n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\n\nBy Harrison Chase\n\n\n\n\n \n © Copyright 2023, Harrison Chase.\n \n\n\n\n\n Last updated on Mar 24, 2023.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n', lookup_str='', metadata={'source': 'https://python.langchain.com/en/stable/', 'loc': 'https://python.langchain.com/en/stable/', 'lastmod': '2023-03-24T19:30:54.647430+00:00', 'changefreq': 'weekly', 'priority': '1'}, lookup_index=0)
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-9
Filtering sitemap URLs# Sitemaps can be massive files, with thousands of URLs. Often you don’t need every single one of them. You can filter the URLs by passing a list of strings or regex patterns to the filter_urls parameter. Only URLs that match one of the patterns will be loaded. loader = SitemapLoader( "https://langchain.readthedocs.io/sitemap.xml", filter_urls=["https://python.langchain.com/en/latest/"] ) documents = loader.load() documents[0]
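Since filter_urls also accepts regex patterns, you can match on structure rather than a fixed prefix. A sketch against the same sitemap — the pattern is illustrative:
# Load only pages under the modules section (hypothetical filter)
loader = SitemapLoader(
    "https://langchain.readthedocs.io/sitemap.xml",
    filter_urls=[r"https://python\.langchain\.com/en/latest/modules/.*"],
)
docs = loader.load()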
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-10
Document(page_content='\n\n\n\n\n\nWelcome to LangChain — 🦜🔗 LangChain 0.0.123\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSkip to main content\n\n\n\n\n\n\n\n\n\n\nCtrl+K\n\n\n\n\n\n\n\n\n\n\n\n\n🦜🔗 LangChain 0.0.123\n\n\n\nGetting Started\n\nQuickstart Guide\n\nModules\n\nModels\nLLMs\nGetting Started\nGeneric Functionality\nHow to use the async API for LLMs\nHow to write a custom LLM wrapper\nHow (and why) to use the fake LLM\nHow to cache LLM calls\nHow to serialize LLM classes\nHow to stream LLM responses\nHow to track token usage\n\n\nIntegrations\nAI21\nAleph Alpha\nAnthropic\nAzure OpenAI LLM Example\nBanana\nCerebriumAI LLM Example\nCohere\nDeepInfra LLM Example\nForefrontAI LLM Example\nGooseAI LLM Example\nHugging Face Hub\nManifest\nModal\nOpenAI\nPetals LLM Example\nPromptLayer OpenAI\nSageMakerEndpoint\nSelf-Hosted Models via Runhouse\nStochasticAI\nWriter\n\n\nReference\n\n\nChat Models\nGetting Started\nHow-To Guides\nHow to use few shot examples\nHow to stream responses\n\n\nIntegrations\nAzure\nOpenAI\nPromptLayer ChatOpenAI\n\n\n\n\nText Embedding Models\nAzureOpenAI\nCohere\nFake Embeddings\nHugging Face Hub\nInstructEmbeddings\nOpenAI\nSageMaker Endpoint Embeddings\nSelf
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-11
Face Hub\nInstructEmbeddings\nOpenAI\nSageMaker Endpoint Embeddings\nSelf Hosted Embeddings\nTensorflowHub\n\n\n\n\nPrompts\nPrompt Templates\nGetting Started\nHow-To Guides\nHow to create a custom prompt template\nHow to create a prompt template that uses few shot examples\nHow to work with partial Prompt Templates\nHow to serialize prompts\n\n\nReference\nPromptTemplates\nExample Selector\n\n\n\n\nChat Prompt Template\nExample Selectors\nHow to create a custom example selector\nLengthBased ExampleSelector\nMaximal Marginal Relevance ExampleSelector\nNGram Overlap ExampleSelector\nSimilarity ExampleSelector\n\n\nOutput Parsers\nOutput Parsers\nCommaSeparatedListOutputParser\nOutputFixingParser\nPydanticOutputParser\nRetryOutputParser\nStructured Output Parser\n\n\n\n\nIndexes\nGetting Started\nDocument Loaders\nCoNLL-U\nAirbyte JSON\nAZLyrics\nBlackboard\nCollege Confidential\nCopy Paste\nCSV Loader\nDirectory Loader\nEmail\nEverNote\nFacebook Chat\nFigma\nGCS Directory\nGCS File Storage\nGitBook\nGoogle Drive\nGutenberg\nHacker News\nHTML\niFixit\nImages\nIMSDb\nMarkdown\nNotebook\nNotion\nObsidian\nPDF\nPowerPoint\nReadTheDocs Documentation\nRoam\ns3 Directory\ns3 File\nSubtitle Files\nTelegram\nUnstructured File Loader\nURL\nWeb Base\nWord Documents\nYouTube\n\n\nText Splitters\nGetting Started\nCharacter Text Splitter\nHuggingFace Length Function\nLatex Text Splitter\nMarkdown Text Splitter\nNLTK Text Splitter\nPython Code Text Splitter\nRecursiveCharacterTextSplitter\nSpacy Text Splitter\ntiktoken (OpenAI) Length Function\nTiktokenText Splitter\n\n\nVectorstores\nGetting Started\nAtlasDB\nChroma\nDeep
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-12
Splitter\n\n\nVectorstores\nGetting Started\nAtlasDB\nChroma\nDeep Lake\nElasticSearch\nFAISS\nMilvus\nOpenSearch\nPGVector\nPinecone\nQdrant\nRedis\nWeaviate\n\n\nRetrievers\nChatGPT Plugin Retriever\nVectorStore Retriever\n\n\n\n\nMemory\nGetting Started\nHow-To Guides\nConversationBufferMemory\nConversationBufferWindowMemory\nEntity Memory\nConversation Knowledge Graph Memory\nConversationSummaryMemory\nConversationSummaryBufferMemory\nConversationTokenBufferMemory\nHow to add Memory to an LLMChain\nHow to add memory to a Multi-Input Chain\nHow to add Memory to an Agent\nHow to customize conversational memory\nHow to create a custom Memory class\nHow to use multiple memroy classes in the same chain\n\n\n\n\nChains\nGetting Started\nHow-To Guides\nAsync API for Chain\nLoading from LangChainHub\nLLM Chain\nSequential Chains\nSerialization\nTransformation Chain\nAnalyze Document\nChat Index\nGraph QA\nHypothetical Document Embeddings\nQuestion Answering with Sources\nQuestion Answering\nSummarization\nRetrieval Question/Answering\nRetrieval Question Answering with Sources\nVector DB Text Generation\nAPI Chains\nSelf-Critique Chain with Constitutional AI\nBashChain\nLLMCheckerChain\nLLM Math\nLLMRequestsChain\nLLMSummarizationCheckerChain\nModeration\nPAL\nSQLite example\n\n\nReference\n\n\nAgents\nGetting Started\nTools\nGetting Started\nDefining Custom Tools\nMulti Input Tools\nBash\nBing Search\nChatGPT Plugins\nGoogle Search\nGoogle Serper API\nHuman as a tool\nIFTTT WebHooks\nPython REPL\nRequests\nSearch Tools\nSearxNG Search API\nSerpAPI\nWolfram Alpha\nZapier Natural Language Actions
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-13
Search API\nSerpAPI\nWolfram Alpha\nZapier Natural Language Actions API\n\n\nAgents\nAgent Types\nCustom Agent\nConversation Agent (for Chat Models)\nConversation Agent\nMRKL\nMRKL Chat\nReAct\nSelf Ask With Search\n\n\nToolkits\nCSV Agent\nJSON Agent\nOpenAPI Agent\nPandas Dataframe Agent\nPython Agent\nSQL Database Agent\nVectorstore Agent\n\n\nAgent Executors\nHow to combine agents and vectorstores\nHow to use the async API for Agents\nHow to create ChatGPT Clone\nHow to access intermediate steps\nHow to cap the max number of iterations\nHow to add SharedMemory to an Agent and its Tools\n\n\n\n\n\nUse Cases\n\nPersonal Assistants\nQuestion Answering over Docs\nChatbots\nQuerying Tabular Data\nInteracting with APIs\nSummarization\nExtraction\nEvaluation\nAgent Benchmarking: Search + Calculator\nAgent VectorDB Question Answering Benchmarking\nBenchmarking Template\nData Augmented Question Answering\nUsing Hugging Face Datasets\nLLM Math\nQuestion Answering Benchmarking: Paul Graham Essay\nQuestion Answering Benchmarking: State of the Union Address\nQA Generation\nQuestion Answering\nSQL Question Answering Benchmarking: Chinook\n\n\n\nReference\n\nInstallation\nIntegrations\nAPI References\nPrompts\nPromptTemplates\nExample Selector\n\n\nUtilities\nPython REPL\nSerpAPI\nSearxNG Search\nDocstore\nText Splitter\nEmbeddings\nVectorStores\n\n\nChains\nAgents\n\n\n\nEcosystem\n\nLangChain Ecosystem\nAI21 Labs\nAtlasDB\nBanana\nCerebriumAI\nChroma\nCohere\nDeepInfra\nDeep Lake\nForefrontAI\nGoogle Search Wrapper\nGoogle Serper Wrapper\nGooseAI\nGraphsignal\nHazy Research\nHelicone\nHugging
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-14
Serper Wrapper\nGooseAI\nGraphsignal\nHazy Research\nHelicone\nHugging Face\nMilvus\nModal\nNLPCloud\nOpenAI\nOpenSearch\nPetals\nPGVector\nPinecone\nPromptLayer\nQdrant\nRunhouse\nSearxNG Search API\nSerpAPI\nStochasticAI\nUnstructured\nWeights & Biases\nWeaviate\nWolfram Alpha Wrapper\nWriter\n\n\n\nAdditional Resources\n\nLangChainHub\nGlossary\nLangChain Gallery\nDeployments\nTracing\nDiscord\nProduction Support\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n.rst\n\n\n\n\n\n\n\n.pdf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nWelcome to LangChain\n\n\n\n\n Contents \n\n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\nWelcome to LangChain#\nLangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\n\nBe data-aware: connect a language model to other sources of data\nBe agentic: allow a language model to interact with its environment\n\nThe LangChain framework is designed with the above principles in mind.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\n\nGetting Started#\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\n\nGetting Started Documentation\n\n\n\n\n\nModules#\nThere
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-15
an Language Model application.\n\nGetting Started Documentation\n\n\n\n\n\nModules#\nThere are several main modules that LangChain provides support for.\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\nThese modules are, in increasing order of complexity:\n\nModels: The various model types and model integrations LangChain supports.\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\n\n\n\n\n\nUse Cases#\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\n\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\nChatbots: Since language models are good at producing text, that makes them
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-16
construct an answer.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\nExtraction: Extract structured information from text.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\n\n\n\n\n\nReference Docs#\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\n\nReference Documentation\n\n\n\n\n\nLangChain Ecosystem#\nGuides for how other companies/products can be used with LangChain\n\nLangChain Ecosystem\n\n\n\n\n\nAdditional Resources#\nAdditional collection of resources we think may be useful as you develop your application!\n\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nTracing: A guide on using tracing in LangChain
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-17
template repositories for deploying LangChain apps.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.\n\n\n\n\n\n\n\n\n\n\n\nnext\nQuickstart Guide\n\n\n\n\n\n\n\n\n\n Contents\n \n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\n\nBy Harrison Chase\n\n\n\n\n \n © Copyright 2023, Harrison Chase.\n \n\n\n\n\n Last updated on Mar 27, 2023.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n', lookup_str='', metadata={'source': 'https://python.langchain.com/en/latest/', 'loc': 'https://python.langchain.com/en/latest/', 'lastmod': '2023-03-27T22:50:49.790324+00:00', 'changefreq': 'daily', 'priority': '0.9'}, lookup_index=0)
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
3c2481bc8e45-18
Local Sitemap# The sitemap loader can also be used to load local files. sitemap_loader = SitemapLoader(web_path="example_data/sitemap.xml", is_local=True) docs = sitemap_loader.load() Fetching pages: 100%|####################################################################################################################################| 3/3 [00:00<00:00, 3.91it/s] previous PDF next Subtitle Contents Filtering sitemap URLs Local Sitemap By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/sitemap.html
6eab9dd4e8b9-0
.ipynb .pdf Images Contents Using Unstructured Retain Elements Images# This covers how to load images such as JPG or PNG into a document format that we can use downstream. Using Unstructured# #!pip install pdfminer from langchain.document_loaders.image import UnstructuredImageLoader loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg") data = loader.load() data[0]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html
6eab9dd4e8b9-1
Document(page_content="LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n\n\n‘Zxjiang Shen' (F3}, Ruochen Zhang”, Melissa Dell*, Benjamin Charles Germain\nLeet, Jacob Carlson, and Weining LiF\n\n\nsugehen\n\nshangthrows, et\n\n“Abstract. Recent advanocs in document image analysis (DIA) have been\n‘pimarliy driven bythe application of neural networks dell roar\n{uteomer could be aly deployed in production and extended fo farther\n[nvetigtion. However, various factory ke lcely organize codebanee\nsnd sophisticated modal cnigurations compat the ey ree of\n‘erin! innovation by wide sence, Though there have been sng\n‘Hors to improve reuablty and simplify deep lees (DL) mode\n‘aon, sone of them ae optimized for challenge inthe demain of DIA,\nThis roprscte a major gap in the extng fol, sw DIA i eal to\nscademic research acon wie range of dpi in the social ssencee\n[rary for streamlining the sage of DL in DIA research and appicn\n‘tons The core LayoutFaraer brary comes with a sch of simple and\nIntative interfaee or applying and eutomiing DI. odel fr Inyo de\npltfom for sharing both protrined modes an fal document dist\n{ation pipeline We demonutate that LayootPareer shea fr both\nlightweight and lrgeseledgtieation pipelines in eal-word uae ces\nThe leary pblely smal at Btspe://layost-pareergsthab So\n\n\n\n‘Keywords: Document Image Analysis» Deep Learning Layout Analysis\n‘Character Renguition - Open Serres dary «
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html
6eab9dd4e8b9-2
Image Analysis» Deep Learning Layout Analysis\n‘Character Renguition - Open Serres dary « Tol\n\n\nIntroduction\n\n\n‘Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndoctiment image analysis (DIA) tea including document image clasiffeation [I]\n", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0)
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html
6eab9dd4e8b9-3
Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg", mode="elements") data = loader.load() data[0] Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg', 'filename': 'layout-parser-paper-fast.jpg', 'page_number': 1, 'category': 'Title'}, lookup_index=0) previous HTML next Jupyter Notebook Contents Using Unstructured Retain Elements By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
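Because elements mode stores the element type under the category metadata key, you can filter the results — for example, keeping only the titles. A sketch based on the metadata shown above:
# Keep only elements that Unstructured classified as titles
titles = [doc for doc in data if doc.metadata.get("category") == "Title"]
len(titles)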
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html
47f11e6dbbda-0
.ipynb .pdf Modern Treasury Modern Treasury# Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money. Connect to banks and payment systems Track transactions and balances in real-time Automate payment operations for scale This notebook covers how to load data from the Modern Treasury REST API into a format that can be ingested into LangChain, along with example usage for vectorization. import os from langchain.document_loaders import ModernTreasuryLoader from langchain.indexes import VectorstoreIndexCreator The Modern Treasury API requires an organization ID and API key, which can be found in the Modern Treasury dashboard within developer settings. This document loader also requires a resource option, which defines what data you want to load. The following resources are available: payment_orders Documentation expected_payments Documentation returns Documentation incoming_payment_details Documentation counterparties Documentation internal_accounts Documentation external_accounts Documentation transactions Documentation ledgers Documentation ledger_accounts Documentation ledger_transactions Documentation events Documentation invoices Documentation modern_treasury_loader = ModernTreasuryLoader("payment_orders") # Create a vectorstore retriever from the loader # see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details index = VectorstoreIndexCreator().from_loaders([modern_treasury_loader]) modern_treasury_doc_retriever = index.vectorstore.as_retriever() previous Microsoft OneDrive next Notion DB 2/2 By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
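Once created, the retriever can be queried like any other LangChain retriever. A minimal sketch — the query text is illustrative:
# Retrieve the payment-order documents most relevant to the query
docs = modern_treasury_doc_retriever.get_relevant_documents("recent payment orders")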
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/modern_treasury.html
67bacbbdb7bf-0
.ipynb .pdf HuggingFace dataset Contents Example HuggingFace dataset# The Hugging Face Hub is home to over 5,000 datasets in more than 100 languages that can be used for a broad range of tasks across NLP, Computer Vision, and Audio. They are used for a diverse range of tasks such as translation, automatic speech recognition, and image classification. This notebook shows how to load Hugging Face Hub datasets into LangChain. from langchain.document_loaders import HuggingFaceDatasetLoader dataset_name = "imdb" page_content_column = "text" loader = HuggingFaceDatasetLoader(dataset_name, page_content_column) data = loader.load() data[:15]
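Each review is loaded as a Document with its label in the metadata, as the output below shows. A quick sketch of inspecting the label distribution before reading the raw documents:
from collections import Counter
# Count sentiment labels on the first 100 loaded documents
Counter(doc.metadata["label"] for doc in data[:100])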
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html
67bacbbdb7bf-1
data = loader.load() data[:15] [Document(page_content='I rented I AM CURIOUS-YELLOW from my video store because of all the controversy that surrounded it when it was first released in 1967. I also heard that at first it was seized by U.S. customs if it ever tried to enter this country, therefore being a fan of films considered "controversial" I really had to see this for myself.<br /><br />The plot is centered around a young Swedish drama student named Lena who wants to learn everything she can about life. In particular she wants to focus her attentions to making some sort of documentary on what the average Swede thought about certain political issues such as the Vietnam War and race issues in the United States. In between asking politicians and ordinary denizens of Stockholm about their opinions on politics, she has sex with her drama teacher, classmates, and married men.<br /><br />What kills me about I AM CURIOUS-YELLOW is that 40 years ago, this was considered pornographic. Really, the sex and nudity scenes are few and far between, even then it\'s not shot like some cheaply made porno. While my countrymen mind find it shocking, in reality sex and nudity are a major staple in Swedish cinema. Even Ingmar Bergman, arguably their answer to good old boy John Ford, had sex scenes in his films.<br /><br />I do commend the filmmakers for the fact that any sex shown in the film is shown for artistic purposes rather than just to shock people and make money to be shown in pornographic theaters in America. I AM CURIOUS-YELLOW is a good film for anyone wanting to study the meat and potatoes (no pun intended) of Swedish cinema. But really, this film doesn\'t have much of a plot.', metadata={'label': 0}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html
67bacbbdb7bf-2
Document(page_content='"I Am Curious: Yellow" is a risible and pretentious steaming pile. It doesn\'t matter what one\'s political views are because this film can hardly be taken seriously on any level. As for the claim that frontal male nudity is an automatic NC-17, that isn\'t true. I\'ve seen R-rated films with male nudity. Granted, they only offer some fleeting views, but where are the R-rated films with gaping vulvas and flapping labia? Nowhere, because they don\'t exist. The same goes for those crappy cable shows: schlongs swinging in the breeze but not a clitoris in sight. And those pretentious indie movies like The Brown Bunny, in which we\'re treated to the site of Vincent Gallo\'s throbbing johnson, but not a trace of pink visible on Chloe Sevigny. Before crying (or implying) "double-standard" in matters of nudity, the mentally obtuse should take into account one unavoidably obvious anatomical difference between men and women: there are no genitals on display when actresses appears nude, and the same cannot be said for a man. In fact, you generally won\'t see female genitals in an American film in anything short of porn or explicit erotica. This alleged double-standard is less a double standard than an admittedly depressing ability to come to terms culturally with the insides of women\'s bodies.', metadata={'label': 0}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html
67bacbbdb7bf-3
Document(page_content="If only to avoid making this type of film in the future. This film is interesting as an experiment but tells no cogent story.<br /><br />One might feel virtuous for sitting thru it because it touches on so many IMPORTANT issues but it does so without any discernable motive. The viewer comes away with no new perspectives (unless one comes up with one while one's mind wanders, as it will invariably do during this pointless film).<br /><br />One might better spend one's time staring out a window at a tree growing.<br /><br />", metadata={'label': 0}), Document(page_content="This film was probably inspired by Godard's Masculin, féminin and I urge you to see that film instead.<br /><br />The film has two strong elements and those are, (1) the realistic acting (2) the impressive, undeservedly good, photo. Apart from that, what strikes me most is the endless stream of silliness. Lena Nyman has to be most annoying actress in the world. She acts so stupid and with all the nudity in this film,...it's unattractive. Comparing to Godard's film, intellectuality has been replaced with stupidity. Without going too far on this subject, I would say that follows from the difference in ideals between the French and the Swedish society.<br /><br />A movie of its time, and place. 2/10.", metadata={'label': 0}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html
67bacbbdb7bf-4
Document(page_content='Oh, brother...after hearing about this ridiculous film for umpteen years all I can think of is that old Peggy Lee song..<br /><br />"Is that all there is??" ...I was just an early teen when this smoked fish hit the U.S. I was too young to get in the theater (although I did manage to sneak into "Goodbye Columbus"). Then a screening at a local film museum beckoned - Finally I could see this film, except now I was as old as my parents were when they schlepped to see it!!<br /><br />The ONLY reason this film was not condemned to the anonymous sands of time was because of the obscenity case sparked by its U.S. release. MILLIONS of people flocked to this stinker, thinking they were going to see a sex film...Instead, they got lots of closeups of gnarly, repulsive Swedes, on-street interviews in bland shopping malls, asinie political pretension...and feeble who-cares simulated sex scenes with saggy, pale actors.<br /><br />Cultural icon, holy grail, historic artifact..whatever this thing was, shred it, burn it, then stuff the ashes in a lead box!<br /><br />Elite esthetes still scrape to find value in its boring pseudo revolutionary political spewings..But if it weren\'t for the censorship scandal, it would have been ignored, then forgotten.<br /><br />Instead, the "I Am Blank, Blank" rhythymed title was repeated endlessly for years as a titilation for porno films (I am Curious, Lavender - for gay films, I Am Curious, Black - for blaxploitation films, etc..) and every ten years or so the thing rises from the dead, to be viewed by a new generation of suckers who want to see that "naughty sex film" that "revolutionized
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html
67bacbbdb7bf-5
new generation of suckers who want to see that "naughty sex film" that "revolutionized the film industry"...<br /><br />Yeesh, avoid like the plague..Or if you MUST see it - rent the video and fast forward to the "dirty" parts, just to get it over with.<br /><br />', metadata={'label': 0}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html
67bacbbdb7bf-6
Document(page_content="I would put this at the top of my list of films in the category of unwatchable trash! There are films that are bad, but the worst kind are the ones that are unwatchable but you are suppose to like them because they are supposed to be good for you! The sex sequences, so shocking in its day, couldn't even arouse a rabbit. The so called controversial politics is strictly high school sophomore amateur night Marxism. The film is self-consciously arty in the worst sense of the term. The photography is in a harsh grainy black and white. Some scenes are out of focus or taken from the wrong angle. Even the sound is bad! And some people call this art?<br /><br />", metadata={'label': 0}), Document(page_content="Whoever wrote the screenplay for this movie obviously never consulted any books about Lucille Ball, especially her autobiography. I've never seen so many mistakes in a biopic, ranging from her early years in Celoron and Jamestown to her later years with Desi. I could write a whole list of factual errors, but it would go on for pages. In all, I believe that Lucille Ball is one of those inimitable people who simply cannot be portrayed by anyone other than themselves. If I were Lucie Arnaz and Desi, Jr., I would be irate at how many mistakes were made in this film. The filmmakers tried hard, but the movie seems awfully sloppy to me.", metadata={'label': 0}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html
67bacbbdb7bf-7
Document(page_content='When I first saw a glimpse of this movie, I quickly noticed the actress who was playing the role of Lucille Ball. Rachel York\'s portrayal of Lucy is absolutely awful. Lucille Ball was an astounding comedian with incredible talent. To think about a legend like Lucille Ball being portrayed the way she was in the movie is horrendous. I cannot believe out of all the actresses in the world who could play a much better Lucy, the producers decided to get Rachel York. She might be a good actress in other roles but to play the role of Lucille Ball is tough. It is pretty hard to find someone who could resemble Lucille Ball, but they could at least find someone a bit similar in looks and talent. If you noticed York\'s portrayal of Lucy in episodes of I Love Lucy like the chocolate factory or vitavetavegamin, nothing is similar in any way-her expression, voice, or movement.<br /><br />To top it all off, Danny Pino playing Desi Arnaz is horrible. Pino does not qualify to play as Ricky. He\'s small and skinny, his accent is unreal, and once again, his acting is unbelievable. Although Fred and Ethel were not similar either, they were not as bad as the characters of Lucy and Ricky.<br /><br />Overall, extremely horrible casting and the story is badly told. If people want to understand the real life situation of Lucille Ball, I suggest watching A&E Biography of Lucy and Desi, read the book from Lucille Ball herself, or PBS\' American Masters: Finding Lucy. If you want to see a docudrama, "Before the Laughter" would be a better choice. The casting of Lucille Ball and Desi Arnaz in "Before the Laughter" is much better compared to this. At least, a similar aspect is shown rather than nothing.', metadata={'label': 0}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html
67bacbbdb7bf-8
Document(page_content='Who are these "They"- the actors? the filmmakers? Certainly couldn\'t be the audience- this is among the most air-puffed productions in existence. It\'s the kind of movie that looks like it was a lot of fun to shoot\x97 TOO much fun, nobody is getting any actual work done, and that almost always makes for a movie that\'s no fun to watch.<br /><br />Ritter dons glasses so as to hammer home his character\'s status as a sort of doppleganger of the bespectacled Bogdanovich; the scenes with the breezy Ms. Stratten are sweet, but have an embarrassing, look-guys-I\'m-dating-the-prom-queen feel to them. Ben Gazzara sports his usual cat\'s-got-canary grin in a futile attempt to elevate the meager plot, which requires him to pursue Audrey Hepburn with all the interest of a narcoleptic at an insomnia clinic. In the meantime, the budding couple\'s respective children (nepotism alert: Bogdanovich\'s daughters) spew cute and pick up some fairly disturbing pointers on \'love\' while observing their parents. (Ms. Hepburn, drawing on her dignity, manages to rise above the proceedings- but she has the monumental challenge of playing herself, ostensibly.) Everybody looks great, but so what? It\'s a movie and we can expect that much, if that\'s what you\'re looking for you\'d be better off picking up a copy of Vogue.<br /><br />Oh- and it has to be mentioned that Colleen Camp thoroughly annoys, even apart from her singing, which, while competent, is wholly unconvincing... the country and western numbers are woefully mismatched with the standards on the soundtrack. Surely this is NOT what Gershwin (who wrote the song from which the movie\'s title is derived)
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html
67bacbbdb7bf-9
had in mind; his stage musicals of the 20\'s may have been slight, but at least they were long on charm. "They All Laughed" tries to coast on its good intentions, but nobody- least of all Peter Bogdanovich - has the good sense to put on the brakes.<br /><br />Due in no small part to the tragic death of Dorothy Stratten, this movie has a special place in the heart of Mr. Bogdanovich- he even bought it back from its producers, then distributed it on his own and went bankrupt when it didn\'t prove popular. His rise and fall is among the more sympathetic and tragic of Hollywood stories, so there\'s no joy in criticizing the film... there _is_ real emotional investment in Ms. Stratten\'s scenes. But "Laughed" is a faint echo of "The Last Picture Show", "Paper Moon" or "What\'s Up, Doc"- following "Daisy Miller" and "At Long Last Love", it was a thundering confirmation of the phase from which P.B. has never emerged.<br /><br />All in all, though, the movie is harmless, only a waste of rental. I want to watch people having a good time, I\'ll go to the park on a sunny day. For filmic expressions of joy and love, I\'ll stick to Ernest Lubitsch and Jaques Demy...', metadata={'label': 0}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html
67bacbbdb7bf-10
Document(page_content="This is said to be a personal film for Peter Bogdonavitch. He based it on his life but changed things around to fit the characters, who are detectives. These detectives date beautiful models and have no problem getting them. Sounds more like a millionaire playboy filmmaker than a detective, doesn't it? This entire movie was written by Peter, and it shows how out of touch with real people he was. You're supposed to write what you know, and he did that, indeed. And leaves the audience bored and confused, and jealous, for that matter. This is a curio for people who want to see Dorothy Stratten, who was murdered right after filming. But Patti Hanson, who would, in real life, marry Keith Richards, was also a model, like Stratten, but is a lot better and has a more ample part. In fact, Stratten's part seemed forced; added. She doesn't have a lot to do with the story, which is pretty convoluted to begin with. All in all, every character in this film is somebody that very few people can relate with, unless you're millionaire from Manhattan with beautiful supermodels at your beckon call. For the rest of us, it's an irritating snore fest. That's what happens when you're out of touch. You entertain your few friends with inside jokes, and bore all the rest.", metadata={'label': 0}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html
67bacbbdb7bf-11
Document(page_content='It was great to see some of my favorite stars of 30 years ago including John Ritter, Ben Gazarra and Audrey Hepburn. They looked quite wonderful. But that was it. They were not given any characters or good lines to work with. I neither understood or cared what the characters were doing.<br /><br />Some of the smaller female roles were fine, Patty Henson and Colleen Camp were quite competent and confident in their small sidekick parts. They showed some talent and it is sad they didn\'t go on to star in more and better films. Sadly, I didn\'t think Dorothy Stratten got a chance to act in this her only important film role.<br /><br />The film appears to have some fans, and I was very open-minded when I started watching it. I am a big Peter Bogdanovich fan and I enjoyed his last movie, "Cat\'s Meow" and all his early ones from "Targets" to "Nickleodeon". So, it really surprised me that I was barely able to keep awake watching this one.<br /><br />It is ironic that this movie is about a detective agency where the detectives and clients get romantically involved with each other. Five years later, Bogdanovich\'s ex-girlfriend, Cybil Shepherd had a hit television series called "Moonlighting" stealing the story idea from Bogdanovich. Of course, there was a great difference in that the series relied on tons of witty dialogue, while this tries to make do with slapstick and a few screwball lines.<br /><br />Bottom line: It ain\'t no "Paper Moon" and only a very pale version of "What\'s Up, Doc".', metadata={'label': 0}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html
67bacbbdb7bf-12
Document(page_content="I can't believe that those praising this movie herein aren't thinking of some other film. I was prepared for the possibility that this would be awful, but the script (or lack thereof) makes for a film that's also pointless. On the plus side, the general level of craft on the part of the actors and technical crew is quite competent, but when you've got a sow's ear to work with you can't make a silk purse. Ben G fans should stick with just about any other movie he's been in. Dorothy S fans should stick to Galaxina. Peter B fans should stick to Last Picture Show and Target. Fans of cheap laughs at the expense of those who seem to be asking for it should stick to Peter B's amazingly awful book, Killing of the Unicorn.", metadata={'label': 0}), Document(page_content='Never cast models and Playboy bunnies in your films! Bob Fosse\'s "Star 80" about Dorothy Stratten, of whom Bogdanovich was obsessed enough to have married her SISTER after her murder at the hands of her low-life husband, is a zillion times more interesting than Dorothy herself on the silver screen. Patty Hansen is no actress either..I expected to see some sort of lost masterpiece a la Orson Welles but instead got Audrey Hepburn cavorting in jeans and a god-awful "poodlesque" hair-do....Very disappointing...."Paper Moon" and "The Last Picture Show" I could watch again and again. This clunker I could barely sit through once. This movie was reputedly not released because of the brouhaha surrounding Ms. Stratten\'s tawdry death; I think the real reason was because it was so bad!', metadata={'label': 0}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html
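The entries above are Document objects produced by LangChain's Hugging Face dataset loader: each review's text becomes page_content, and the remaining dataset columns (here, the sentiment label) land in metadata. Below is a minimal sketch of how output like this is generated. The dataset name ("imdb") and page-content column ("text") are assumptions inferred from the output shown, and the loader usage reflects the langchain API as documented on this page's vintage; the final filtering step is a hypothetical illustration, not part of the original example.
from langchain.document_loaders import HuggingFaceDatasetLoader
# Load the IMDB dataset from the Hugging Face Hub. The named column becomes
# page_content; all other columns (e.g. "label") are stored in metadata.
dataset_name = "imdb"
page_content_column = "text"
loader = HuggingFaceDatasetLoader(dataset_name, page_content_column)
data = loader.load()
# Each loaded entry is a Document like those shown above. The sentiment
# label survives in metadata, so it can be used downstream, for example
# to select only the negative reviews (hypothetical illustration):
negative_reviews = [doc for doc in data if doc.metadata["label"] == 0]
Keeping the label in metadata rather than in page_content means the raw review text stays clean for embedding or indexing, while the structured field remains available for filtering.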