8f5aa8608be3-29
Document(page_content="1.16 Landlord 's Notice Address . DHA Group , Suite 1010 , 111 Bauer Dr , Oakland , New Jersey , 07436 , with a copy to the Building Management Office at the Project , Attention: On - Site Property Manager .", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:PercentageRent/docset:NoticeAddress[2]/docset:LandlordsNoticeAddress-section/docset:LandlordsNoticeAddress[2]', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'LandlordsNoticeAddress', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
8f5aa8608be3-30
Document(page_content='1.6 Rentable Area of the Premises. 13,500 square feet . This square footage figure includes an add-on factor for Common Areas in the Building and has been agreed upon by the parties as final and correct and is not subject to challenge or dispute by either party.', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:Premises[2]/docset:RentableAreaofthePremises-section/docset:RentableAreaofthePremises', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'RentableAreaofthePremises', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'})]} This time the answer is correct, since the self-querying retriever created a filter on the Landlord attribute of the metadata, correctly filtering to documents that are specifically about the DHA Group landlord. The resulting source chunks are all relevant to this landlord, which improves answer accuracy even though the landlord is not directly mentioned in the specific chunk that contains the correct answer.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
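For reference, a minimal sketch of the self-querying setup described above. This is not from the original page: it assumes the Docugami chunks have already been embedded into a Chroma vectorstore named vectordb, that an OpenAI key is configured, and the AttributeInfo descriptions are illustrative.

from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo

# Describe the Docugami-provided metadata fields so the LLM can build filters on them
metadata_field_info = [
    AttributeInfo(name="Landlord", description="The landlord of the lease", type="string"),
    AttributeInfo(name="Tenant", description="The tenant of the lease", type="string"),
]

# The retriever asks the LLM to turn a question into a search query plus a metadata filter
retriever = SelfQueryRetriever.from_llm(
    llm=OpenAI(temperature=0),
    vectorstore=vectordb,  # assumed: a Chroma store built from the Docugami chunks
    document_contents="Contents of a commercial lease agreement",
    metadata_field_info=metadata_field_info,
    verbose=True,
)
docs = retriever.get_relevant_documents("What is the notice address for the DHA Group landlord?")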
664391deccb9-0
Markdown# Markdown is a lightweight markup language for creating formatted text using a plain-text editor. This covers how to load markdown documents into a document format that we can use downstream. # !pip install unstructured > /dev/null from langchain.document_loaders import UnstructuredMarkdownLoader markdown_path = "../../../../../README.md" loader = UnstructuredMarkdownLoader(markdown_path) data = loader.load() data
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html
664391deccb9-1
[Document(page_content="ð\x9f¦\x9cï¸\x8fð\x9f”\x97 LangChain\n\nâ\x9a¡ Building applications with LLMs through composability â\x9a¡\n\nLooking for the JS/TS version? Check out LangChain.js.\n\nProduction Support: As you move your LangChains into production, we'd love to offer more comprehensive support.\nPlease fill out this form and we'll set up a dedicated support Slack channel.\n\nQuick Install\n\npip install langchain\nor\nconda install langchain -c conda-forge\n\nð\x9f¤” What is this?\n\nLarge language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. However, using these LLMs in isolation is often insufficient for creating a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.\n\nThis library aims to assist in the development of those types of applications. Common examples of these applications include:\n\nâ\x9d“ Question Answering over specific documents\n\nDocumentation\n\nEnd-to-end Example: Question Answering over Notion Database\n\nð\x9f’¬ Chatbots\n\nDocumentation\n\nEnd-to-end Example: Chat-LangChain\n\nð\x9f¤\x96 Agents\n\nDocumentation\n\nEnd-to-end Example: GPT+WolframAlpha\n\nð\x9f“\x96 Documentation\n\nPlease see here for full documentation on:\n\nGetting started (installation, setting up the environment, simple examples)\n\nHow-To examples (demos, integrations, helper functions)\n\nReference (full API docs)\n\nResources (high-level explanation of core concepts)\n\nð\x9f\x9a\x80 What can this help
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html
664391deccb9-2
explanation of core concepts)\n\n🚀 What can this help with?\n\nThere are six main areas that LangChain is designed to help with.\nThese are, in increasing order of complexity:\n\n📃 LLMs and Prompts:\n\nThis includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.\n\n🔗 Chains:\n\nChains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\n\n📚 Data Augmented Generation:\n\nData Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.\n\n🤖 Agents:\n\nAgents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.\n\n🧠 Memory:\n\nMemory refers to persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\n\n🧐 Evaluation:\n\n[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html
664391deccb9-3
is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\n\nFor more information on these concepts, please see our full documentation.\n\n💁 Contributing\n\nAs an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.\n\nFor detailed information on how to contribute, see here.", metadata={'source': '../../../../../README.md'})]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html
664391deccb9-4
Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredMarkdownLoader(markdown_path, mode="elements") data = loader.load() data[0] Document(page_content='🦜️🔗 LangChain', metadata={'source': '../../../../../README.md', 'page_number': 1, 'category': 'Title'})
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html
693211809f33-0
Microsoft Word# Microsoft Word is a word processor developed by Microsoft. This covers how to load Word documents into a document format that we can use downstream. Using Docx2txt# Load .docx using Docx2txt into a document. from langchain.document_loaders import Docx2txtLoader loader = Docx2txtLoader("example_data/fake.docx") data = loader.load() data [Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'})] Using Unstructured# from langchain.document_loaders import UnstructuredWordDocumentLoader loader = UnstructuredWordDocumentLoader("example_data/fake.docx") data = loader.load() data [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx'}, lookup_index=0)] Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredWordDocumentLoader("example_data/fake.docx", mode="elements") data = loader.load() data[0] Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx', 'filename': 'fake.docx', 'category': 'Title'}, lookup_index=0)
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/microsoft_word.html
88db52f5ee62-0
Pandas DataFrame# This notebook goes over how to load data from a pandas DataFrame. #!pip install pandas import pandas as pd df = pd.read_csv('example_data/mlb_teams_2012.csv') df.head()
        Team  "Payroll (millions)"  "Wins"
0  Nationals                 81.34      98
1       Reds                 82.20      97
2    Yankees                197.96      95
3     Giants                117.62      94
4     Braves                 83.31      94
from langchain.document_loaders import DataFrameLoader loader = DataFrameLoader(df, page_content_column="Team") loader.load() [Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}), Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}), Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}), Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pandas_dataframe.html
88db52f5ee62-1
Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}), Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}), Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}), Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pandas_dataframe.html
88db52f5ee62-2
Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}), Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}), Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}), Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}), Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}), Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pandas_dataframe.html
a778ec86929e-0
YouTube transcripts# YouTube is an online video sharing and social media platform created by Google. This notebook covers how to load documents from YouTube transcripts. from langchain.document_loaders import YoutubeLoader # !pip install youtube-transcript-api loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=QsYGlZkevEg") loader.load() Add video info# # ! pip install pytube loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True) loader.load() YouTube loader from Google Cloud# Prerequisites# Create a Google Cloud project or use an existing project Enable the YouTube API Authorize credentials for desktop app pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib youtube-transcript-api 🧑 Instructions for ingesting your YouTube data# By default, the GoogleApiClient expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_path keyword argument. The same applies to token.json. Note that token.json will be created automatically the first time you use the loader. GoogleApiYoutubeLoader can load from a channel name or a list of video ids. Note that, depending on your setup, the service_account_path may need to be set. See here for more details. from langchain.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader # Init the GoogleApiClient from pathlib import Path google_api_client = GoogleApiClient(credentials_path=Path("your_path_creds.json"))
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/youtube_transcript.html
a778ec86929e-1
# Use a Channel youtube_loader_channel = GoogleApiYoutubeLoader(google_api_client=google_api_client, channel_name="Reducible", captions_language="en") # Use Youtube Ids youtube_loader_ids = GoogleApiYoutubeLoader(google_api_client=google_api_client, video_ids=["TrdevFK_am4"], add_video_info=True) # returns a list of Documents youtube_loader_channel.load()
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/youtube_transcript.html
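For completeness, the video-ids loader constructed above can be run the same way — a sketch using the youtube_loader_ids object from the snippet above:

# returns a list of Documents for the listed video ids
youtube_loader_ids.load()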
6bfc0f6691dc-0
Joplin# Joplin is an open source note-taking app. Capture your thoughts and securely access them from any device. This notebook covers how to load documents from a Joplin database. Joplin has a REST API for accessing its local database. This loader uses the API to retrieve all notes in the database and their metadata. This requires an access token that can be obtained from the app by following these steps: Open the Joplin app. The app must stay open while the documents are being loaded. Go to settings / options and select “Web Clipper”. Make sure that the Web Clipper service is enabled. Under “Advanced Options”, copy the authorization token. You may either initialize the loader directly with the access token, or store it in the environment variable JOPLIN_ACCESS_TOKEN. An alternative to this approach is to export Joplin’s note database to Markdown files (optionally, with Front Matter metadata) and use a Markdown loader, such as ObsidianLoader, to load them. from langchain.document_loaders import JoplinLoader loader = JoplinLoader(access_token="<access-token>") docs = loader.load()
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/joplin.html
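As noted above, the access token can instead be supplied via the JOPLIN_ACCESS_TOKEN environment variable — a sketch, assuming the loader falls back to that variable when no token argument is given:

import os
from langchain.document_loaders import JoplinLoader

os.environ["JOPLIN_ACCESS_TOKEN"] = "<access-token>"  # placeholder token from the Joplin app
loader = JoplinLoader()  # picks up the token from the environment
docs = loader.load()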
701bbb27666f-0
Unstructured File# This notebook covers how to use the Unstructured package to load files of many types. Unstructured currently supports loading of text files, PowerPoint files, HTML, PDFs, images, and more. # # Install package !pip install "unstructured[local-inference]" !pip install layoutparser[layoutmodels,tesseract] # # Install other dependencies # # https://github.com/Unstructured-IO/unstructured/blob/main/docs/source/installing.rst # !brew install libmagic # !brew install poppler # !brew install tesseract # # If parsing xml / html documents: # !brew install libxml2 # !brew install libxslt # import nltk # nltk.download('punkt') from langchain.document_loaders import UnstructuredFileLoader loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt") docs = loader.load() docs[0].page_content[:400] 'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\n\nLast year COVID-19 kept us apart. This year we are finally together again.\n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.\n\nWith a duty to one another to the American people to the Constit' Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt", mode="elements") docs = loader.load() docs[:5]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
701bbb27666f-1
[Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Last year COVID-19 kept us apart. This year we are finally together again.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='With a duty to one another to the American people to the Constitution.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='And with an unwavering resolve that freedom will always triumph over tyranny.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)] Define a Partitioning Strategy# The Unstructured document loader allows users to pass in a strategy parameter that lets unstructured know how to partition the document. Currently supported strategies are "hi_res" (the default) and "fast". Hi res partitioning strategies are more accurate, but take longer to process. Fast strategies partition the document more quickly, but trade off accuracy. Not all document types have separate hi res and fast partitioning strategies. For those document types, the strategy kwarg is ignored. In some cases, the high res strategy will fall back to fast if a dependency is missing (i.e. a model for document partitioning). You can see how to apply a strategy to an UnstructuredFileLoader below.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
701bbb27666f-2
from langchain.document_loaders import UnstructuredFileLoader loader = UnstructuredFileLoader("layout-parser-paper-fast.pdf", strategy="fast", mode="elements") docs = loader.load() docs[:5] [Document(page_content='1', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='0', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'Title'}, lookup_index=0)] PDF Example# Processing PDF documents works exactly the same way. Unstructured detects the file type and extracts the same types of elements. !wget https://raw.githubusercontent.com/Unstructured-IO/unstructured/main/example-docs/layout-parser-paper.pdf -P "../../" loader = UnstructuredFileLoader("./example_data/layout-parser-paper.pdf", mode="elements") docs = loader.load() docs[:5]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
701bbb27666f-3
[Document(page_content='LayoutParser : A Unified Toolkit for Deep Learning Based Document Image Analysis', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Zejiang Shen 1 ( (ea)\n ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and Weining Li 5', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Allen Institute for AI shannons@allenai.org', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Brown University ruochen zhang@brown.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Harvard University { melissadell,jacob carlson } @fas.harvard.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0)] Unstructured API# If you want to get up and running with less set up, you can simply run pip install unstructured and use UnstructuredAPIFileLoader or UnstructuredAPIFileIOLoader. That will process your document using the hosted Unstructured API. Note that currently (as of 11 May 2023) the Unstructured API is open, but it will soon require an API key. The Unstructured documentation page will have instructions on how to generate an API key once they’re available. Check out the instructions here if you’d like to self-host the Unstructured API or run it locally. from langchain.document_loaders import UnstructuredAPIFileLoader filenames = ["example_data/fake.docx", "example_data/fake-email.eml"]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
701bbb27666f-4
loader = UnstructuredAPIFileLoader( file_path=filenames[0], api_key="FAKE_API_KEY", ) docs = loader.load() docs[0] Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'}) You can also batch multiple files through the Unstructured API in a single API call using UnstructuredAPIFileLoader. loader = UnstructuredAPIFileLoader( file_path=filenames, api_key="FAKE_API_KEY", ) docs = loader.load() docs[0] Document(page_content='Lorem ipsum dolor sit amet.\n\nThis is a test email to use for unit tests.\n\nImportant points:\n\nRoses are red\n\nViolets are blue', metadata={'source': ['example_data/fake.docx', 'example_data/fake-email.eml']})
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
c1fd815dc26e-0
Psychic# This notebook covers how to load documents from Psychic. See here for more details. Prerequisites# Follow the Quick Start section in this document Log into the Psychic dashboard and get your secret key Install the frontend react library into your web app and have a user authenticate a connection. The connection will be created using the connection id that you specify. Loading documents# Use the PsychicLoader class to load in documents from a connection. Each connection has a connector id (corresponding to the SaaS app that was connected) and a connection id (which you passed in to the frontend library). # Uncomment this to install psychicapi if you don't already have it installed !poetry run pip -q install psychicapi from langchain.document_loaders import PsychicLoader from psychicapi import ConnectorId # Create a document loader for google drive. We can also load from other connectors by setting the connector_id to the appropriate value e.g. ConnectorId.notion.value # This loader uses our test credentials google_drive_loader = PsychicLoader( api_key="7ddb61c1-8b6a-4d31-a58e-30d1c9ea480e", connector_id=ConnectorId.gdrive.value, connection_id="google-test" ) documents = google_drive_loader.load() Converting the docs to embeddings# We can now convert these documents into embeddings and store them in a vector database like Chroma. from langchain.embeddings.openai import OpenAIEmbeddings
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/psychic.html
c1fd815dc26e-1
from langchain.vectorstores import Chroma from langchain.text_splitter import CharacterTextSplitter from langchain.llms import OpenAI from langchain.chains import RetrievalQAWithSourcesChain text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() docsearch = Chroma.from_documents(texts, embeddings) chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever()) chain({"question": "what is psychic?"}, return_only_outputs=True)
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/psychic.html
ce07f9b28e1c-0
EverNote# EverNote is intended for archiving and creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual “notebooks” and can be tagged, annotated, edited, searched, and exported. This notebook shows how to load an Evernote export file (.enex) from disk. By default, all notes in the export are combined into a single document; pass load_single_document=False to create a document for each note. # lxml and html2text are required to parse EverNote notes # !pip install lxml # !pip install html2text from langchain.document_loaders import EverNoteLoader # By default all notes are combined into a single Document loader = EverNoteLoader("example_data/testing.enex") loader.load() [Document(page_content='testing this\n\nwhat happens?\n\nto the world?**Jan - March 2022**', metadata={'source': 'example_data/testing.enex'})] # It's likely more useful to return a Document for each note loader = EverNoteLoader("example_data/testing.enex", load_single_document=False) loader.load() [Document(page_content='testing this\n\nwhat happens?\n\nto the world?', metadata={'title': 'testing', 'created': time.struct_time(tm_year=2023, tm_mon=2, tm_mday=9, tm_hour=3, tm_min=47, tm_sec=46, tm_wday=3, tm_yday=40, tm_isdst=-1), 'updated': time.struct_time(tm_year=2023, tm_mon=2, tm_mday=9, tm_hour=3, tm_min=53, tm_sec=28, tm_wday=3, tm_yday=40, tm_isdst=-1), 'note-attributes.author': 'Harrison Chase', 'source': 'example_data/testing.enex'}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/evernote.html
ce07f9b28e1c-1
Document(page_content='**Jan - March 2022**', metadata={'title': 'Summer Training Program', 'created': time.struct_time(tm_year=2022, tm_mon=12, tm_mday=27, tm_hour=1, tm_min=59, tm_sec=48, tm_wday=1, tm_yday=361, tm_isdst=-1), 'note-attributes.author': 'Mike McGarry', 'note-attributes.source': 'mobile.iphone', 'source': 'example_data/testing.enex'})]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/evernote.html
c91c21f871f7-0
Iugu# Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications. This notebook covers how to load data from the Iugu REST API into a format that can be ingested into LangChain, along with example usage for vectorization. import os from langchain.document_loaders import IuguLoader from langchain.indexes import VectorstoreIndexCreator The Iugu API requires an access token, which can be found inside of the Iugu dashboard. This document loader also requires a resource option which defines what data you want to load; see the Iugu API documentation for the available resources. iugu_loader = IuguLoader("charges") # Create a vectorstore retriever from the loader # see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details index = VectorstoreIndexCreator().from_loaders([iugu_loader]) iugu_doc_retriever = index.vectorstore.as_retriever()
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/iugu.html
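The retriever created above can then be queried like any other vectorstore retriever — a sketch; the query string is illustrative:

# Retrieve Iugu documents relevant to a free-text question
relevant_docs = iugu_doc_retriever.get_relevant_documents("recent charges")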
3524fc7ce6e3-0
Roam# Roam is a note-taking tool for networked thought, designed to create a personal knowledge base. This notebook covers how to load documents from a Roam database. This takes a lot of inspiration from the example repo here. 🧑 Instructions for ingesting your own dataset# Export your dataset from Roam Research. You can do this by clicking on the three dots in the upper right hand corner and then clicking Export. When exporting, make sure to select the Markdown & CSV format option. This will produce a .zip file in your Downloads folder. Move the .zip file into this repository. Run the following command to unzip the zip file (replace the Export... with your own file name as needed). unzip Roam-Export-1675782732639.zip -d Roam_DB from langchain.document_loaders import RoamLoader loader = RoamLoader("Roam_DB") docs = loader.load()
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/roam.html
c521311bc7ee-0
Git# Git is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development. This notebook shows how to load text files from a Git repository. Load existing repository from disk# !pip install GitPython from git import Repo repo = Repo.clone_from( "https://github.com/hwchase17/langchain", to_path="./example_data/test_repo1" ) branch = repo.head.reference from langchain.document_loaders import GitLoader loader = GitLoader(repo_path="./example_data/test_repo1/", branch=branch) data = loader.load() len(data) print(data[0]) page_content='.venv\n.github\n.git\n.mypy_cache\n.pytest_cache\nDockerfile' metadata={'file_path': '.dockerignore', 'file_name': '.dockerignore', 'file_type': ''} Clone repository from url# from langchain.document_loaders import GitLoader loader = GitLoader( clone_url="https://github.com/hwchase17/langchain", repo_path="./example_data/test_repo2/", branch="master", ) data = loader.load() len(data) 1074 Filtering files to load# from langchain.document_loaders import GitLoader # eg. loading only python files loader = GitLoader(repo_path="./example_data/test_repo1/", file_filter=lambda file_path: file_path.endswith(".py"))
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/git.html
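The filtered loader is then run like any other — a short sketch continuing the file_filter example above:

# Load only the Python files matched by the filter
data = loader.load()
len(data)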
0ca9261047e5-0
Microsoft OneDrive# Microsoft OneDrive (formerly SkyDrive) is a file hosting service operated by Microsoft. This notebook covers how to load documents from OneDrive. Currently, only docx, doc, and pdf files are supported. Prerequisites# Register an application following the Microsoft identity platform instructions. When registration finishes, the Azure portal displays the app registration’s Overview pane. You see the Application (client) ID. Also called the client ID, this value uniquely identifies your application in the Microsoft identity platform. While following the steps in item 1, you can set the redirect URI as http://localhost:8000/callback While following the steps in item 1, generate a new password (client_secret) under the Application Secrets section. Follow the instructions at this document to add the following SCOPES (offline_access and Files.Read.All) to your application. Visit the Graph Explorer Playground to obtain your OneDrive ID. The first step is to ensure you are logged in with the account associated with your OneDrive account. Then you need to make a request to https://graph.microsoft.com/v1.0/me/drive and the response will return a payload with a field id that holds the ID of your OneDrive account. You need to install the o365 package using the command pip install o365. At the end of the steps you must have the following values: CLIENT_ID CLIENT_SECRET DRIVE_ID
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/microsoft_onedrive.html
0ca9261047e5-1
🧑 Instructions for ingesting your documents from OneDrive# 🔑 Authentication# By default, the OneDriveLoader expects the values of CLIENT_ID and CLIENT_SECRET to be stored as environment variables named O365_CLIENT_ID and O365_CLIENT_SECRET respectively. You can pass those environment variables through a .env file at the root of your application or using the following command in your script. os.environ['O365_CLIENT_ID'] = "YOUR CLIENT ID" os.environ['O365_CLIENT_SECRET'] = "YOUR CLIENT SECRET" This loader uses an authentication flow called on behalf of a user. It is a two-step authentication with user consent. When you instantiate the loader, it will print a URL that the user must visit to give consent to the app on the required permissions. The user must then visit this URL and give consent to the application. Then the user must copy the resulting page URL and paste it back on the console. The method will then return True if the login attempt was successful. from langchain.document_loaders.onedrive import OneDriveLoader loader = OneDriveLoader(drive_id="YOUR DRIVE ID") Once the authentication has been done, the loader will store a token (o365_token.txt) in the ~/.credentials/ folder. This token can be used later to authenticate without the copy/paste steps explained earlier. To use this token for authentication, you need to set the auth_with_token parameter to True when instantiating the loader. from langchain.document_loaders.onedrive import OneDriveLoader loader = OneDriveLoader(drive_id="YOUR DRIVE ID", auth_with_token=True) 🗂️ Documents loader# 📑 Loading documents from a OneDrive Directory# OneDriveLoader can load documents from a specific folder within your OneDrive. For instance, suppose you want to load all documents that are stored in the Documents/clients folder within your OneDrive.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/microsoft_onedrive.html
0ca9261047e5-2
from langchain.document_loaders.onedrive import OneDriveLoader loader = OneDriveLoader(drive_id="YOUR DRIVE ID", folder_path="Documents/clients", auth_with_token=True) documents = loader.load() 📑 Loading documents from a list of Documents IDs# Another possibility is to provide a list of object_ids, one for each document you want to load. For that, you will need to query the Microsoft Graph API to find all the document IDs that you are interested in. This link provides a list of endpoints that will be helpful for retrieving the document IDs. For instance, to retrieve information about all objects that are stored at the root of the Documents folder, you need to make a request to: https://graph.microsoft.com/v1.0/drives/{YOUR DRIVE ID}/root/children. Once you have the list of IDs that you are interested in, then you can instantiate the loader with the following parameters. from langchain.document_loaders.onedrive import OneDriveLoader loader = OneDriveLoader(drive_id="YOUR DRIVE ID", object_ids=["ID_1", "ID_2"], auth_with_token=True) documents = loader.load()
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/microsoft_onedrive.html
bcfbd1a720d2-0
WhatsApp Chat# WhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content. This notebook covers how to load data from WhatsApp chats into a format that can be ingested into LangChain. from langchain.document_loaders import WhatsAppChatLoader loader = WhatsAppChatLoader("example_data/whatsapp_chat.txt") loader.load()
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/whatsapp_chat.html
ca2b85522ef3-0
Confluence# Confluence is a wiki collaboration platform that saves and organizes all of the project-related material. Confluence is a knowledge base that primarily handles content management activities. A loader for Confluence pages currently supports both username/api_key and OAuth2 login. See instructions. Specify a list of page_ids and/or a space_key to load the corresponding pages into Document objects; if both are specified, the union of both sets will be returned. You can also specify a boolean include_attachments to include attachments. This is set to False by default; if set to True, all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG, SVG, Word and Excel. Hint: space_key and page_id can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id> #!pip install atlassian-python-api from langchain.document_loaders import ConfluenceLoader loader = ConfluenceLoader( url="https://yoursite.atlassian.com/wiki", username="me", api_key="12345" ) documents = loader.load(space_key="SPACE", include_attachments=True, limit=50)
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/confluence.html
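As described above, specific pages can also be requested by id — a sketch, with illustrative page ids:

# Load two specific pages instead of a whole space
documents = loader.load(page_ids=["123456", "7891011"], include_attachments=False)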
ee627deb1191-0
Wikipedia# Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history. This notebook shows how to load wiki pages from wikipedia.org into the Document format that we use downstream. Installation# First, you need to install the wikipedia Python package. #!pip install wikipedia Examples# WikipediaLoader has these arguments: query: free text which is used to find documents in Wikipedia optional lang: default="en". Use it to search in a specific language part of Wikipedia optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now. optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when the document was published/last updated), title, Summary. If True, other fields are also downloaded. from langchain.document_loaders import WikipediaLoader docs = WikipediaLoader(query='HUNTER X HUNTER', load_max_docs=2).load() len(docs) docs[0].metadata # meta-information of the Document docs[0].page_content[:400] # the content of the Document
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/wikipedia.html
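A sketch exercising the optional arguments listed above (the query and language choice are illustrative):

from langchain.document_loaders import WikipediaLoader

docs = WikipediaLoader(
    query='HUNTER X HUNTER',
    lang='de',                     # search the German-language Wikipedia
    load_max_docs=2,               # keep the download small for experiments
    load_all_available_meta=True,  # include all metadata fields, not just the defaults
).load()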
9fd963b17fd0-0
CSV# A comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. Load csv data with a single row per document. from langchain.document_loaders.csv_loader import CSVLoader loader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv') data = loader.load() print(data)
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-1
[Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll (millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-2
Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 132.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='Team: Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-3
Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-4
Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-5
Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0)]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-6
Customizing the csv parsing and loading# See the csv module documentation for more information on which csv args are supported. loader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv', csv_args={ 'delimiter': ',', 'quotechar': '"', 'fieldnames': ['MLB Team', 'Payroll in millions', 'Wins'] }) data = loader.load() print(data)
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-7
[Document(page_content='MLB Team: Team\nPayroll in millions: "Payroll (millions)"\nWins: "Wins"', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='MLB Team: Nationals\nPayroll in millions: 81.34\nWins: 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='MLB Team: Reds\nPayroll in millions: 82.20\nWins: 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='MLB Team: Yankees\nPayroll in millions: 197.96\nWins: 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='MLB Team: Giants\nPayroll in millions: 117.62\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='MLB Team: Braves\nPayroll in millions: 83.31\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='MLB Team: Athletics\nPayroll in millions: 55.37\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6},
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-8
lookup_index=0), Document(page_content='MLB Team: Rangers\nPayroll in millions: 120.51\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='MLB Team: Orioles\nPayroll in millions: 81.43\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='MLB Team: Rays\nPayroll in millions: 64.17\nWins: 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='MLB Team: Angels\nPayroll in millions: 154.49\nWins: 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='MLB Team: Tigers\nPayroll in millions: 132.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='MLB Team: Cardinals\nPayroll in millions: 110.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='MLB Team: Dodgers\nPayroll in millions: 95.14\nWins: 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-9
Document(page_content='MLB Team: White Sox\nPayroll in millions: 96.92\nWins: 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='MLB Team: Brewers\nPayroll in millions: 97.65\nWins: 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='MLB Team: Phillies\nPayroll in millions: 174.54\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='MLB Team: Diamondbacks\nPayroll in millions: 74.28\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='MLB Team: Pirates\nPayroll in millions: 63.43\nWins: 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='MLB Team: Padres\nPayroll in millions: 55.24\nWins: 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='MLB Team: Mariners\nPayroll in millions: 81.97\nWins: 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20},
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-10
lookup_index=0), Document(page_content='MLB Team: Mets\nPayroll in millions: 93.35\nWins: 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='MLB Team: Blue Jays\nPayroll in millions: 75.48\nWins: 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='MLB Team: Royals\nPayroll in millions: 60.91\nWins: 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='MLB Team: Marlins\nPayroll in millions: 118.07\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='MLB Team: Red Sox\nPayroll in millions: 173.18\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='MLB Team: Indians\nPayroll in millions: 78.43\nWins: 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='MLB Team: Twins\nPayroll in millions: 94.08\nWins: 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27},
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-11
lookup_index=0), Document(page_content='MLB Team: Rockies\nPayroll in millions: 78.06\nWins: 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='MLB Team: Cubs\nPayroll in millions: 88.19\nWins: 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0), Document(page_content='MLB Team: Astros\nPayroll in millions: 60.65\nWins: 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 30}, lookup_index=0)]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-12
Specify a column to identify the document source# Use the source_column argument to specify the column whose value should become the source of the document created from each row. Otherwise, file_path is used as the source for all documents created from the CSV file. This is useful when using documents loaded from CSV files in chains that answer questions using sources. loader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv', source_column="Team") data = loader.load() print(data)
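As a quick illustration of the sources flow, a minimal sketch of feeding these per-row documents into a question-answering chain (illustrative only, not part of the loader; it assumes an OpenAI API key is set and uses Chroma purely as an example vectorstore):
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import Chroma
# Index the per-row documents; each document's metadata carries its 'source' (here, the team name)
docsearch = Chroma.from_documents(data, OpenAIEmbeddings())
chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), retriever=docsearch.as_retriever())
# The chain returns the answer together with the team names it used as sources
chain({"question": "How many wins did the Nationals have in 2012?"})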
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-13
[Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': 'Nationals', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': 'Reds', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': 'Yankees', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': 'Giants', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': 'Braves', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source': 'Athletics', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll (millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': 'Rangers', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': 'Orioles', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-14
7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': 'Rays', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': 'Angels', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 132.30\n"Wins": 88', lookup_str='', metadata={'source': 'Tigers', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': 'Cardinals', 'row': 11}, lookup_index=0), Document(page_content='Team: Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source': 'Dodgers', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': 'White Sox', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': 'Brewers', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': 'Phillies', 'row': 15}, lookup_index=0), Document(page_content='Team:
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-15
'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': 'Diamondbacks', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': 'Pirates', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': 'Padres', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', metadata={'source': 'Mariners', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': 'Mets', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': 'Blue Jays', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': 'Royals', 'row': 22}, lookup_index=0), Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': 'Marlins', 'row': 23}, lookup_index=0),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
9fd963b17fd0-16
metadata={'source': 'Marlins', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': 'Red Sox', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': 'Indians', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': 'Twins', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source': 'Rockies', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': 'Cubs', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': 'Astros', 'row': 29}, lookup_index=0)]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/csv.html
40b46cedbc5f-0
Blockchain# Overview# This notebook demonstrates and tests the functionality of the LangChain Document Loader for Blockchain. Initially this loader supports: Loading NFTs as Documents from NFT Smart Contracts (ERC721 and ERC1155) Ethereum Mainnet, Ethereum Testnet, Polygon Mainnet, Polygon Testnet (default is eth-mainnet) Alchemy’s getNFTsForCollection API It can be extended if the community finds value in this loader. Specifically: Additional APIs can be added (e.g. Transaction-related APIs) This Document Loader Requires: A free Alchemy API Key The output takes the following format: pageContent= Individual NFT metadata={‘source’: ‘0x1a92f7381b9f03921564a437210bb9396471050c’, ‘blockchain’: ‘eth-mainnet’, ‘tokenId’: ‘0x15’}) Load NFTs into Document Loader# # get ALCHEMY_API_KEY from https://www.alchemy.com/ alchemyApiKey = "..." Option 1: Ethereum Mainnet (default BlockchainType)# from langchain.document_loaders.blockchain import BlockchainDocumentLoader, BlockchainType contractAddress = "0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d" # Bored Ape Yacht Club contract address blockchainType = BlockchainType.ETH_MAINNET # default value, optional parameter blockchainLoader = BlockchainDocumentLoader(contract_address=contractAddress, api_key=alchemyApiKey) nfts = blockchainLoader.load() nfts[:2]
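Since each NFT becomes one Document, a quick sketch of inspecting what the loader returned (assuming the Alchemy key above is valid; the metadata keys follow the output format described in the overview):
# Each Document's metadata carries the contract address ('source'), the chain, and the token id
for nft in nfts[:2]:
    print(nft.metadata["source"], nft.metadata["blockchain"], nft.metadata["tokenId"])
    print(nft.page_content[:100])  # raw NFT metadata payload, truncated for display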
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/blockchain.html
40b46cedbc5f-1
Option 2: Polygon Mainnet# contractAddress = "0x448676ffCd0aDf2D85C1f0565e8dde6924A9A7D9" # Polygon Mainnet contract address blockchainType = BlockchainType.POLYGON_MAINNET blockchainLoader = BlockchainDocumentLoader(contract_address=contractAddress, blockchainType=blockchainType, api_key=alchemyApiKey) nfts = blockchainLoader.load() nfts[:2]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/blockchain.html
3f34bda68793-0
Google Drive# Google Drive is a file storage and synchronization service developed by Google. This notebook covers how to load documents from Google Drive. Currently, only Google Docs are supported. Prerequisites# Create a Google Cloud project or use an existing project Enable the Google Drive API Authorize credentials for desktop app pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib 🧑 Instructions for ingesting your Google Docs data# By default, the GoogleDriveLoader expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_path keyword argument. The same applies to token.json, via the token_path keyword argument. Note that token.json will be created automatically the first time you use the loader. GoogleDriveLoader can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL: Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is "1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5" Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is "1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw" !pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib from langchain.document_loaders import GoogleDriveLoader loader = GoogleDriveLoader(
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html
3f34bda68793-1
folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5", # Optional: configure whether to recursively fetch files from subfolders. Defaults to False. recursive=False ) docs = loader.load() When you pass a folder_id, by default all files of type document, sheet, and pdf are loaded. You can modify this behaviour by passing a file_types argument: loader = GoogleDriveLoader( folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5", file_types=["document", "sheet"], recursive=False )
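GoogleDriveLoader can also be pointed at explicit document ids and custom credential locations, as described in the instructions above. A minimal sketch (both file paths are hypothetical placeholders):
loader = GoogleDriveLoader(
    document_ids=["1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw"],
    credentials_path="/path/to/credentials.json",  # hypothetical path; defaults to ~/.credentials/credentials.json
    token_path="/path/to/token.json",  # hypothetical path; the file is created automatically on first use
)
docs = loader.load()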
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html
d2ba772ddee1-0
HTML# The HyperText Markup Language or HTML is the standard markup language for documents designed to be displayed in a web browser. This covers how to load HTML documents into a document format that we can use downstream. from langchain.document_loaders import UnstructuredHTMLLoader loader = UnstructuredHTMLLoader("example_data/fake-content.html") data = loader.load() data [Document(page_content='My First Heading\n\nMy first paragraph.', lookup_str='', metadata={'source': 'example_data/fake-content.html'}, lookup_index=0)] Loading HTML with BeautifulSoup4# We can also use BeautifulSoup4 to load HTML documents using the BSHTMLLoader. This will extract the text from the HTML into page_content, and the page title as title into metadata. from langchain.document_loaders import BSHTMLLoader loader = BSHTMLLoader("example_data/fake-content.html") data = loader.load() data [Document(page_content='\n\nTest Title\n\n\nMy First Heading\nMy first paragraph.\n\n\n', metadata={'source': 'example_data/fake-content.html', 'title': 'Test Title'})]
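Like the other Unstructured-based loaders covered in these pages, UnstructuredHTMLLoader should also accept mode="elements" to keep Unstructured's per-element chunks separate instead of combining them. A sketch of that pattern (mirroring the EPub example later in this document):
loader = UnstructuredHTMLLoader("example_data/fake-content.html", mode="elements")
data = loader.load()
data[0]  # a single element (e.g. a heading) rather than the combined page text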
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/html.html
f59600623a37-0
Azure Blob Storage Container# Azure Blob Storage is Microsoft’s object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data. Azure Blob Storage is designed for: Serving images or documents directly to a browser. Storing files for distributed access. Streaming video and audio. Writing to log files. Storing data for backup and restore, disaster recovery, and archiving. Storing data for analysis by an on-premises or Azure-hosted service. This notebook covers how to load document objects from a container on Azure Blob Storage. #!pip install azure-storage-blob from langchain.document_loaders import AzureBlobStorageContainerLoader loader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>") loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)] Specifying a prefix# You can also specify a prefix for more fine-grained control over what files to load. loader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>", prefix="<prefix>") loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/azure_blob_storage_container.html
e0624458ecf6-0
Apify Dataset# Apify Dataset is a scalable, append-only storage with sequential access, built for storing structured web scraping results, such as a list of products or Google SERPs, and then exporting them to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of Apify Actors—serverless cloud programs for various web scraping, crawling, and data extraction use cases. This notebook shows how to load Apify datasets into LangChain. Prerequisites# You need to have an existing dataset on the Apify platform. If you don’t have one, please first check out this notebook on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs. #!pip install apify-client First, import ApifyDatasetLoader into your source code: from langchain.document_loaders import ApifyDatasetLoader from langchain.document_loaders.base import Document Then provide a function that maps Apify dataset record fields to LangChain Document format. For example, if your dataset items are structured like this: { "url": "https://apify.com", "text": "Apify is the best web scraping and automation platform." } The mapping function in the code below will convert them to LangChain Document format, so that you can use them further with any LLM model (e.g. for question answering). loader = ApifyDatasetLoader( dataset_id="your-dataset-id", dataset_mapping_function=lambda dataset_item: Document( page_content=dataset_item["text"], metadata={"source": dataset_item["url"]} ), ) data = loader.load() An example with question answering# In this example, we use data from a dataset to answer a question.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/apify_dataset.html
e0624458ecf6-1
from langchain.docstore.document import Document from langchain.document_loaders import ApifyDatasetLoader from langchain.indexes import VectorstoreIndexCreator loader = ApifyDatasetLoader( dataset_id="your-dataset-id", dataset_mapping_function=lambda item: Document( page_content=item["text"] or "", metadata={"source": item["url"]} ), ) index = VectorstoreIndexCreator().from_loaders([loader]) query = "What is Apify?" result = index.query_with_sources(query) print(result["answer"]) print(result["sources"]) Apify is a platform for developing, running, and sharing serverless cloud programs. It enables users to create web scraping and automation tools and publish them on the Apify platform. https://docs.apify.com/platform/actors, https://docs.apify.com/platform/actors/running/actors-in-store, https://docs.apify.com/platform/security, https://docs.apify.com/platform/actors/examples
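Because the mapping function is ordinary Python, it can carry any additional record fields into the Document metadata. A sketch (the "crawled_at" key and the item["crawl_time"] field are hypothetical; use whatever fields your dataset items actually contain):
loader = ApifyDatasetLoader(
    dataset_id="your-dataset-id",
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "",
        metadata={"source": item["url"], "crawled_at": item["crawl_time"]},  # hypothetical extra field
    ),
)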
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/apify_dataset.html
fefe20c0eb7d-0
EPub# EPUB is an e-book file format that uses the “.epub” file extension. The term is short for electronic publication and is sometimes styled ePub. EPUB is supported by many e-readers, and compatible software is available for most smartphones, tablets, and computers. This covers how to load .epub documents into the Document format that we can use downstream. You’ll need to install the pandoc package for this loader to work. #!pip install pandoc from langchain.document_loaders import UnstructuredEPubLoader loader = UnstructuredEPubLoader("winter-sports.epub") data = loader.load() Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredEPubLoader("winter-sports.epub", mode="elements") data = loader.load() data[0] Document(page_content='The Project Gutenberg eBook of Winter Sports in\nSwitzerland, by E. F. Benson', lookup_str='', metadata={'source': 'winter-sports.epub', 'page_number': 1, 'category': 'Title'}, lookup_index=0)
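Because each element Document carries its category in metadata (note 'category': 'Title' in the output above), elements can be filtered after loading. A minimal sketch, assuming Unstructured's usual category names such as NarrativeText:
# Keep only narrative text, dropping titles and other structural elements
narrative = [doc for doc in data if doc.metadata.get("category") == "NarrativeText"]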
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/epub.html
bde5bdfe05ce-0
Azure Blob Storage File# Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API. This covers how to load document objects from Azure Files. #!pip install azure-storage-blob from langchain.document_loaders import AzureBlobStorageFileLoader loader = AzureBlobStorageFileLoader(conn_str='<connection string>', container='<container name>', blob_name='<blob name>') loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/azure_blob_storage_file.html
78f6255705be-0
ReadTheDocs Documentation# Read the Docs is an open-source platform that builds and hosts documentation for free software projects. It generates documentation written with the Sphinx documentation generator. This notebook covers how to load content from HTML that was generated as part of a Read-The-Docs build. For an example of this in the wild, see here. This assumes that the HTML has already been scraped into a folder. This can be done by uncommenting and running the following commands: #!pip install beautifulsoup4 #!wget -r -A.html -P rtdocs https://langchain.readthedocs.io/en/latest/ from langchain.document_loaders import ReadTheDocsLoader loader = ReadTheDocsLoader("rtdocs", features='html.parser') docs = loader.load()
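The loaded pages are typically long, so a common next step is to split them before indexing. A minimal sketch using the same CharacterTextSplitter pattern that appears elsewhere in these docs:
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
split_docs = text_splitter.split_documents(docs)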
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/readthedocs_documentation.html
67aa8207f303-0
URL# This covers how to load HTML documents from a list of URLs into a document format that we can use downstream. from langchain.document_loaders import UnstructuredURLLoader urls = [ "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023", "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023" ] loader = UnstructuredURLLoader(urls=urls) data = loader.load() Selenium URL Loader# This covers how to load HTML documents from a list of URLs using the SeleniumURLLoader. Using Selenium allows us to load pages that require JavaScript to render. Setup# To use the SeleniumURLLoader, you will need to install selenium and unstructured. from langchain.document_loaders import SeleniumURLLoader urls = [ "https://www.youtube.com/watch?v=dQw4w9WgXcQ", "https://goo.gl/maps/NDSHwePEyaHMFGwh8" ] loader = SeleniumURLLoader(urls=urls) data = loader.load() Playwright URL Loader# This covers how to load HTML documents from a list of URLs using the PlaywrightURLLoader. As in the Selenium case, Playwright allows us to load pages that need JavaScript to render. Setup# To use the PlaywrightURLLoader, you will need to install playwright and unstructured. Additionally, you will need to install the Playwright Chromium browser: # Install playwright !pip install "playwright"
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/url.html
67aa8207f303-1
!pip install "unstructured" !playwright install from langchain.document_loaders import PlaywrightURLLoader urls = [ "https://www.youtube.com/watch?v=dQw4w9WgXcQ", "https://goo.gl/maps/NDSHwePEyaHMFGwh8" ] loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"]) data = loader.load() previous Unstructured File next WebBaseLoader Contents URL Selenium URL Loader Setup Playwright URL Loader Setup By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/url.html
43835c6e617b-0
Google Cloud Storage File# Google Cloud Storage is a managed service for storing unstructured data. This covers how to load document objects from a Google Cloud Storage (GCS) file object (blob). # !pip install google-cloud-storage from langchain.document_loaders import GCSFileLoader loader = GCSFileLoader(project_name="aist", bucket="testing-hwc", blob="fake.docx") loader.load() /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmp3srlf8n8/fake.docx'}, lookup_index=0)]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_cloud_storage_file.html
34b9d39c9d85-0
Open Document Format (ODT)# The Open Document Format for Office Applications (ODF), also known as OpenDocument, is an open file format for word processing documents, spreadsheets, presentations, and graphics, using ZIP-compressed XML files. It was developed with the aim of providing an open, XML-based file format specification for office applications. The standard is developed and maintained by a technical committee in the Organization for the Advancement of Structured Information Standards (OASIS) consortium. It was based on the Sun Microsystems specification for OpenOffice.org XML, the default format for OpenOffice.org and LibreOffice. It was originally developed for StarOffice “to provide an open standard for office documents.” The UnstructuredODTLoader is used to load Open Office ODT files. from langchain.document_loaders import UnstructuredODTLoader loader = UnstructuredODTLoader("example_data/fake.odt", mode="elements") docs = loader.load() docs[0] Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.odt', 'filename': 'example_data/fake.odt', 'category': 'Title'})
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/odt.html
921dc799fadc-0
Discord# Discord is a VoIP and instant messaging social platform. Users have the ability to communicate with voice calls, video calls, text messaging, media and files in private chats or as part of communities called “servers”. A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links. Follow these steps to download your Discord data: Go to your User Settings Then go to Privacy and Safety Head over to Request all of my Data and click the Request Data button It might take 30 days for you to receive your data. You’ll receive an email at the address which is registered with Discord. That email will contain a download button that lets you download your personal Discord data. import pandas as pd import os path = input("Please enter the path to the contents of the Discord \"messages\" folder: ") li = [] for f in os.listdir(path): expected_csv_path = os.path.join(path, f, 'messages.csv') csv_exists = os.path.isfile(expected_csv_path) if csv_exists: df = pd.read_csv(expected_csv_path, index_col=None, header=0) li.append(df) df = pd.concat(li, axis=0, ignore_index=True, sort=False) from langchain.document_loaders.discord import DiscordChatLoader loader = DiscordChatLoader(df, user_id_col="ID") print(loader.load())
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/discord.html
464054771dfb-0
Stripe# Stripe is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications. This notebook covers how to load data from the Stripe REST API into a format that can be ingested into LangChain, along with example usage for vectorization. import os from langchain.document_loaders import StripeLoader from langchain.indexes import VectorstoreIndexCreator The Stripe API requires an access token, which can be found inside the Stripe dashboard. This document loader also requires a resource option which defines what data you want to load. The following resources are available: balance_transactions Documentation charges Documentation customers Documentation events Documentation refunds Documentation disputes Documentation stripe_loader = StripeLoader("charges") # Create a vectorstore retriever from the loader # see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details index = VectorstoreIndexCreator().from_loaders([stripe_loader]) stripe_doc_retriever = index.vectorstore.as_retriever()
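Before constructing the loader, the access token from the Stripe dashboard has to be made available to it. A sketch assuming the loader reads a STRIPE_ACCESS_TOKEN environment variable (the variable name is an assumption; check the loader's reference for your version, which may also accept the token as a constructor argument):
import os
os.environ["STRIPE_ACCESS_TOKEN"] = "sk_test_..."  # assumed variable name; substitute your own key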
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/stripe.html
4dfb0289ce18-0
Figma# Figma is a collaborative web application for interface design. This notebook covers how to load data from the Figma REST API into a format that can be ingested into LangChain, along with example usage for code generation. import os from langchain.document_loaders.figma import FigmaFileLoader from langchain.text_splitter import CharacterTextSplitter from langchain.chat_models import ChatOpenAI from langchain.indexes import VectorstoreIndexCreator from langchain.chains import ConversationChain, LLMChain from langchain.memory import ConversationBufferWindowMemory from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate, ) The Figma API requires an access token, node_ids, and a file key. The file key can be pulled from the URL. https://www.figma.com/file/{filekey}/sampleFilename Node IDs are also available in the URL. Click on anything and look for the ‘?node-id={node_id}’ param. Access token instructions are in the Figma help center article: https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens figma_loader = FigmaFileLoader( os.environ.get('ACCESS_TOKEN'), os.environ.get('NODE_IDS'), os.environ.get('FILE_KEY') ) # see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details index = VectorstoreIndexCreator().from_loaders([figma_loader]) figma_doc_retriever = index.vectorstore.as_retriever() def generate_code(human_input): # I have no idea if the Jon Carmack thing makes for better code. YMMV.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html
4dfb0289ce18-1
# See https://python.langchain.com/en/latest/modules/models/chat/getting_started.html for chat info system_prompt_template = """You are expert coder Jon Carmack. Use the provided design context to create idiomatic HTML/CSS code based on the user request. Everything must be inline in one file and your response must be directly renderable by the browser. Figma file nodes and metadata: {context}""" human_prompt_template = "Code the {text}. Ensure it's mobile responsive" system_message_prompt = SystemMessagePromptTemplate.from_template(system_prompt_template) human_message_prompt = HumanMessagePromptTemplate.from_template(human_prompt_template) # delete the gpt-4 model_name to use the default gpt-3.5 turbo for faster results gpt_4 = ChatOpenAI(temperature=.02, model_name='gpt-4') # Use the retriever's 'get_relevant_documents' method if needed to filter down longer docs relevant_nodes = figma_doc_retriever.get_relevant_documents(human_input) conversation = [system_message_prompt, human_message_prompt] chat_prompt = ChatPromptTemplate.from_messages(conversation) response = gpt_4(chat_prompt.format_prompt( context=relevant_nodes, text=human_input).to_messages()) return response response = generate_code("page top header") Returns the following in response.content:
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html
4dfb0289ce18-2
<!DOCTYPE html>\n<html lang="en">\n<head>\n <meta charset="UTF-8">\n <meta name="viewport" content="width=device-width, initial-scale=1.0">\n <style>\n @import url(\'https://fonts.googleapis.com/css2?family=DM+Sans:wght@500;700&family=Inter:wght@600&display=swap\');\n\n body {\n margin: 0;\n font-family: \'DM Sans\', sans-serif;\n }\n\n .header {\n display: flex;\n justify-content: space-between;\n align-items: center;\n padding: 20px;\n background-color: #fff;\n box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);\n }\n\n .header h1 {\n font-size: 16px;\n font-weight: 700;\n margin: 0;\n }\n\n .header nav {\n
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html
4dfb0289ce18-3
display: flex;\n align-items: center;\n }\n\n .header nav a {\n font-size: 14px;\n font-weight: 500;\n text-decoration: none;\n color: #000;\n margin-left: 20px;\n }\n\n @media (max-width: 768px) {\n .header nav {\n display: none;\n }\n }\n </style>\n</head>\n<body>\n <header class="header">\n <h1>Company Contact</h1>\n <nav>\n <a href="#">Lorem Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n </nav>\n </header>\n</body>\n</html>
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html
bb6bb3b54a5a-0
TOML# TOML is a file format for configuration files. It is intended to be easy to read and write, and is designed to map unambiguously to a dictionary. Its specification is open-source. TOML is implemented in many programming languages. The name TOML is an acronym for “Tom’s Obvious, Minimal Language” referring to its creator, Tom Preston-Werner. If you need to load TOML files, use the TomlLoader. from langchain.document_loaders import TomlLoader loader = TomlLoader('example_data/fake_rule.toml') rule = loader.load() rule [Document(page_content='{"internal": {"creation_date": "2023-05-01", "updated_date": "2022-05-01", "release": ["release_type"], "min_endpoint_version": "some_semantic_version", "os_list": ["operating_system_list"]}, "rule": {"uuid": "some_uuid", "name": "Fake Rule Name", "description": "Fake description of rule", "query": "process where process.name : \\"somequery\\"\\n", "threat": [{"framework": "MITRE ATT&CK", "tactic": {"name": "Execution", "id": "TA0002", "reference": "https://attack.mitre.org/tactics/TA0002/"}}]}}', metadata={'source': 'example_data/fake_rule.toml'})]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/toml.html
38414c0962d7-0
Getting Started# This notebook showcases basic functionality related to VectorStores. A key part of working with vectorstores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the embedding notebook before diving into this. This covers generic, high-level functionality related to all vector stores. from langchain.embeddings.openai import OpenAIEmbeddings from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores import Chroma with open('../../state_of_the_union.txt') as f: state_of_the_union = f.read() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_text(state_of_the_union) embeddings = OpenAIEmbeddings() docsearch = Chroma.from_texts(texts, embeddings) query = "What did the president say about Ketanji Brown Jackson" docs = docsearch.similarity_search(query) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. print(docs[0].page_content) In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/getting_started.html
38414c0962d7-1
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. Add texts# You can easily add text to a vectorstore with the add_texts method. It will return a list of document IDs (in case you need to use them downstream). docsearch.add_texts(["Ankush went to Princeton"]) ['a05e3d0c-ab40-11ed-a853-e65801318981'] query = "Where did Ankush go to college?" docs = docsearch.similarity_search(query) docs[0] Document(page_content='Ankush went to Princeton', lookup_str='', metadata={}, lookup_index=0) From Documents# We can also initialize a vectorstore from documents directly. This is useful when we use the method on the text splitter to get documents directly (handy when the original documents have associated metadata). documents = text_splitter.create_documents([state_of_the_union], metadatas=[{"source": "State of the Union"}]) docsearch = Chroma.from_documents(documents, embeddings) query = "What did the president say about Ketanji Brown Jackson" docs = docsearch.similarity_search(query) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. print(docs[0].page_content) In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/getting_started.html
38414c0962d7-2
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
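Once populated, a vectorstore is often wrapped as a retriever so that chains can query it. A minimal sketch of that pattern (the same as_retriever usage shown in other notebooks in these docs):
retriever = docsearch.as_retriever()
docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")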
https://python.langchain.com/en/latest/modules/indexes/vectorstores/getting_started.html
6a0a6b348249-0
Weaviate# Weaviate is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML models, and scale seamlessly into billions of data objects. This notebook shows how to use functionality related to the Weaviate vector database. See the Weaviate installation instructions. !pip install weaviate-client Requirement already satisfied: weaviate-client in /workspaces/langchain/.venv/lib/python3.9/site-packages (3.19.1) Requirement already satisfied: requests<2.29.0,>=2.28.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (2.28.2) Requirement already satisfied: validators<=0.21.0,>=0.18.2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (0.20.0) Requirement already satisfied: tqdm<5.0.0,>=4.59.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (4.65.0) Requirement already satisfied: authlib>=1.1.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (1.2.0) Requirement already satisfied: cryptography>=3.2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from authlib>=1.1.0->weaviate-client) (40.0.2)
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html
6a0a6b348249-1
Requirement already satisfied: charset-normalizer<4,>=2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (3.1.0) Requirement already satisfied: idna<4,>=2.5 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (3.4) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (1.26.15) Requirement already satisfied: certifi>=2017.4.17 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (2023.5.7) Requirement already satisfied: decorator>=3.4.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from validators<=0.21.0,>=0.18.2->weaviate-client) (5.1.1) Requirement already satisfied: cffi>=1.12 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from cryptography>=3.2->authlib>=1.1.0->weaviate-client) (1.15.1)
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html
6a0a6b348249-2
Requirement already satisfied: pycparser in /workspaces/langchain/.venv/lib/python3.9/site-packages (from cffi>=1.12->cryptography>=3.2->authlib>=1.1.0->weaviate-client) (2.21) We want to use OpenAIEmbeddings so we have to get the OpenAI API Key. import os import getpass os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") WEAVIATE_URL = getpass.getpass("WEAVIATE_URL:") os.environ["WEAVIATE_API_KEY"] = getpass.getpass("WEAVIATE_API_KEY:") from langchain.embeddings.openai import OpenAIEmbeddings from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores import Weaviate from langchain.document_loaders import TextLoader loader = TextLoader("../../../state_of_the_union.txt") documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) docs = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() db = Weaviate.from_documents(docs, embeddings, weaviate_url=WEAVIATE_URL, by_text=False) query = "What did the president say about Ketanji Brown Jackson" docs = db.similarity_search(query) print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html
6a0a6b348249-3
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. Similarity search with score# docs = db.similarity_search_with_score(query, by_text=False) docs[0]
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html
6a0a6b348249-4
(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'_additional': {'vector': [-0.015289668, -0.011418287, -0.018540842, ...]}}), ...
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html
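Weaviate results can also be consumed through a retriever, including with maximal marginal relevance (MMR) reranking. A minimal sketch, assuming this Weaviate vectorstore version supports search_type="mmr" in as_retriever:
retriever = db.as_retriever(search_type="mmr")  # assumption: MMR supported by this version
retriever.get_relevant_documents(query)[0]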