id | text | source
---|---|---|
fac7a986c558-1 | DocArrayHnswSearch
MyScale
ClickHouse Vector Search
Qdrant
Tigris
AwaDB
Supabase (Postgres)
OpenSearch
Pinecone
Azure Cognitive Search
Cassandra
Milvus
ElasticSearch
Marqo
DocArrayInMemorySearch
pg_embedding
FAISS
AnalyticDB
Hologres
MongoDB Atlas
Meilisearch
Question Answering Benchmarking: State of the Union Address
QA Generation
Question Answering Benchmarking: Paul Graham Essay
Data Augmented Question Answering
Agent VectorDB Question Answering Benchmarking
Question answering over group chat messages using Activeloop’s DeepLake
Structure answers with OpenAI functions
QA using Activeloop’s DeepLake
Graph QA
Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Activeloop’s Deep Lake
Use LangChain, GPT and Activeloop’s Deep Lake to work with code base
Combine agents and vector stores
Loading from LangChainHub
Retrieval QA using OpenAI functions | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.text.TextLoader.html |
f18508267cd7-0 | langchain.document_loaders.slack_directory.SlackDirectoryLoader¶
class langchain.document_loaders.slack_directory.SlackDirectoryLoader(zip_path: str, workspace_url: Optional[str] = None)[source]¶
Bases: BaseLoader
Loads documents from a Slack directory dump.
Initialize the SlackDirectoryLoader.
Parameters
zip_path (str) – The path to the Slack directory dump zip file.
workspace_url (Optional[str]) – The Slack workspace URL.
Including the URL will turn
sources into links. Defaults to None.
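Illustrative usage (not part of the original reference; the zip path and workspace URL are placeholders):

```python
from langchain.document_loaders.slack_directory import SlackDirectoryLoader

# Placeholder values: zip_path points at a Slack export archive; the optional
# workspace_url turns each document's source into a clickable link.
loader = SlackDirectoryLoader(
    zip_path="slack_export.zip",
    workspace_url="https://my-workspace.slack.com",
)
docs = loader.load()
```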
Methods
__init__(zip_path[, workspace_url])
Initialize the SlackDirectoryLoader.
lazy_load()
A lazy loader for Documents.
load()
Load and return documents from the Slack directory dump.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load and return documents from the Slack directory dump.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SlackDirectoryLoader¶
Slack | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.slack_directory.SlackDirectoryLoader.html |
24506fc4a6cc-0 | langchain.document_loaders.trello.TrelloLoader¶
class langchain.document_loaders.trello.TrelloLoader(client: TrelloClient, board_name: str, *, include_card_name: bool = True, include_comments: bool = True, include_checklist: bool = True, card_filter: Literal['closed', 'open', 'all'] = 'all', extra_metadata: Tuple[str, ...] = ('due_date', 'labels', 'list', 'closed'))[source]¶
Bases: BaseLoader
Trello loader. Reads all cards from a Trello board.
Initialize Trello loader.
Parameters
client – Trello API client.
board_name – The name of the Trello board.
include_card_name – Whether to include the name of the card in the document.
include_comments – Whether to include the comments on the card in the
document.
include_checklist – Whether to include the checklist on the card in the
document.
card_filter – Filter on card status. Valid values are “closed”, “open”,
“all”.
extra_metadata – List of additional metadata fields to include as document
metadata. Valid values are “due_date”, “labels”, “list”, “closed”.
Methods
__init__(client, board_name, *[, ...])
Initialize Trello loader.
from_credentials(board_name, *[, api_key, token])
Convenience constructor that builds TrelloClient init param for you.
lazy_load()
A lazy loader for Documents.
load()
Loads all cards from the specified Trello board.
load_and_split([text_splitter])
Load Documents and split into chunks.
classmethod from_credentials(board_name: str, *, api_key: Optional[str] = None, token: Optional[str] = None, **kwargs: Any) → TrelloLoader[source]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.trello.TrelloLoader.html |
24506fc4a6cc-1 | Convenience constructor that builds TrelloClient init param for you.
Parameters
board_name – The name of the Trello board.
api_key – Trello API key. Can also be specified as environment variable
TRELLO_API_KEY.
token – Trello token. Can also be specified as environment variable
TRELLO_TOKEN.
include_card_name – Whether to include the name of the card in the document.
include_comments – Whether to include the comments on the card in the
document.
include_checklist – Whether to include the checklist on the card in the
document.
card_filter – Filter on card status. Valid values are “closed”, “open”,
“all”.
extra_metadata – List of additional metadata fields to include as document
metadata. Valid values are “due_date”, “labels”, “list”, “closed”.
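A hedged sketch of the convenience constructor; the board name is a placeholder and the credentials are assumed to be provided via the TRELLO_API_KEY and TRELLO_TOKEN environment variables:

```python
from langchain.document_loaders.trello import TrelloLoader

# Assumes TRELLO_API_KEY and TRELLO_TOKEN are set in the environment;
# "My Board" is a placeholder board name. Extra kwargs are forwarded to __init__.
loader = TrelloLoader.from_credentials(
    "My Board",
    card_filter="open",
    extra_metadata=("due_date", "labels"),
)
docs = loader.load()
```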
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Loads all cards from the specified Trello board.
You can filter the cards, metadata and text included by using the optional
parameters.
Returns: A list of documents, one for each card in the board.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TrelloLoader¶
Trello | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.trello.TrelloLoader.html |
f6f6381b243d-0 | langchain.document_loaders.rocksetdb.ColumnNotFoundError¶
class langchain.document_loaders.rocksetdb.ColumnNotFoundError(missing_key: str, query: str)[source]¶
Bases: Exception
Column not found error.
add_note()¶
Exception.add_note(note) –
add a note to the exception
with_traceback()¶
Exception.with_traceback(tb) –
set self.__traceback__ to tb and return self.
args¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.ColumnNotFoundError.html |
1ae2e102faf7-0 | langchain.document_loaders.blockchain.BlockchainDocumentLoader¶
class langchain.document_loaders.blockchain.BlockchainDocumentLoader(contract_address: str, blockchainType: BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Optional[int] = None)[source]¶
Bases: BaseLoader
Loads elements from a blockchain smart contract into Langchain documents.
The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,
Polygon mainnet, and Polygon Mumbai testnet.
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain.
ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the
startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens
on the contract. Note that for contracts with a large number of tokens,
this may take a long time (e.g. 10k tokens is 100 requests).
Default value is false for this reason.
The max_execution_time (sec) can be set to limit the execution time
of the loader.
Future versions of this loader can:
Support additional Alchemy APIs (e.g. getTransactions, etc.)
Support additional blockchain APIs (e.g. Infura, Opensea, etc.)
Parameters
contract_address – The address of the smart contract.
blockchainType – The blockchain type.
api_key – The Alchemy API key.
startToken – The start token for pagination.
get_all_tokens – Whether to get all tokens on the contract.
max_execution_time – The maximum execution time (sec).
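For illustration only, a minimal sketch using placeholder values for the contract address and Alchemy API key:

```python
from langchain.document_loaders.blockchain import (
    BlockchainDocumentLoader,
    BlockchainType,
)

# Placeholder contract address and API key; ALCHEMY access is required.
loader = BlockchainDocumentLoader(
    contract_address="0x...",                    # address of the smart contract
    blockchainType=BlockchainType.ETH_MAINNET,   # default blockchain
    api_key="your-alchemy-api-key",
)
docs = loader.load()  # documents for the NFTs returned by the API
```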
Methods | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainDocumentLoader.html |
1ae2e102faf7-1 | max_execution_time – The maximum execution time (sec).
Methods
__init__(contract_address[, blockchainType, ...])
param contract_address
The address of the smart contract.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BlockchainDocumentLoader¶
Blockchain | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainDocumentLoader.html |
04192ba04bd8-0 | langchain.document_loaders.parsers.pdf.PyPDFium2Parser¶
class langchain.document_loaders.parsers.pdf.PyPDFium2Parser[source]¶
Bases: BaseBlobParser
Parse PDFs with PyPDFium2.
Initialize the parser.
Methods
__init__()
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
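A minimal sketch (assumes the pypdfium2 package is installed; the PDF path is a placeholder):

```python
from langchain.document_loaders.blob_loaders.schema import Blob
from langchain.document_loaders.parsers.pdf import PyPDFium2Parser

parser = PyPDFium2Parser()
blob = Blob.from_path("example.pdf")   # placeholder path
docs = parser.parse(blob)              # eager; prefer lazy_parse(blob) in production
```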
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFium2Parser.html |
e7974ed6230e-0 | langchain.document_loaders.blob_loaders.schema.BlobLoader¶
class langchain.document_loaders.blob_loaders.schema.BlobLoader[source]¶
Bases: ABC
Abstract interface for blob loader implementations.
Implementer should be able to load raw content from a storage system according
to some criteria and return the raw content lazily as a stream of blobs.
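A minimal implementer sketch, using a hypothetical loader that yields a single local file as a blob:

```python
from typing import Iterable

from langchain.document_loaders.blob_loaders.schema import Blob, BlobLoader


class SingleFileBlobLoader(BlobLoader):
    """Hypothetical example: lazily yield one blob for a single local file."""

    def __init__(self, path: str) -> None:
        self.path = path

    def yield_blobs(self) -> Iterable[Blob]:
        yield Blob.from_path(self.path)
```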
Methods
__init__()
yield_blobs()
A lazy loader for raw data represented by LangChain's Blob object.
abstract yield_blobs() → Iterable[Blob][source]¶
A lazy loader for raw data represented by LangChain’s Blob object.
Returns
A generator over blobs | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.BlobLoader.html |
f8825bbd2b51-0 | langchain.document_loaders.unstructured.UnstructuredFileLoader¶
class langchain.document_loaders.unstructured.UnstructuredFileLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredBaseLoader
Loader that uses Unstructured to load files.
The file loader uses the
unstructured partition function and will automatically detect the file
type. You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("example.pdf", mode="elements", strategy="fast",
)
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileLoader.html |
f8825bbd2b51-1 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredFileLoader¶
Unstructured
Unstructured File | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileLoader.html |
6b16ee2a8086-0 | langchain.document_loaders.toml.TomlLoader¶
class langchain.document_loaders.toml.TomlLoader(source: Union[str, Path])[source]¶
Bases: BaseLoader
A TOML document loader that inherits from the BaseLoader class.
This class can be initialized with either a single source file or a source
directory containing TOML files.
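Illustrative usage; "pyproject.toml" is only a placeholder, and a directory of .toml files works as well:

```python
from langchain.document_loaders.toml import TomlLoader

loader = TomlLoader("pyproject.toml")  # or a directory containing TOML files
docs = loader.load()
```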
Initialize the TomlLoader with a source file or directory.
Methods
__init__(source)
Initialize the TomlLoader with a source file or directory.
lazy_load()
Lazily load the TOML documents from the source file or directory.
load()
Load and return all documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazily load the TOML documents from the source file or directory.
load() → List[Document][source]¶
Load and return all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TomlLoader¶
TOML | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.toml.TomlLoader.html |
226819b1e5a0-0 | langchain.document_loaders.mediawikidump.MWDumpLoader¶
class langchain.document_loaders.mediawikidump.MWDumpLoader(file_path: Union[str, Path], encoding: Optional[str] = 'utf8', namespaces: Optional[Sequence[int]] = None, skip_redirects: Optional[bool] = False, stop_on_error: Optional[bool] = True)[source]¶
Bases: BaseLoader
Load MediaWiki dump from XML file
Example
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader(
file_path="myWiki.xml",
encoding="utf8"
)
docs = loader.load()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, chunk_overlap=0
)
texts = text_splitter.split_documents(docs)
Parameters
file_path (str) – XML local file path
encoding (str, optional) – Charset encoding, defaults to “utf8”
namespaces (List[int], optional) – The namespace of pages you want to parse.
See https://www.mediawiki.org/wiki/Help:Namespaces#Localisation
for a list of all common namespaces
skip_redirects (bool, optional) – True to skip pages that redirect to other pages,
False to keep them. False by default
stop_on_error (bool, optional) – False to skip over pages that cause parsing errors,
True to stop. True by default
Methods
__init__(file_path[, encoding, namespaces, ...])
lazy_load()
A lazy loader for Documents.
load()
Load from a file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mediawikidump.MWDumpLoader.html |
226819b1e5a0-1 | Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from a file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MWDumpLoader¶
MediaWikiDump | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mediawikidump.MWDumpLoader.html |
62dc4337a660-0 | langchain.document_loaders.concurrent.ConcurrentLoader¶
class langchain.document_loaders.concurrent.ConcurrentLoader(blob_loader: BlobLoader, blob_parser: BaseBlobParser, num_workers: int = 4)[source]¶
Bases: GenericLoader
A generic document loader that loads and parses documents concurrently.
A generic document loader.
Parameters
blob_loader – A blob loader which knows how to yield blobs
blob_parser – A blob parser which knows how to parse blobs into documents
Methods
__init__(blob_loader, blob_parser[, num_workers])
A generic document loader.
from_filesystem(path, *[, glob, suffixes, ...])
Create a concurrent generic document loader using a filesystem blob loader.
lazy_load()
Load documents lazily with concurrent parsing.
load()
Load all documents.
load_and_split([text_splitter])
Load all documents and split them into sentences.
classmethod from_filesystem(path: Union[str, Path], *, glob: str = '**/[!.]*', suffixes: Optional[Sequence[str]] = None, show_progress: bool = False, parser: Union[Literal['default'], BaseBlobParser] = 'default', num_workers: int = 4) → ConcurrentLoader[source]¶
Create a concurrent generic document loader using a
filesystem blob loader.
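A minimal sketch; the directory, glob, and worker count are placeholders:

```python
from langchain.document_loaders.concurrent import ConcurrentLoader

# Parse files from a (placeholder) directory with 4 concurrent workers.
loader = ConcurrentLoader.from_filesystem(
    "./docs", glob="**/*.txt", num_workers=4
)
docs = list(loader.lazy_load())
```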
lazy_load() → Iterator[Document][source]¶
Load documents lazily with concurrent parsing.
load() → List[Document]¶
Load all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load all documents and split them into sentences. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.concurrent.ConcurrentLoader.html |
1b9ab07552ee-0 | langchain.document_loaders.parsers.registry.get_parser¶
langchain.document_loaders.parsers.registry.get_parser(parser_name: str) → BaseBlobParser[source]¶
Get a parser by parser name. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.registry.get_parser.html |
2bf57a96f283-0 | langchain.document_loaders.telegram.TelegramChatFileLoader¶
class langchain.document_loaders.telegram.TelegramChatFileLoader(path: str)[source]¶
Bases: BaseLoader
Loads a Telegram chat JSON directory dump.
Initialize with a path.
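Illustrative usage; the path is a placeholder for a Telegram chat JSON export:

```python
from langchain.document_loaders.telegram import TelegramChatFileLoader

loader = TelegramChatFileLoader("telegram_export/result.json")  # placeholder path
docs = loader.load()
```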
Methods
__init__(path)
Initialize with a path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TelegramChatFileLoader¶
Telegram | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatFileLoader.html |
80b091511bfe-0 | langchain.document_loaders.pdf.UnstructuredPDFLoader¶
class langchain.document_loaders.pdf.UnstructuredPDFLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load PDF files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredPDFLoader
loader = UnstructuredPDFLoader("example.pdf", mode="elements", strategy="fast",
)
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-pdf
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.UnstructuredPDFLoader.html |
98ff803b8707-0 | langchain.document_loaders.gcs_directory.GCSDirectoryLoader¶
class langchain.document_loaders.gcs_directory.GCSDirectoryLoader(project_name: str, bucket: str, prefix: str = '')[source]¶
Bases: BaseLoader
Loads Documents from GCS.
Initialize with bucket and key name.
Parameters
project_name – The name of the project for the GCS bucket.
bucket – The name of the GCS bucket.
prefix – The prefix of the GCS bucket.
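A hedged sketch, assuming the google-cloud-storage package and GCP credentials are available; names are placeholders:

```python
from langchain.document_loaders.gcs_directory import GCSDirectoryLoader

loader = GCSDirectoryLoader(
    project_name="my-gcp-project",  # placeholder project
    bucket="my-bucket",             # placeholder bucket
    prefix="reports/",              # only load objects under this prefix
)
docs = loader.load()
```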
Methods
__init__(project_name, bucket[, prefix])
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GCSDirectoryLoader¶
Google Cloud Storage
Google Cloud Storage Directory | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_directory.GCSDirectoryLoader.html |
6afcb191a673-0 | langchain.document_loaders.parsers.pdf.PDFPlumberParser¶
class langchain.document_loaders.parsers.pdf.PDFPlumberParser(text_kwargs: Optional[Mapping[str, Any]] = None)[source]¶
Bases: BaseBlobParser
Parse PDFs with PDFPlumber.
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to pdfplumber.Page.extract_text()
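A minimal sketch (assumes the pdfplumber package is installed; the file path is a placeholder):

```python
from langchain.document_loaders.blob_loaders.schema import Blob
from langchain.document_loaders.parsers.pdf import PDFPlumberParser

parser = PDFPlumberParser()  # optionally pass text_kwargs, forwarded to extract_text()
docs = parser.parse(Blob.from_path("example.pdf"))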
Methods
__init__([text_kwargs])
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PDFPlumberParser.html |
6783235dd84c-0 | langchain.document_loaders.conllu.CoNLLULoader¶
class langchain.document_loaders.conllu.CoNLLULoader(file_path: str)[source]¶
Bases: BaseLoader
Load CoNLL-U files.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load from a file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from a file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using CoNLLULoader¶
CoNLL-U | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.conllu.CoNLLULoader.html |
b3de32334995-0 | langchain.document_loaders.spreedly.SpreedlyLoader¶
class langchain.document_loaders.spreedly.SpreedlyLoader(access_token: str, resource: str)[source]¶
Bases: BaseLoader
Loader that fetches data from Spreedly API.
Initialize with an access token and a resource.
Parameters
access_token – The access token.
resource – The resource.
Methods
__init__(access_token, resource)
Initialize with an access token and a resource.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SpreedlyLoader¶
Spreedly | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.spreedly.SpreedlyLoader.html |
f1cd07cdd4f7-0 | langchain.document_loaders.airtable.AirtableLoader¶
class langchain.document_loaders.airtable.AirtableLoader(api_token: str, table_id: str, base_id: str)[source]¶
Bases: BaseLoader
Loader for Airtable tables.
Initialize with API token and the IDs for table and base
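Illustrative usage (assumes the pyairtable client package is installed); the token and IDs are placeholders:

```python
from langchain.document_loaders.airtable import AirtableLoader

loader = AirtableLoader(
    api_token="your-airtable-api-token",  # placeholder
    table_id="tblXXXXXXXXXXXXXX",         # placeholder table ID
    base_id="appXXXXXXXXXXXXXX",          # placeholder base ID
)
docs = loader.load()  # one document per table record
```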
Methods
__init__(api_token, table_id, base_id)
Initialize with API token and the IDs for table and base
lazy_load()
Lazy load Documents from table.
load()
Load Documents from table.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
api_token
Airtable API token.
table_id
Airtable table ID.
base_id
Airtable base ID.
lazy_load() → Iterator[Document][source]¶
Lazy load Documents from table.
load() → List[Document][source]¶
Load Documents from table.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
api_token¶
Airtable API token.
base_id¶
Airtable base ID.
table_id¶
Airtable table ID.
Examples using AirtableLoader¶
Airtable | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airtable.AirtableLoader.html |
633a0ce067d6-0 | langchain.document_loaders.iugu.IuguLoader¶
class langchain.document_loaders.iugu.IuguLoader(resource: str, api_token: Optional[str] = None)[source]¶
Bases: BaseLoader
Loader that fetches data from IUGU.
Initialize the IUGU resource.
Parameters
resource – The name of the resource to fetch.
api_token – The IUGU API token to use.
Methods
__init__(resource[, api_token])
Initialize the IUGU resource.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using IuguLoader¶
Iugu | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.iugu.IuguLoader.html |
a5c1a256ac09-0 | langchain.document_loaders.whatsapp_chat.concatenate_rows¶
langchain.document_loaders.whatsapp_chat.concatenate_rows(date: str, sender: str, text: str) → str[source]¶
Combine message information in a readable format ready to be used. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.concatenate_rows.html |
0a9d60c84111-0 | langchain.document_loaders.html_bs.BSHTMLLoader¶
class langchain.document_loaders.html_bs.BSHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]¶
Bases: BaseLoader
Loader that uses beautiful soup to parse HTML files.
Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object.
Parameters
file_path – The path to the file to load.
open_encoding – The encoding to use when opening the file.
bs_kwargs – Any kwargs to pass to the BeautifulSoup object.
get_text_separator – The separator to use when calling get_text on the soup.
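A minimal sketch (requires beautifulsoup4; the file path is a placeholder):

```python
from langchain.document_loaders.html_bs import BSHTMLLoader

loader = BSHTMLLoader("page.html", get_text_separator=" ")  # placeholder path
docs = loader.load()
print(docs[0].metadata)  # includes the page title and source path
```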
Methods
__init__(file_path[, open_encoding, ...])
Initialise with path, and optionally, file encoding to use, and any kwargs to pass to the BeautifulSoup object.
lazy_load()
A lazy loader for Documents.
load()
Load HTML document into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load HTML document into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html_bs.BSHTMLLoader.html |
f1601092e92a-0 | langchain.document_loaders.obs_file.OBSFileLoader¶
class langchain.document_loaders.obs_file.OBSFileLoader(bucket: str, key: str, client: Any = None, endpoint: str = '', config: Optional[dict] = None)[source]¶
Bases: BaseLoader
Loader for Huawei OBS file.
Initialize the OBSFileLoader with the specified settings.
Parameters
bucket (str) – The name of the OBS bucket to be used.
key (str) – The name of the object in the OBS bucket.
client (ObsClient, optional) – An instance of the ObsClient to connect to OBS.
endpoint (str, optional) – The endpoint URL of your OBS bucket. This parameter is mandatory if client is not provided.
config (dict, optional) – The parameters for connecting to OBS, provided as a dictionary. This parameter is ignored if client is provided. The dictionary could have the following keys:
- “ak” (str, optional): Your OBS access key (required if get_token_from_ecs is False and bucket policy is not public read).
- “sk” (str, optional): Your OBS secret key (required if get_token_from_ecs is False and bucket policy is not public read).
- “token” (str, optional): Your security token (required if using temporary credentials).
- “get_token_from_ecs” (bool, optional): Whether to retrieve the security token from ECS. Defaults to False if not provided. If set to True, ak, sk, and token will be ignored.
Raises
ValueError – If the esdk-obs-python package is not installed.
TypeError – If the provided client is not an instance of ObsClient.
ValueError – If client is not provided, but endpoint is missing.
Note | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_file.OBSFileLoader.html |
f1601092e92a-1 | ValueError – If client is not provided, but endpoint is missing.
Note
Before using this class, make sure you have registered with OBS and have the necessary credentials. The ak, sk, and endpoint values are mandatory unless get_token_from_ecs is True or the bucket policy is public read. token is required when using temporary credentials.
Example
To create a new OBSFileLoader with a new client:
```
config = {
    "ak": "your-access-key",
    "sk": "your-secret-key"
}
obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", config=config)
```
To create a new OBSFileLoader with an existing client:
```
from obs import ObsClient
# Assuming you have an existing ObsClient object 'obs_client'
obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", client=obs_client)
```
To create a new OBSFileLoader without an existing client:
`
obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", endpoint="your-endpoint-url")
`
Methods
__init__(bucket, key[, client, endpoint, config])
Initialize the OBSFileLoader with the specified settings.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_file.OBSFileLoader.html |
f1601092e92a-2 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_file.OBSFileLoader.html |
80b129e1bfb5-0 | langchain.document_loaders.web_base.WebBaseLoader¶
class langchain.document_loaders.web_base.WebBaseLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None)[source]¶
Bases: BaseLoader
Loader that uses urllib and beautiful soup to load webpages.
Initialize with webpage path.
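Illustrative usage; the URL is a placeholder:

```python
from langchain.document_loaders.web_base import WebBaseLoader

loader = WebBaseLoader("https://example.com")  # a list of URLs also works
docs = loader.load()
# For many pages, aload() fetches the URLs asynchronously, throttled by
# the requests_per_second attribute.
```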
Methods
__init__(web_path[, header_template, ...])
Initialize with webpage path.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load text from the url(s) in web_path.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
web_paths
aload() → List[Document][source]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any[source]¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document][source]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load text from the url(s) in web_path. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.web_base.WebBaseLoader.html |
80b129e1bfb5-1 | Load text from the url(s) in web_path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any[source]¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any][source]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beautifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
Examples using WebBaseLoader¶
Vectorstore Agent
WebBaseLoader
MergeDocLoader
QA over Documents
Running LLMs locally
Use local LLMs
MultiQueryRetriever
Combine agents and vector stores | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.web_base.WebBaseLoader.html |
5d8efba16a32-0 | langchain.document_loaders.parsers.language.language_parser.LanguageParser¶
class langchain.document_loaders.parsers.language.language_parser.LanguageParser(language: Optional[Language] = None, parser_threshold: int = 0)[source]¶
Bases: BaseBlobParser
Language parser that split code using the respective language syntax.
Each top-level function and class in the code is loaded into separate documents.
Furthermore, an extra document is generated, containing the remaining top-level code
that excludes the already segmented functions and classes.
This approach can potentially improve the accuracy of QA models over source code.
Currently, the supported languages for code parsing are Python and JavaScript.
The language used for parsing can be configured, along with the minimum number of
lines required to activate the splitting based on syntax.
Examples
from langchain.text_splitter import Language
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import LanguageParser
loader = GenericLoader.from_filesystem(
"./code",
glob="**/*",
suffixes=[".py", ".js"],
parser=LanguageParser()
)
docs = loader.load()
Example instantiations to manually select the language:
… code-block:: python
from langchain.text_splitter import Language
loader = GenericLoader.from_filesystem("./code",
glob="**/*",
suffixes=[".py"],
parser=LanguageParser(language=Language.PYTHON)
)
Example instantiations to set number of lines threshold:
… code-block:: python
loader = GenericLoader.from_filesystem("./code",
glob="**/*",
suffixes=[".py"],
parser=LanguageParser(parser_threshold=200)
)
Language parser that split code using the respective language syntax.
Parameters
language – If None (default), it will try to infer language from source. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html |
5d8efba16a32-1 | Parameters
language – If None (default), it will try to infer language from source.
parser_threshold – Minimum lines needed to activate parsing (0 by default).
Methods
__init__([language, parser_threshold])
Language parser that split code using the respective language syntax.
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
Examples using LanguageParser¶
Source Code | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html |
9302d4edfc9d-0 | langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter¶
class langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter(code: str)[source]¶
Bases: CodeSegmenter
The code segmenter for JavaScript.
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
simplify_code() → str[source]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter.html |
5668e97e88fc-0 | langchain.document_loaders.dropbox.DropboxLoader¶
class langchain.document_loaders.dropbox.DropboxLoader(*, dropbox_access_token: str, dropbox_folder_path: Optional[str] = None, dropbox_file_paths: Optional[List[str]] = None, recursive: bool = False)[source]¶
Bases: BaseLoader, BaseModel
Loads files from Dropbox.
In addition to common files such as text and PDF files, it also supports
Dropbox Paper files.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param dropbox_access_token: str [Required]¶
Dropbox access token.
param dropbox_file_paths: Optional[List[str]] = None¶
The file paths to load from.
param dropbox_folder_path: Optional[str] = None¶
The folder path to load from.
param recursive: bool = False¶
Flag to indicate whether to load files recursively from subfolders.
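A hedged sketch; the access token is a placeholder, and exactly one of dropbox_folder_path or dropbox_file_paths should be set:

```python
from langchain.document_loaders.dropbox import DropboxLoader

loader = DropboxLoader(
    dropbox_access_token="your-dropbox-access-token",  # placeholder token
    dropbox_folder_path="",   # "" is commonly used for the root folder
    recursive=False,
)
docs = loader.load()
```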
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
validator validate_inputs » all fields[source]¶
Validate that either folder_path or file_paths is set, but not both.
Examples using DropboxLoader¶
Dropbox | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dropbox.DropboxLoader.html |
fb7f6cab016d-0 | langchain.document_loaders.s3_file.S3FileLoader¶
class langchain.document_loaders.s3_file.S3FileLoader(bucket: str, key: str)[source]¶
Bases: BaseLoader
Loading logic for loading documents from an AWS S3 file.
Initialize with bucket and key name.
Parameters
bucket – The name of the S3 bucket.
key – The key of the S3 object.
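Illustrative usage, assuming boto3 and AWS credentials are configured; the bucket and key are placeholders:

```python
from langchain.document_loaders.s3_file import S3FileLoader

loader = S3FileLoader(bucket="my-bucket", key="docs/report.pdf")  # placeholders
docs = loader.load()
```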
Methods
__init__(bucket, key)
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using S3FileLoader¶
AWS S3 Directory
AWS S3 File | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_file.S3FileLoader.html |
5f7afd3428e0-0 | langchain.document_loaders.azlyrics.AZLyricsLoader¶
class langchain.document_loaders.azlyrics.AZLyricsLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None)[source]¶
Bases: WebBaseLoader
Loads AZLyrics webpages.
Initialize with webpage path.
Methods
__init__(web_path[, header_template, ...])
Initialize with webpage path.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load webpages into Documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpages into Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azlyrics.AZLyricsLoader.html |
5f7afd3428e0-1 | load() → List[Document][source]¶
Load webpages into Documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beautifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
Examples using AZLyricsLoader¶
AZLyrics | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azlyrics.AZLyricsLoader.html |
9151f7ce3219-0 | langchain.document_loaders.stripe.StripeLoader¶
class langchain.document_loaders.stripe.StripeLoader(resource: str, access_token: Optional[str] = None)[source]¶
Bases: BaseLoader
Loader that fetches data from Stripe.
Initialize with a resource and an access token.
Parameters
resource – The resource.
access_token – The access token.
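A hedged sketch; "charges" is one example resource name and the token is a placeholder:

```python
from langchain.document_loaders.stripe import StripeLoader

loader = StripeLoader("charges", access_token="your-stripe-secret-key")
docs = loader.load()
```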
Methods
__init__(resource[, access_token])
Initialize with a resource and an access token.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using StripeLoader¶
Stripe | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.stripe.StripeLoader.html |
a88ef1fe8827-0 | langchain.document_loaders.obs_directory.OBSDirectoryLoader¶
class langchain.document_loaders.obs_directory.OBSDirectoryLoader(bucket: str, endpoint: str, config: Optional[dict] = None, prefix: str = '')[source]¶
Bases: BaseLoader
Loading logic for loading documents from Huawei OBS.
Initialize the OBSDirectoryLoader with the specified settings.
Parameters
bucket (str) – The name of the OBS bucket to be used.
endpoint (str) – The endpoint URL of your OBS bucket.
config (dict) – The parameters for connecting to OBS, provided as a dictionary. The dictionary could have the following keys:
- “ak” (str, optional): Your OBS access key (required if get_token_from_ecs is False and bucket policy is not public read).
- “sk” (str, optional): Your OBS secret key (required if get_token_from_ecs is False and bucket policy is not public read).
- “token” (str, optional): Your security token (required if using temporary credentials).
- “get_token_from_ecs” (bool, optional): Whether to retrieve the security token from ECS. Defaults to False if not provided. If set to True, ak, sk, and token will be ignored.
prefix (str, optional) – The prefix to be added to the OBS key. Defaults to “”.
Note
Before using this class, make sure you have registered with OBS and have the necessary credentials. The ak, sk, and endpoint values are mandatory unless get_token_from_ecs is True or the bucket policy is public read. token is required when using temporary credentials.
Example
To create a new OBSDirectoryLoader:
```
config = {
    "ak": "your-access-key",
    "sk": "your-secret-key"
}
a88ef1fe8827-1 | directory_loader = OBSDirectoryLoader("your-bucket-name", "your-endpoint-url", config, "your-prefix")
Methods
__init__(bucket, endpoint[, config, prefix])
Initialize the OBSDirectoryLoader with the specified settings.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_directory.OBSDirectoryLoader.html |
322b6fa96170-0 | langchain.document_loaders.blob_loaders.schema.Blob¶
class langchain.document_loaders.blob_loaders.schema.Blob(*, data: Optional[Union[bytes, str]] = None, mimetype: Optional[str] = None, encoding: str = 'utf-8', path: Optional[Union[str, PurePath]] = None)[source]¶
Bases: BaseModel
A blob is used to represent raw data by either reference or value.
Provides an interface to materialize the blob in different representations, and
help to decouple the development of data loaders from the downstream parsing of
the raw data.
Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param data: Optional[Union[bytes, str]] = None¶
param encoding: str = 'utf-8'¶
param mimetype: Optional[str] = None¶
param path: Optional[Union[str, pathlib.PurePath]] = None¶
as_bytes() → bytes[source]¶
Read data as bytes.
as_bytes_io() → Generator[Union[BytesIO, BufferedReader], None, None][source]¶
Read data as a byte stream.
as_string() → str[source]¶
Read data as a string.
validator check_blob_is_valid » all fields[source]¶
Verify that either data or path is provided.
classmethod from_data(data: Union[str, bytes], *, encoding: str = 'utf-8', mime_type: Optional[str] = None, path: Optional[str] = None) → Blob[source]¶
Initialize the blob from in-memory data.
Parameters
data – the in-memory data associated with the blob
encoding – Encoding to use if decoding the bytes into a string | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html |
322b6fa96170-1 | encoding – Encoding to use if decoding the bytes into a string
mime_type – if provided, will be set as the mime-type of the data
path – if provided, will be set as the source from which the data came
Returns
Blob instance
classmethod from_path(path: Union[str, PurePath], *, encoding: str = 'utf-8', mime_type: Optional[str] = None, guess_type: bool = True) → Blob[source]¶
Load the blob from a path like object.
Parameters
path – path like object to file to be read
encoding – Encoding to use if decoding the bytes into a string
mime_type – if provided, will be set as the mime-type of the data
guess_type – If True, the mimetype will be guessed from the file extension,
if a mime-type was not provided
Returns
Blob instance
property source: Optional[str]¶
The source location of the blob as string if known otherwise none.
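A minimal sketch of the two constructors and the materialization helpers; the content and path are placeholders:

```python
from langchain.document_loaders.blob_loaders.schema import Blob

in_memory = Blob.from_data("hello world", mime_type="text/plain")
print(in_memory.as_string())           # decode using the blob's encoding

on_disk = Blob.from_path("notes.txt")  # placeholder path; mimetype guessed from extension
raw = on_disk.as_bytes()
```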
model Config[source]¶
Bases: object
arbitrary_types_allowed = True¶
frozen = True¶
Examples using Blob¶
Embaas | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html |
f45e60ca83ee-0 | langchain.document_loaders.generic.GenericLoader¶
class langchain.document_loaders.generic.GenericLoader(blob_loader: BlobLoader, blob_parser: BaseBlobParser)[source]¶
Bases: BaseLoader
A generic document loader.
A generic document loader that allows combining an arbitrary blob loader with
a blob parser.
Examples
from langchain.document_loaders import GenericLoader
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = GenericLoader.from_filesystem(path="path/to/directory",
glob="**/[!.]*",
suffixes=[".pdf"],
show_progress=True,
)
docs = loader.lazy_load()
next(docs)
Example instantiations to change which files are loaded:
… code-block:: python
# Recursively load all text files in a directory.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="**/*.txt")
# Recursively load all non-hidden files in a directory.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="**/[!.]*")
# Load all files in a directory without recursion.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="*")
Example instantiations to change which parser is used:
… code-block:: python
from langchain.document_loaders.parsers.pdf import PyPDFParser
# Recursively load all text files in a directory.
loader = GenericLoader.from_filesystem(
"/path/to/dir",
glob="**/*.pdf",
parser=PyPDFParser()
)
A generic document loader.
Parameters
blob_loader – A blob loader which knows how to yield blobs
blob_parser – A blob parser which knows how to parse blobs into documents
Methods
__init__(blob_loader, blob_parser)
A generic document loader. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.generic.GenericLoader.html |
f45e60ca83ee-1 | Methods
__init__(blob_loader, blob_parser)
A generic document loader.
from_filesystem(path, *[, glob, suffixes, ...])
Create a generic document loader using a filesystem blob loader.
lazy_load()
Load documents lazily.
load()
Load all documents.
load_and_split([text_splitter])
Load all documents and split them into sentences.
classmethod from_filesystem(path: Union[str, Path], *, glob: str = '**/[!.]*', suffixes: Optional[Sequence[str]] = None, show_progress: bool = False, parser: Union[Literal['default'], BaseBlobParser] = 'default') → GenericLoader[source]¶
Create a generic document loader using a filesystem blob loader.
Parameters
path – The path to the directory to load documents from.
glob – The glob pattern to use to find documents.
suffixes – The suffixes to use to filter documents. If None, all files
matching the glob will be loaded.
show_progress – Whether to show a progress bar or not (requires tqdm).
Proxies to the file system loader.
parser – A blob parser which knows how to parse blobs into documents
Returns
A generic document loader.
lazy_load() → Iterator[Document][source]¶
Load documents lazily. Use this when working at a large scale.
load() → List[Document][source]¶
Load all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load all documents and split them into sentences.
Examples using GenericLoader¶
Grobid
Loading documents from a YouTube url
Source Code | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.generic.GenericLoader.html |
7eef2e24499e-0 | langchain.document_loaders.joplin.JoplinLoader¶
class langchain.document_loaders.joplin.JoplinLoader(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost')[source]¶
Bases: BaseLoader
Loader that fetches notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for “Web Clipper” in the app settings).
To get the access token, you need to go to the Web Clipper options and
under “Advanced Options” you will find the access token.
You can find more information about the Web Clipper service here:
https://joplinapp.org/clipper/
Parameters
access_token – The access token to use.
port – The port where the Web Clipper service is running. Default is 41184.
host – The host where the Web Clipper service is running.
Default is localhost.
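A minimal sketch, assuming Joplin is running locally with the Web Clipper service enabled; the token is a placeholder:

```python
from langchain.document_loaders.joplin import JoplinLoader

loader = JoplinLoader(
    access_token="your-joplin-access-token",  # placeholder
    port=41184,
    host="localhost",
)
docs = loader.load()  # one document per note
```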
Methods
__init__([access_token, port, host])
param access_token
The access token to use.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using JoplinLoader¶
Joplin | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.joplin.JoplinLoader.html |
83e94832b6fd-0 | langchain.document_loaders.readthedocs.ReadTheDocsLoader¶
class langchain.document_loaders.readthedocs.ReadTheDocsLoader(path: Union[str, Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, **kwargs: Optional[Any])[source]¶
Bases: BaseLoader
Loads ReadTheDocs documentation directory dump.
Initialize ReadTheDocsLoader
The loader loops over all files under path and extracts the actual content of
the files by retrieving main html tags. Default main html tags include
<main id="main-content">, <div role="main">, and <article role="main">. You
can also define your own html tags by passing custom_html_tag, e.g.
("div", "class=main"). The loader iterates html tags with the order of
custom html tags (if exists) and default html tags. If any of the tags is not
empty, the loop will break and retrieve the content out of that tag.
Parameters
path – The location of pulled readthedocs folder.
encoding – The encoding with which to open the documents.
errors – Specify how encoding and decoding errors are to be handled—this
cannot be used in binary mode.
custom_html_tag – Optional custom html tag to retrieve the content from
files.
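For illustration, a minimal usage sketch, assuming the ReadTheDocs pages were previously downloaded into a local folder (the path is a placeholder):
from langchain.document_loaders.readthedocs import ReadTheDocsLoader
loader = ReadTheDocsLoader("rtdocs/", encoding="utf-8")
docs = loader.load()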
Methods
__init__(path[, encoding, errors, ...])
Initialize ReadTheDocsLoader
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html |
83e94832b6fd-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ReadTheDocsLoader¶
ReadTheDocs Documentation | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html |
172e44db0f00-0 | langchain.document_loaders.python.PythonLoader¶
class langchain.document_loaders.python.PythonLoader(file_path: str)[source]¶
Bases: TextLoader
Load Python files, respecting any non-default encoding if specified.
Initialize with a file path.
Parameters
file_path – The path to the file to load.
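For illustration, a minimal usage sketch (the file name is a placeholder):
from langchain.document_loaders.python import PythonLoader
loader = PythonLoader("example_module.py")
docs = loader.load()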
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load from file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load from file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.python.PythonLoader.html |
01331b6cb241-0 | langchain.document_loaders.odt.UnstructuredODTLoader¶
class langchain.document_loaders.odt.UnstructuredODTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load OpenOffice ODT files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredODTLoader
loader = UnstructuredODTLoader("example.odt", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-odt
Parameters
file_path – The path to the file to load.
mode – The mode to use when loading the file. Can be one of “single”,
“multi”, or “all”. Default is “single”.
**unstructured_kwargs – Any kwargs to pass to the unstructured.
Methods
__init__(file_path[, mode])
param file_path
The path to the file to load.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.odt.UnstructuredODTLoader.html |
01331b6cb241-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredODTLoader¶
Open Document Format (ODT) | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.odt.UnstructuredODTLoader.html |
fbb08f6bb3a1-0 | langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader¶
class langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader(path: Union[str, Path], *, glob: str = '**/[!.]*', suffixes: Optional[Sequence[str]] = None, show_progress: bool = False)[source]¶
Bases: BlobLoader
Blob loader for the local file system.
Example:
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = FileSystemBlobLoader("/path/to/directory")
for blob in loader.yield_blobs():
print(blob)
Initialize with path to directory and how to glob over it.
Parameters
path – Path to directory to load from
glob – Glob pattern relative to the specified path
by default set to pick up all non-hidden files
suffixes – Provide to keep only files with these suffixes
Useful when wanting to keep files with different suffixes
Suffixes must include the dot, e.g. “.txt”
show_progress – If true, will show a progress bar as the files are loaded.
This forces an iteration through all matching files
to count them prior to loading them.
Examples:
# Recursively load all text files in a directory.
loader = FileSystemBlobLoader("/path/to/directory", glob="**/*.txt")
# Recursively load all non-hidden files in a directory.
loader = FileSystemBlobLoader("/path/to/directory", glob="**/[!.]*")
# Load all files in a directory without recursion.
loader = FileSystemBlobLoader("/path/to/directory", glob="*")
Methods
__init__(path, *[, glob, suffixes, ...])
Initialize with path to directory and how to glob over it.
count_matching_files() | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader.html |
Count files that match the pattern without loading them.
yield_blobs()
Yield blobs that match the requested pattern.
count_matching_files() → int[source]¶
Count files that match the pattern without loading them.
yield_blobs() → Iterable[Blob][source]¶
Yield blobs that match the requested pattern. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader.html |
747097d741d1-0 | langchain.document_loaders.wikipedia.WikipediaLoader¶
class langchain.document_loaders.wikipedia.WikipediaLoader(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False, doc_content_chars_max: Optional[int] = 4000)[source]¶
Bases: BaseLoader
Loads a query result from www.wikipedia.org into a list of Documents.
The hard limit on the number of downloaded Documents is 300 for now.
Each wiki page represents one Document.
Initializes a new instance of the WikipediaLoader class.
Parameters
query (str) – The query string to search on Wikipedia.
lang (str, optional) – The language code for the Wikipedia language edition.
Defaults to “en”.
load_max_docs (int, optional) – The maximum number of documents to load.
Defaults to 100.
load_all_available_meta (bool, optional) – Indicates whether to load all
available metadata for each document. Defaults to False.
doc_content_chars_max (int, optional) – The maximum number of characters
for the document content. Defaults to 4000.
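For illustration, a minimal usage sketch (the query and limit are arbitrary):
from langchain.document_loaders.wikipedia import WikipediaLoader
loader = WikipediaLoader(query="LangChain", lang="en", load_max_docs=2)
docs = loader.load()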
Methods
__init__(query[, lang, load_max_docs, ...])
Initializes a new instance of the WikipediaLoader class.
lazy_load()
A lazy loader for Documents.
load()
Loads the query result from Wikipedia into a list of Documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Loads the query result from Wikipedia into a list of Documents.
Returns
A list of Document objects representing the loaded Wikipedia pages.
Return type
List[Document] | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.wikipedia.WikipediaLoader.html |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using WikipediaLoader¶
Wikipedia | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.wikipedia.WikipediaLoader.html |
8eb50fcd1dac-0 | langchain.document_loaders.facebook_chat.concatenate_rows¶
langchain.document_loaders.facebook_chat.concatenate_rows(row: dict) → str[source]¶
Combine message information in a readable format ready to be used.
Parameters
row – dictionary containing message information. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.facebook_chat.concatenate_rows.html |
49d8cc9116e6-0 | langchain.document_loaders.email.UnstructuredEmailLoader¶
class langchain.document_loaders.email.UnstructuredEmailLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load email files. Works with both
.eml and .msg files. You can process attachments in addition to the
e-mail message itself by passing process_attachments=True into the
constructor for the loader. By default, attachments will be processed
with the unstructured partition function. If you already know the document
types of the attachments, you can specify another partitioning function
with the attachment partitioner kwarg.
Example
from langchain.document_loaders import UnstructuredEmailLoader
loader = UnstructuredEmailLoader("example_data/fake-email.eml", mode="elements")
loader.load()
Example
from langchain.document_loaders import UnstructuredEmailLoader
loader = UnstructuredEmailLoader(
    "example_data/fake-email-attachment.eml",
    mode="elements",
    process_attachments=True,
)
loader.load()
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredEmailLoader¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.UnstructuredEmailLoader.html |
Email | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.UnstructuredEmailLoader.html |
d2b27009107d-0 | langchain.document_loaders.telegram.concatenate_rows¶
langchain.document_loaders.telegram.concatenate_rows(row: dict) → str[source]¶
Combine message information in a readable format ready to be used. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.concatenate_rows.html |
46ed66a31890-0 | langchain.document_loaders.email.OutlookMessageLoader¶
class langchain.document_loaders.email.OutlookMessageLoader(file_path: str)[source]¶
Bases: BaseLoader
Loads Outlook Message files using extract_msg.
https://github.com/TeamMsgExtractor/msg-extractor
Initialize with a file path.
Parameters
file_path – The path to the Outlook Message file.
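For illustration, a minimal usage sketch (the .msg path is a placeholder; the extract_msg package must be installed):
from langchain.document_loaders.email import OutlookMessageLoader
loader = OutlookMessageLoader("example_data/message.msg")
docs = loader.load()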
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using OutlookMessageLoader¶
Email | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.OutlookMessageLoader.html |
85f73d0e2f13-0 | langchain.document_loaders.tsv.UnstructuredTSVLoader¶
class langchain.document_loaders.tsv.UnstructuredTSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load TSV files. Like other
Unstructured loaders, UnstructuredTSVLoader can be used in both
"single" and "elements" mode. If you use the loader in "elements"
mode, the TSV file will be a single Unstructured Table element,
and an HTML representation of the table will be available in the
"text_as_html" key in the document metadata.
Examples
from langchain.document_loaders.tsv import UnstructuredTSVLoader
loader = UnstructuredTSVLoader("stanley-cups.tsv", mode="elements")
docs = loader.load()
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredTSVLoader¶
TSV | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tsv.UnstructuredTSVLoader.html |
bc5c17b12f4f-0 | langchain.document_loaders.duckdb_loader.DuckDBLoader¶
class langchain.document_loaders.duckdb_loader.DuckDBLoader(query: str, database: str = ':memory:', read_only: bool = False, config: Optional[Dict[str, str]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]¶
Bases: BaseLoader
Loads a query result from DuckDB into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Parameters
query – The query to execute.
database – The database to connect to. Defaults to “:memory:”.
read_only – Whether to open the database in read-only mode.
Defaults to False.
config – A dictionary of configuration options to pass to the database.
Optional.
page_content_columns – The columns to write into the page_content
of the document. Optional.
metadata_columns – The columns to write into the metadata of the document.
Optional.
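For illustration, a minimal usage sketch (the query and column names are placeholders):
from langchain.document_loaders.duckdb_loader import DuckDBLoader
loader = DuckDBLoader(
    "SELECT title, author, year FROM read_csv_auto('books.csv')",
    page_content_columns=["title"],
    metadata_columns=["author", "year"],
)
docs = loader.load()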
Methods
__init__(query[, database, read_only, ...])
param query
The query to execute.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.duckdb_loader.DuckDBLoader.html |
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using DuckDBLoader¶
DuckDB | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.duckdb_loader.DuckDBLoader.html |
6f99e7428987-0 | langchain.document_loaders.tomarkdown.ToMarkdownLoader¶
class langchain.document_loaders.tomarkdown.ToMarkdownLoader(url: str, api_key: str)[source]¶
Bases: BaseLoader
Loads HTML to markdown using 2markdown.
Initialize with url and api key.
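For illustration, a minimal usage sketch (the URL and API key are placeholders):
from langchain.document_loaders.tomarkdown import ToMarkdownLoader
loader = ToMarkdownLoader(url="https://python.langchain.com", api_key="<2markdown-api-key>")
docs = loader.load()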
Methods
__init__(url, api_key)
Initialize with url and api key.
lazy_load()
Lazily load the file.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazily load the file.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ToMarkdownLoader¶
2Markdown | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tomarkdown.ToMarkdownLoader.html |
4322b8addc2e-0 | langchain.document_loaders.onedrive_file.OneDriveFileLoader¶
class langchain.document_loaders.onedrive_file.OneDriveFileLoader(*, file: File)[source]¶
Bases: BaseLoader, BaseModel
Loads a file from OneDrive.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param file: File [Required]¶
The file to load.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load Documents
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
model Config[source]¶
Bases: object
arbitrary_types_allowed = True¶
Allow arbitrary types. This is needed for the File type. Default is True.
See https://pydantic-docs.helpmanual.io/usage/types/#arbitrary-types-allowed | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive_file.OneDriveFileLoader.html |
c748f5504ca0-0 | langchain.document_loaders.mhtml.MHTMLLoader¶
class langchain.document_loaders.mhtml.MHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]¶
Bases: BaseLoader
Loader that uses beautiful soup to parse HTML files.
Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object.
Parameters
file_path – Path to file to load.
open_encoding – The encoding to use when opening the file.
bs_kwargs – Any kwargs to pass to the BeautifulSoup object.
get_text_separator – The separator to use when getting the text
from the soup.
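For illustration, a minimal usage sketch (the file path is a placeholder):
from langchain.document_loaders.mhtml import MHTMLLoader
loader = MHTMLLoader("example.mht", get_text_separator=" ")
docs = loader.load()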
Methods
__init__(file_path[, open_encoding, ...])
Initialise with path, and optionally, file encoding to use, and any kwargs to pass to the BeautifulSoup object.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MHTMLLoader¶
mhtml | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mhtml.MHTMLLoader.html |
2568fe589987-0 | langchain.document_loaders.confluence.ContentFormat¶
class langchain.document_loaders.confluence.ContentFormat(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases: str, Enum
Enumerator of the content formats of Confluence page.
Methods
get_content(page)
__init__(*args, **kwds)
capitalize()
Return a capitalized version of the string.
casefold()
Return a version of the string suitable for caseless comparisons.
center(width[, fillchar])
Return a centered string of length width.
count(sub[, start[, end]])
Return the number of non-overlapping occurrences of substring sub in string S[start:end].
encode([encoding, errors])
Encode the string using the codec registered for encoding.
endswith(suffix[, start[, end]])
Return True if S ends with the specified suffix, False otherwise.
expandtabs([tabsize])
Return a copy where all tab characters are expanded using spaces.
find(sub[, start[, end]])
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end].
format(*args, **kwargs)
Return a formatted version of S, using substitutions from args and kwargs.
format_map(mapping)
Return a formatted version of S, using substitutions from mapping.
index(sub[, start[, end]])
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end].
isalnum()
Return True if the string is an alpha-numeric string, False otherwise.
isalpha()
Return True if the string is an alphabetic string, False otherwise.
isascii()
Return True if all characters in the string are ASCII, False otherwise.
isdecimal() | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html |
Return True if the string is a decimal string, False otherwise.
isdigit()
Return True if the string is a digit string, False otherwise.
isidentifier()
Return True if the string is a valid Python identifier, False otherwise.
islower()
Return True if the string is a lowercase string, False otherwise.
isnumeric()
Return True if the string is a numeric string, False otherwise.
isprintable()
Return True if the string is printable, False otherwise.
isspace()
Return True if the string is a whitespace string, False otherwise.
istitle()
Return True if the string is a title-cased string, False otherwise.
isupper()
Return True if the string is an uppercase string, False otherwise.
join(iterable, /)
Concatenate any number of strings.
ljust(width[, fillchar])
Return a left-justified string of length width.
lower()
Return a copy of the string converted to lowercase.
lstrip([chars])
Return a copy of the string with leading whitespace removed.
maketrans
Return a translation table usable for str.translate().
partition(sep, /)
Partition the string into three parts using the given separator.
removeprefix(prefix, /)
Return a str with the given prefix string removed if present.
removesuffix(suffix, /)
Return a str with the given suffix string removed if present.
replace(old, new[, count])
Return a copy with all occurrences of substring old replaced by new.
rfind(sub[, start[, end]])
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end].
rindex(sub[, start[, end]]) | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html |
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end].
rjust(width[, fillchar])
Return a right-justified string of length width.
rpartition(sep, /)
Partition the string into three parts using the given separator.
rsplit([sep, maxsplit])
Return a list of the substrings in the string, using sep as the separator string.
rstrip([chars])
Return a copy of the string with trailing whitespace removed.
split([sep, maxsplit])
Return a list of the substrings in the string, using sep as the separator string.
splitlines([keepends])
Return a list of the lines in the string, breaking at line boundaries.
startswith(prefix[, start[, end]])
Return True if S starts with the specified prefix, False otherwise.
strip([chars])
Return a copy of the string with leading and trailing whitespace removed.
swapcase()
Convert uppercase characters to lowercase and lowercase characters to uppercase.
title()
Return a version of the string where each word is titlecased.
translate(table, /)
Replace each character in the string using the given translation table.
upper()
Return a copy of the string converted to uppercase.
zfill(width, /)
Pad a numeric string with zeros on the left, to fill a field of the given width.
Attributes
STORAGE
VIEW
capitalize()¶
Return a capitalized version of the string.
More specifically, make the first character have upper case and the rest lower
case.
casefold()¶
Return a version of the string suitable for caseless comparisons.
center(width, fillchar=' ', /)¶
Return a centered string of length width.
Padding is done using the specified fill character (default is a space). | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html |
count(sub[, start[, end]]) → int¶
Return the number of non-overlapping occurrences of substring sub in
string S[start:end]. Optional arguments start and end are
interpreted as in slice notation.
encode(encoding='utf-8', errors='strict')¶
Encode the string using the codec registered for encoding.
encoding – The encoding in which to encode the string.
errors – The error handling scheme to use for encoding errors.
The default is ‘strict’ meaning that encoding errors raise a
UnicodeEncodeError. Other possible values are ‘ignore’, ‘replace’ and
‘xmlcharrefreplace’ as well as any other name registered with
codecs.register_error that can handle UnicodeEncodeErrors.
endswith(suffix[, start[, end]]) → bool¶
Return True if S ends with the specified suffix, False otherwise.
With optional start, test S beginning at that position.
With optional end, stop comparing S at that position.
suffix can also be a tuple of strings to try.
expandtabs(tabsize=8)¶
Return a copy where all tab characters are expanded using spaces.
If tabsize is not given, a tab size of 8 characters is assumed.
find(sub[, start[, end]]) → int¶
Return the lowest index in S where substring sub is found,
such that sub is contained within S[start:end]. Optional
arguments start and end are interpreted as in slice notation.
Return -1 on failure.
format(*args, **kwargs) → str¶
Return a formatted version of S, using substitutions from args and kwargs.
The substitutions are identified by braces (‘{’ and ‘}’).
format_map(mapping) → str¶
Return a formatted version of S, using substitutions from mapping.
The substitutions are identified by braces (‘{’ and ‘}’). | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html |
get_content(page: dict) → str[source]¶
index(sub[, start[, end]]) → int¶
Return the lowest index in S where substring sub is found,
such that sub is contained within S[start:end]. Optional
arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
isalnum()¶
Return True if the string is an alpha-numeric string, False otherwise.
A string is alpha-numeric if all characters in the string are alpha-numeric and
there is at least one character in the string.
isalpha()¶
Return True if the string is an alphabetic string, False otherwise.
A string is alphabetic if all characters in the string are alphabetic and there
is at least one character in the string.
isascii()¶
Return True if all characters in the string are ASCII, False otherwise.
ASCII characters have code points in the range U+0000-U+007F.
Empty string is ASCII too.
isdecimal()¶
Return True if the string is a decimal string, False otherwise.
A string is a decimal string if all characters in the string are decimal and
there is at least one character in the string.
isdigit()¶
Return True if the string is a digit string, False otherwise.
A string is a digit string if all characters in the string are digits and there
is at least one character in the string.
isidentifier()¶
Return True if the string is a valid Python identifier, False otherwise.
Call keyword.iskeyword(s) to test whether string s is a reserved identifier,
such as “def” or “class”.
islower()¶
Return True if the string is a lowercase string, False otherwise. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html |
A string is lowercase if all cased characters in the string are lowercase and
there is at least one cased character in the string.
isnumeric()¶
Return True if the string is a numeric string, False otherwise.
A string is numeric if all characters in the string are numeric and there is at
least one character in the string.
isprintable()¶
Return True if the string is printable, False otherwise.
A string is printable if all of its characters are considered printable in
repr() or if it is empty.
isspace()¶
Return True if the string is a whitespace string, False otherwise.
A string is whitespace if all characters in the string are whitespace and there
is at least one character in the string.
istitle()¶
Return True if the string is a title-cased string, False otherwise.
In a title-cased string, upper- and title-case characters may only
follow uncased characters and lowercase characters only cased ones.
isupper()¶
Return True if the string is an uppercase string, False otherwise.
A string is uppercase if all cased characters in the string are uppercase and
there is at least one cased character in the string.
join(iterable, /)¶
Concatenate any number of strings.
The string whose method is called is inserted in between each given string.
The result is returned as a new string.
Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs'
ljust(width, fillchar=' ', /)¶
Return a left-justified string of length width.
Padding is done using the specified fill character (default is a space).
lower()¶
Return a copy of the string converted to lowercase. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html |
lstrip(chars=None, /)¶
Return a copy of the string with leading whitespace removed.
If chars is given and not None, remove characters in chars instead.
static maketrans()¶
Return a translation table usable for str.translate().
If there is only one argument, it must be a dictionary mapping Unicode
ordinals (integers) or characters to Unicode ordinals, strings or None.
Character keys will be then converted to ordinals.
If there are two arguments, they must be strings of equal length, and
in the resulting dictionary, each character in x will be mapped to the
character at the same position in y. If there is a third argument, it
must be a string, whose characters will be mapped to None in the result.
partition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string. If the separator is found,
returns a 3-tuple containing the part before the separator, the separator
itself, and the part after it.
If the separator is not found, returns a 3-tuple containing the original string
and two empty strings.
removeprefix(prefix, /)¶
Return a str with the given prefix string removed if present.
If the string starts with the prefix string, return string[len(prefix):].
Otherwise, return a copy of the original string.
removesuffix(suffix, /)¶
Return a str with the given suffix string removed if present.
If the string ends with the suffix string and that suffix is not empty,
return string[:-len(suffix)]. Otherwise, return a copy of the original
string.
replace(old, new, count=-1, /)¶
Return a copy with all occurrences of substring old replaced by new. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html |
count – Maximum number of occurrences to replace.
-1 (the default value) means replace all occurrences.
If the optional argument count is given, only the first count occurrences are
replaced.
rfind(sub[, start[, end]]) → int¶
Return the highest index in S where substring sub is found,
such that sub is contained within S[start:end]. Optional
arguments start and end are interpreted as in slice notation.
Return -1 on failure.
rindex(sub[, start[, end]]) → int¶
Return the highest index in S where substring sub is found,
such that sub is contained within S[start:end]. Optional
arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
rjust(width, fillchar=' ', /)¶
Return a right-justified string of length width.
Padding is done using the specified fill character (default is a space).
rpartition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string, starting at the end. If
the separator is found, returns a 3-tuple containing the part before the
separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing two empty strings
and the original string.
rsplit(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
sep – The separator used to split the string.
When set to None (the default value), will split on any whitespace
character (including \n \r \t \f and spaces) and will discard
empty strings from the result.
maxsplit – Maximum number of splits (starting from the left).
-1 (the default value) means no limit.
Splitting starts at the end of the string and works to the front.
rstrip(chars=None, /)¶
Return a copy of the string with trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
split(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
sep – The separator used to split the string.
When set to None (the default value), will split on any whitespace
character (including \n \r \t \f and spaces) and will discard
empty strings from the result.
maxsplit – Maximum number of splits (starting from the left).
-1 (the default value) means no limit.
Note, str.split() is mainly useful for data that has been intentionally
delimited. With natural text that includes punctuation, consider using
the regular expression module.
splitlines(keepends=False)¶
Return a list of the lines in the string, breaking at line boundaries.
Line breaks are not included in the resulting list unless keepends is given and
true.
startswith(prefix[, start[, end]]) → bool¶
Return True if S starts with the specified prefix, False otherwise.
With optional start, test S beginning at that position.
With optional end, stop comparing S at that position.
prefix can also be a tuple of strings to try.
strip(chars=None, /)¶
Return a copy of the string with leading and trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
swapcase()¶
Convert uppercase characters to lowercase and lowercase characters to uppercase.
title()¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html |
Return a version of the string where each word is titlecased.
More specifically, words start with uppercased characters and all remaining
cased characters have lower case.
translate(table, /)¶
Replace each character in the string using the given translation table.
table – Translation table, which must be a mapping of Unicode ordinals to
Unicode ordinals, strings, or None.
The table must implement lookup/indexing via __getitem__, for instance a
dictionary or list. If this operation raises LookupError, the character is
left untouched. Characters mapped to None are deleted.
upper()¶
Return a copy of the string converted to uppercase.
zfill(width, /)¶
Pad a numeric string with zeros on the left, to fill a field of the given width.
The string is never truncated.
STORAGE = 'body.storage'¶
VIEW = 'body.view'¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html |
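For illustration, a small sketch of the enum values; because ContentFormat subclasses str, members compare equal to their string values:
from langchain.document_loaders.confluence import ContentFormat
fmt = ContentFormat.VIEW
print(fmt.value)           # 'body.view'
print(fmt == "body.view")  # True, since the member is itself a str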
66b087532d38-0 | langchain.document_loaders.hn.HNLoader¶
class langchain.document_loaders.hn.HNLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None)[source]¶
Bases: WebBaseLoader
Load Hacker News data from either main page results or the comments page.
Initialize with webpage path.
Methods
__init__(web_path[, header_template, ...])
Initialize with webpage path.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Get important HN webpage information.
load_and_split([text_splitter])
Load Documents and split into chunks.
load_comments(soup_info)
Load comments from a HN post.
load_results(soup)
Load items from an HN page.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hn.HNLoader.html |
load() → List[Document][source]¶
Get important HN webpage information.
HN webpage components are:
title
content
source url,
time of post
author of the post
number of comments
rank of the post
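For illustration, a minimal usage sketch (the item URL is a placeholder):
from langchain.document_loaders.hn import HNLoader
loader = HNLoader("https://news.ycombinator.com/item?id=34817881")
docs = loader.load()
print(docs[0].metadata)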
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
load_comments(soup_info: Any) → List[Document][source]¶
Load comments from a HN post.
load_results(soup: Any) → List[Document][source]¶
Load items from an HN page.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beautifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
Examples using HNLoader¶
Hacker News | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hn.HNLoader.html |
2ea39afc0cfe-0 | langchain.document_loaders.figma.FigmaFileLoader¶
class langchain.document_loaders.figma.FigmaFileLoader(access_token: str, ids: str, key: str)[source]¶
Bases: BaseLoader
Loads Figma file json.
Initialize with access token, ids, and key.
Parameters
access_token – The access token for the Figma REST API.
ids – The ids of the Figma file.
key – The key for the Figma file
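For illustration, a minimal usage sketch (token, node ids, and file key are placeholders):
from langchain.document_loaders.figma import FigmaFileLoader
loader = FigmaFileLoader(
    access_token="<figma-access-token>",
    ids="<comma-separated-node-ids>",
    key="<figma-file-key>",
)
docs = loader.load()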
Methods
__init__(access_token, ids, key)
Initialize with access token, ids, and key.
lazy_load()
A lazy loader for Documents.
load()
Load file
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using FigmaFileLoader¶
Figma | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.figma.FigmaFileLoader.html |
8644a61e14eb-0 | langchain.document_loaders.rtf.UnstructuredRTFLoader¶
class langchain.document_loaders.rtf.UnstructuredRTFLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load RTF files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredRTFLoader
loader = UnstructuredRTFLoader("example.rtf", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-rtf
Initialize with a file path.
Parameters
file_path – The path to the file to load.
mode – The mode to use for partitioning. See unstructured for details.
Defaults to “single”.
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
Methods
__init__(file_path[, mode])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html |
8644a61e14eb-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html |
4327698e3954-0 | langchain.document_loaders.datadog_logs.DatadogLogsLoader¶
class langchain.document_loaders.datadog_logs.DatadogLogsLoader(query: str, api_key: str, app_key: str, from_time: Optional[int] = None, to_time: Optional[int] = None, limit: int = 100)[source]¶
Bases: BaseLoader
Loads a query result from Datadog into a list of documents.
Logs are written into the page_content and into the metadata.
Initialize Datadog document loader.
Requirements:
Must have datadog_api_client installed. Install with pip install datadog_api_client.
Parameters
query – The query to run in Datadog.
api_key – The Datadog API key.
app_key – The Datadog APP key.
from_time – Optional. The start of the time range to query.
Supports date math and regular timestamps (milliseconds) like ‘1688732708951’
Defaults to 20 minutes ago.
to_time – Optional. The end of the time range to query.
Supports date math and regular timestamps (milliseconds) like ‘1688732708951’
Defaults to now.
limit – The maximum number of logs to return.
Defaults to 100.
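For illustration, a minimal usage sketch (keys and query are placeholders; the timestamps are epoch milliseconds):
from langchain.document_loaders.datadog_logs import DatadogLogsLoader
loader = DatadogLogsLoader(
    query="service:agent status:error",
    api_key="<DATADOG_API_KEY>",
    app_key="<DATADOG_APP_KEY>",
    from_time=1688732708951,
    limit=100,
)
docs = loader.load()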
Methods
__init__(query, api_key, app_key[, ...])
Initialize Datadog document loader.
lazy_load()
A lazy loader for Documents.
load()
Get logs from Datadog.
load_and_split([text_splitter])
Load Documents and split into chunks.
parse_log(log)
Create Document objects from Datadog log items.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Get logs from Datadog.
Returns | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.datadog_logs.DatadogLogsLoader.html |
A list of Document objects.
page_content
metadata
id
service
status
tags
timestamp
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
parse_log(log: dict) → Document[source]¶
Create Document objects from Datadog log items.
Examples using DatadogLogsLoader¶
Datadog Logs | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.datadog_logs.DatadogLogsLoader.html |
2f72b2fab2dc-0 | langchain.document_loaders.onedrive.OneDriveLoader¶
class langchain.document_loaders.onedrive.OneDriveLoader(*, settings: _OneDriveSettings = None, drive_id: str, folder_path: Optional[str] = None, object_ids: Optional[List[str]] = None, auth_with_token: bool = False)[source]¶
Bases: BaseLoader, BaseModel
Loads data from OneDrive.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param auth_with_token: bool = False¶
Whether to authenticate with a token or not. Defaults to False.
param drive_id: str [Required]¶
The ID of the OneDrive drive to load data from.
param folder_path: Optional[str] = None¶
The path to the folder to load data from.
param object_ids: Optional[List[str]] = None¶
The IDs of the objects to load data from.
param settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional]¶
The settings for the OneDrive API client.
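For illustration, a minimal usage sketch (the drive id and folder path are placeholders; the O365 credential settings are resolved separately by the loader's settings object):
from langchain.document_loaders.onedrive import OneDriveLoader
loader = OneDriveLoader(drive_id="<drive-id>", folder_path="Documents", auth_with_token=True)
docs = loader.load()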
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Loads all supported document files from the specified OneDrive drive
and return a list of Document objects.
Returns
A list of Document objects
representing the loaded documents.
Return type
List[Document]
Raises
ValueError – If the specified drive ID
does not correspond to a drive in the OneDrive storage.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive.OneDriveLoader.html |
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using OneDriveLoader¶
Microsoft OneDrive | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive.OneDriveLoader.html |
0017f2dcb01c-0 | langchain.document_loaders.srt.SRTLoader¶
class langchain.document_loaders.srt.SRTLoader(file_path: str)[source]¶
Bases: BaseLoader
Loader for .srt (subtitle) files.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load using pysrt file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load using pysrt file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SRTLoader¶
Subtitle | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.srt.SRTLoader.html |
4ed8ee54d8f0-0 | langchain.document_loaders.git.GitLoader¶
class langchain.document_loaders.git.GitLoader(repo_path: str, clone_url: Optional[str] = None, branch: Optional[str] = 'main', file_filter: Optional[Callable[[str], bool]] = None)[source]¶
Bases: BaseLoader
Loads files from a Git repository into a list of documents.
The Repository can be local on disk available at repo_path,
or remote at clone_url that will be cloned to repo_path.
Currently supports only text files.
Each document represents one file in the repository. The path points to
the local Git repository, and the branch specifies the branch to load
files from. By default, it loads from the main branch.
Parameters
repo_path – The path to the Git repository.
clone_url – Optional. The URL to clone the repository from.
branch – Optional. The branch to load files from. Defaults to main.
file_filter – Optional. A function that takes a file path and returns
a boolean indicating whether to load the file. Defaults to None.
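For illustration, a minimal sketch that clones a repository and keeps only Python files (the URL, path, and branch are placeholders):
from langchain.document_loaders.git import GitLoader
loader = GitLoader(
    repo_path="./example_repo",
    clone_url="https://github.com/hwchase17/langchain",
    branch="master",
    file_filter=lambda file_path: file_path.endswith(".py"),
)
docs = loader.load()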
Methods
__init__(repo_path[, clone_url, branch, ...])
param repo_path
The path to the Git repository.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.git.GitLoader.html |
List of Documents.
Examples using GitLoader¶
Git | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.git.GitLoader.html |
b7e652fb0e0e-0 | langchain.document_loaders.unstructured.UnstructuredFileIOLoader¶
class langchain.document_loaders.unstructured.UnstructuredFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredBaseLoader
Loader that uses Unstructured to load files.
The file loader
uses the unstructured partition function and will automatically detect the file
type. You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredFileIOLoader
with open("example.pdf", "rb") as f:
    loader = UnstructuredFileIOLoader(f, mode="elements", strategy="fast")
    docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
Initialize with file path.
Methods
__init__(file[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileIOLoader.html |
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredFileIOLoader¶
Google Drive | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileIOLoader.html |
e5f65089d8cc-0 | langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload¶
class langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload[source]¶
Bases: EmbaasDocumentExtractionParameters
Payload for the Embaas document extraction API.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
bytes
The base64 encoded bytes of the document to extract text from.
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload.html |
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
bytes: str¶
The base64 encoded bytes of the document to extract text from.
chunk_overlap: int¶
chunk_size: int¶
chunk_splitter: str¶
file_extension: str¶
file_name: str¶
instruction: str¶
mime_type: str¶
model: str¶
separators: List[str]¶
should_chunk: bool¶
should_embed: bool¶
langchain.document_loaders.pdf.PyPDFDirectoryLoader¶
class langchain.document_loaders.pdf.PyPDFDirectoryLoader(path: str, glob: str = '**/[!.]*.pdf', silent_errors: bool = False, load_hidden: bool = False, recursive: bool = False)[source]¶
Bases: BaseLoader
Loads a directory of PDF files with pypdf and chunks them at the character level.
The loader also stores page numbers in the metadata.
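Example
A minimal sketch (not part of the original docstring), assuming a hypothetical local directory of PDFs at data/reports/ and the pypdf package installed:
from langchain.document_loaders import PyPDFDirectoryLoader

loader = PyPDFDirectoryLoader("data/reports/", glob="**/[!.]*.pdf")
docs = loader.load()
# Each Document carries the source file path and page number in its metadata.
print(docs[0].metadata)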
Methods
__init__(path[, glob, silent_errors, ...])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
langchain.document_loaders.base.BaseBlobParser¶
class langchain.document_loaders.base.BaseBlobParser[source]¶
Bases: ABC
Abstract interface for blob parsers.
A blob parser provides a way to parse raw data stored in a blob into one
or more documents.
The parser can be composed with blob loaders, making it easy to re-use
a parser independent of how the blob was originally loaded.
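Example
A minimal sketch (not part of the original docstring) of a custom parser; the class name is hypothetical, and it assumes the Blob class exposes as_string() and source as in this version of LangChain:
from typing import Iterator

from langchain.document_loaders.base import BaseBlobParser
from langchain.document_loaders.blob_loaders import Blob
from langchain.schema import Document

class PlainTextParser(BaseBlobParser):
    """Hypothetical parser that turns each blob into a single Document."""

    def lazy_parse(self, blob: Blob) -> Iterator[Document]:
        # Decode the raw bytes and keep the blob's origin as metadata.
        yield Document(
            page_content=blob.as_string(),
            metadata={"source": blob.source},
        )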
Methods
__init__()
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
abstract lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document][source]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
langchain.document_loaders.url.UnstructuredURLLoader¶
class langchain.document_loaders.url.UnstructuredURLLoader(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', show_progress_bar: bool = False, **unstructured_kwargs: Any)[source]¶
Bases: BaseLoader
Loader that uses Unstructured to load files from remote URLs.
Use the unstructured partition function to detect the MIME type
and route the file to the appropriate partitioner.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredURLLoader
loader = UnstructuredURLLoader(urls=["<url-1>", "<url-2>"], mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
Initialize with file path.
Methods
__init__(urls[, continue_on_failure, mode, ...])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredURLLoader¶
URL
langchain.document_loaders.youtube.YoutubeLoader¶
class langchain.document_loaders.youtube.YoutubeLoader(video_id: str, add_video_info: bool = False, language: Union[str, Sequence[str]] = 'en', translation: str = 'en', continue_on_failure: bool = False)[source]¶
Bases: BaseLoader
Loads YouTube transcripts.
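Example
A minimal sketch (not part of the original docstring); the video URL is a placeholder, and the youtube-transcript-api package is required (pytube as well if add_video_info=True):
from langchain.document_loaders import YoutubeLoader

loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=<video-id>",
    add_video_info=False,
    language="en",
)
docs = loader.load()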
Initialize with YouTube video ID.
Methods
__init__(video_id[, add_video_info, ...])
Initialize with YouTube video ID.
extract_video_id(youtube_url)
Extract the video ID from common YouTube URLs.
from_youtube_url(youtube_url, **kwargs)
Given youtube URL, load video.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
static extract_video_id(youtube_url: str) → str[source]¶
Extract the video ID from common YouTube URLs.
classmethod from_youtube_url(youtube_url: str, **kwargs: Any) → YoutubeLoader[source]¶
Given youtube URL, load video.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using YoutubeLoader¶
YouTube
YouTube transcripts
langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader¶
class langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileIOLoader
Loader that uses the Unstructured API to load files.
By default, the loader makes a call to the hosted Unstructured API.
If you are running the unstructured API locally, you can change the
API URL by passing in the url parameter when you initialize the loader.
The hosted Unstructured API requires an API key. See
https://www.unstructured.io/api-key/ if you need to generate a key.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredAPIFileIOLoader
with open("example.pdf", "rb") as f:
    loader = UnstructuredAPIFileIOLoader(f, mode="elements", strategy="fast", api_key="MY_API_KEY")
    docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
https://www.unstructured.io/api-key/
https://github.com/Unstructured-IO/unstructured-api
Initialize with file path.
Methods
__init__(file[, mode, url, api_key])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
langchain.document_loaders.max_compute.MaxComputeLoader¶
class langchain.document_loaders.max_compute.MaxComputeLoader(query: str, api_wrapper: MaxComputeAPIWrapper, *, page_content_columns: Optional[Sequence[str]] = None, metadata_columns: Optional[Sequence[str]] = None)[source]¶
Bases: BaseLoader
Loads a query result from an Alibaba Cloud MaxCompute table into documents.
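Example
A minimal sketch (not part of the original docstring); the query, endpoint, project, and column names are hypothetical, and the pyodps package plus valid credentials (passed directly or via the environment variables described under Parameters) are required:
from langchain.document_loaders import MaxComputeLoader

loader = MaxComputeLoader.from_params(
    query="SELECT id, title, body FROM articles LIMIT 10",
    endpoint="<maxcompute-endpoint>",
    project="<project-name>",
    page_content_columns=["body"],      # written into page_content
    metadata_columns=["id", "title"],   # written into metadata
)
docs = loader.load()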
Initialize Alibaba Cloud MaxCompute document loader.
Parameters
query – SQL query to execute.
api_wrapper – MaxCompute API wrapper.
page_content_columns – The columns to write into the page_content of the
Document. If unspecified, all columns will be written to page_content.
metadata_columns – The columns to write into the metadata of the Document.
If unspecified, all columns not added to page_content will be written.
Methods
__init__(query, api_wrapper, *[, ...])
Initialize Alibaba Cloud MaxCompute document loader.
from_params(query, endpoint, project, *[, ...])
Convenience constructor that builds the MaxCompute API wrapper from given parameters.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
classmethod from_params(query: str, endpoint: str, project: str, *, access_id: Optional[str] = None, secret_access_key: Optional[str] = None, **kwargs: Any) → MaxComputeLoader[source]¶
Convenience constructor that builds the MaxCompute API wrapper from given parameters.
Parameters
query – SQL query to execute.
endpoint – MaxCompute endpoint.
project – A project is a basic organizational unit of MaxCompute, which is
similar to a database.
access_id – MaxCompute access ID. Should be passed in directly or set as the
environment variable MAX_COMPUTE_ACCESS_ID.