id | text | source
---|---|---|
daa986cf684e-1
|
environment variable MAX_COMPUTE_ACCESS_ID.
secret_access_key – MaxCompute secret access key. Should be passed in
directly or set as the environment variable
MAX_COMPUTE_SECRET_ACCESS_KEY.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MaxComputeLoader¶
Alibaba Cloud MaxCompute
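For illustration, a minimal sketch of supplying the credentials described above; the from_params constructor and the query/endpoint/project values shown here are assumptions, not taken from this page:
import os
from langchain.document_loaders import MaxComputeLoader

# Credentials can be passed directly or via these environment variables (see above).
os.environ["MAX_COMPUTE_ACCESS_ID"] = "<access-id>"            # placeholder
os.environ["MAX_COMPUTE_SECRET_ACCESS_KEY"] = "<secret-key>"   # placeholder

# from_params(query, endpoint, project) is assumed here; values are hypothetical.
loader = MaxComputeLoader.from_params(
    "SELECT * FROM my_table LIMIT 10",
    "<maxcompute-endpoint>",
    "<project-name>",
)
docs = loader.load()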
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.max_compute.MaxComputeLoader.html
|
38c8f3d5460e-0
|
langchain.document_loaders.gutenberg.GutenbergLoader¶
class langchain.document_loaders.gutenberg.GutenbergLoader(file_path: str)[source]¶
Bases: BaseLoader
Loader that uses urllib to load .txt web files.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GutenbergLoader¶
Gutenberg
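A minimal usage sketch; the URL below is a placeholder for any plain-text (.txt) Project Gutenberg file:
from langchain.document_loaders import GutenbergLoader

# GutenbergLoader takes the URL of a .txt web file.
loader = GutenbergLoader("https://www.gutenberg.org/files/2591/2591-0.txt")  # placeholder URL
docs = loader.load()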
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gutenberg.GutenbergLoader.html
|
d57dc4121a11-0
|
langchain.document_loaders.psychic.PsychicLoader¶
class langchain.document_loaders.psychic.PsychicLoader(api_key: str, account_id: str, connector_id: Optional[str] = None)[source]¶
Bases: BaseLoader
Loads documents from Psychic.dev.
Initialize with API key, connector id, and account id.
Parameters
api_key – The Psychic API key.
account_id – The Psychic account id.
connector_id – The Psychic connector id.
Methods
__init__(api_key, account_id[, connector_id])
Initialize with API key, connector id, and account id.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using PsychicLoader¶
Psychic
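A minimal sketch based on the constructor above; the key and ids are placeholders:
from langchain.document_loaders import PsychicLoader

loader = PsychicLoader(
    api_key="<psychic-api-key>",      # placeholder
    account_id="<account-id>",        # placeholder
    connector_id="<connector-id>",    # optional; placeholder
)
docs = loader.load()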
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.psychic.PsychicLoader.html
|
400c115c6f25-0
|
langchain.document_loaders.larksuite.LarkSuiteDocLoader¶
class langchain.document_loaders.larksuite.LarkSuiteDocLoader(domain: str, access_token: str, document_id: str)[source]¶
Bases: BaseLoader
Loads LarkSuite (FeiShu) document.
Initialize with domain, access_token (tenant / user), and document_id.
Parameters
domain – The domain to load the LarkSuite.
access_token – The access_token to use.
document_id – The document_id to load.
Methods
__init__(domain, access_token, document_id)
Initialize with domain, access_token (tenant / user), and document_id.
lazy_load()
Lazy load LarkSuite (FeiShu) document.
load()
Load LarkSuite (FeiShu) document.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazy load LarkSuite (FeiShu) document.
load() → List[Document][source]¶
Load LarkSuite (FeiShu) document.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using LarkSuiteDocLoader¶
LarkSuite (FeiShu)
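A minimal sketch based on the parameters above; the domain, token, and document id are placeholders:
from langchain.document_loaders import LarkSuiteDocLoader

loader = LarkSuiteDocLoader(
    domain="https://open.larksuite.com",       # placeholder domain
    access_token="<tenant-or-user-token>",     # placeholder
    document_id="<document-id>",               # placeholder
)
docs = loader.load()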
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.larksuite.LarkSuiteDocLoader.html
|
beb5feb4fb9e-0
|
langchain.document_loaders.parsers.pdf.PyMuPDFParser¶
class langchain.document_loaders.parsers.pdf.PyMuPDFParser(text_kwargs: Optional[Mapping[str, Any]] = None)[source]¶
Bases: BaseBlobParser
Parse PDFs with PyMuPDF.
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to fitz.Page.get_text().
Methods
__init__([text_kwargs])
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
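A minimal parsing sketch, assuming PyMuPDF is installed; the file path is a placeholder and the sort text kwarg is only illustrative of what fitz.Page.get_text() accepts:
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PyMuPDFParser

parser = PyMuPDFParser(text_kwargs={"sort": True})  # forwarded to fitz.Page.get_text()
blob = Blob.from_path("example.pdf")                 # placeholder local file
docs = list(parser.lazy_parse(blob))                 # one Document per page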
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyMuPDFParser.html
|
fa06ec6f3b61-0
|
langchain.document_loaders.pdf.MathpixPDFLoader¶
class langchain.document_loaders.pdf.MathpixPDFLoader(file_path: str, processed_file_format: str = 'mmd', max_wait_time_seconds: int = 500, should_clean_pdf: bool = False, **kwargs: Any)[source]¶
Bases: BasePDFLoader
This class uses the Mathpix service to load PDF files.
Initialize with a file path.
Parameters
file_path – a file for loading.
processed_file_format – a format of the processed file. Default is “mmd”.
max_wait_time_seconds – a maximum time to wait for the response from
the server. Default is 500.
should_clean_pdf – a flag to clean the PDF file. Default is False.
**kwargs – additional keyword arguments.
Methods
__init__(file_path[, processed_file_format, ...])
Initialize with a file path.
clean_pdf(contents)
Clean the PDF file.
get_processed_pdf(pdf_id)
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
send_pdf()
wait_for_processing(pdf_id)
Wait for processing to complete.
Attributes
data
headers
source
url
clean_pdf(contents: str) → str[source]¶
Clean the PDF file.
Parameters
contents – the PDF file contents.
Returns
the cleaned contents.
get_processed_pdf(pdf_id: str) → str[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.MathpixPDFLoader.html
|
fa06ec6f3b61-1
|
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
send_pdf() → str[source]¶
wait_for_processing(pdf_id: str) → None[source]¶
Wait for processing to complete.
Parameters
pdf_id – a PDF id.
Returns: None
property data: dict¶
property headers: dict¶
property source: str¶
property url: str¶
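A minimal usage sketch; the file path is a placeholder, and it is assumed here that Mathpix credentials are supplied via keyword arguments or environment variables as the service requires (not spelled out on this page):
from langchain.document_loaders import MathpixPDFLoader

loader = MathpixPDFLoader(
    "example.pdf",                  # placeholder local PDF
    processed_file_format="mmd",    # default output format
    should_clean_pdf=True,
)
docs = loader.load()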
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.MathpixPDFLoader.html
|
6cec8286fa29-0
|
langchain.document_loaders.base.BaseLoader¶
class langchain.document_loaders.base.BaseLoader[source]¶
Bases: ABC
Interface for loading Documents.
Implementations should implement the lazy-loading method using generators
to avoid loading all Documents into memory at once.
The load method will remain as is for backwards compatibility, but its
implementation should be just list(self.lazy_load()).
Methods
__init__()
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
abstract load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
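As the notes above suggest, a custom loader mainly needs a generator-based lazy_load(), with load() implemented as list(self.lazy_load()). A minimal illustrative subclass (all names are hypothetical):
from typing import Iterator, List

from langchain.document_loaders.base import BaseLoader
from langchain.schema import Document

class InMemoryLoader(BaseLoader):
    """Hypothetical loader that wraps a list of strings."""

    def __init__(self, texts: List[str]):
        self.texts = texts

    def lazy_load(self) -> Iterator[Document]:
        # Yield one Document at a time to avoid holding everything in memory.
        for text in self.texts:
            yield Document(page_content=text)

    def load(self) -> List[Document]:
        # Kept for backwards compatibility, as described above.
        return list(self.lazy_load())

docs = InMemoryLoader(["hello", "world"]).load()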
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseLoader.html
|
fadf06db1b86-0
|
langchain.document_loaders.bibtex.BibtexLoader¶
class langchain.document_loaders.bibtex.BibtexLoader(file_path: str, *, parser: Optional[BibtexparserWrapper] = None, max_docs: Optional[int] = None, max_content_chars: Optional[int] = 4000, load_extra_metadata: bool = False, file_pattern: str = '[^:]+\\.pdf')[source]¶
Bases: BaseLoader
Loads a bibtex file into a list of Documents.
Each document represents one entry from the bibtex file.
If a PDF file is present in the file bibtex field, the original PDF
is loaded into the document text. If no such file entry is present,
the abstract field is used instead.
Initialize the BibtexLoader.
Parameters
file_path – Path to the bibtex file.
parser – The parser to use. If None, a default parser is used.
max_docs – Max number of associated documents to load. Use -1 for no limit.
max_content_chars – Maximum number of characters to load from the PDF.
load_extra_metadata – Whether to load extra metadata from the PDF.
file_pattern – Regex pattern to match the file name in the bibtex.
Methods
__init__(file_path, *[, parser, max_docs, ...])
Initialize the BibtexLoader.
lazy_load()
Load bibtex file using bibtexparser and get the article texts plus the article metadata.
load()
Load bibtex file documents from the given bibtex file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Load bibtex file using bibtexparser and get the article texts plus the
article metadata.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bibtex.BibtexLoader.html
|
fadf06db1b86-1
|
See https://bibtexparser.readthedocs.io/en/master/
Returns
a list of documents with the document.page_content in text format
load() → List[Document][source]¶
Load bibtex file documents from the given bibtex file path.
See https://bibtexparser.readthedocs.io/en/master/
Parameters
file_path – the path to the bibtex file
Returns
a list of documents with the document.page_content in text format
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BibtexLoader¶
BibTeX
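A minimal usage sketch; the .bib path is a placeholder and bibtexparser must be installed:
from langchain.document_loaders import BibtexLoader

loader = BibtexLoader("references.bib", max_docs=5)  # placeholder .bib file
docs = loader.load()
print(docs[0].metadata)  # entry metadata parsed from the bibtex fields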
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bibtex.BibtexLoader.html
|
1ccc78609dfb-0
|
langchain.document_loaders.helpers.detect_file_encodings¶
langchain.document_loaders.helpers.detect_file_encodings(file_path: str, timeout: int = 5) → List[FileEncoding][source]¶
Try to detect the file encoding.
Returns a list of FileEncoding tuples with the detected encodings ordered
by confidence.
Parameters
file_path – The path to the file to detect the encoding for.
timeout – The timeout in seconds for the encoding detection.
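A minimal usage sketch; the path is a placeholder and chardet must be installed:
from langchain.document_loaders.helpers import detect_file_encodings

encodings = detect_file_encodings("some_file.txt", timeout=5)  # placeholder path
for enc in encodings:  # ordered by confidence, as described above
    print(enc.encoding, enc.confidence, enc.language)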
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.helpers.detect_file_encodings.html
|
4240fc200845-0
|
langchain.document_loaders.arxiv.ArxivLoader¶
class langchain.document_loaders.arxiv.ArxivLoader(query: str, load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]¶
Bases: BaseLoader
Loads a query result from arxiv.org into a list of Documents.
Each query result is returned as one Document.
The loader converts the original PDF into text.
Methods
__init__(query[, load_max_docs, ...])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
query
The query to be passed to the arxiv.org API.
load_max_docs
The maximum number of documents to load.
load_all_available_meta
Whether to load all available metadata.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
load_all_available_meta¶
Whether to load all available metadata.
load_max_docs¶
The maximum number of documents to load.
query¶
The query to be passed to the arxiv.org API.
Examples using ArxivLoader¶
Arxiv
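A minimal usage sketch; the query is illustrative and the arxiv package must be installed:
from langchain.document_loaders import ArxivLoader

loader = ArxivLoader(query="quantum computing", load_max_docs=2)  # illustrative query
docs = loader.load()
print(docs[0].metadata)  # per-paper metadata (title, authors, ...)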
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.arxiv.ArxivLoader.html
|
44642d94b52e-0
|
langchain.document_loaders.chatgpt.ChatGPTLoader¶
class langchain.document_loaders.chatgpt.ChatGPTLoader(log_file: str, num_logs: int = -1)[source]¶
Bases: BaseLoader
Load conversations from exported ChatGPT data.
Initialize a class object.
Parameters
log_file – Path to the log file
num_logs – Number of logs to load. If 0, load all logs.
Methods
__init__(log_file[, num_logs])
Initialize a class object.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ChatGPTLoader¶
OpenAI
ChatGPT Data
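A minimal usage sketch; the path to the exported conversations file is a placeholder:
from langchain.document_loaders import ChatGPTLoader

loader = ChatGPTLoader(log_file="conversations.json", num_logs=1)  # placeholder export file
docs = loader.load()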
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chatgpt.ChatGPTLoader.html
|
04e3eaf25037-0
|
langchain.document_loaders.xml.UnstructuredXMLLoader¶
class langchain.document_loaders.xml.UnstructuredXMLLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load XML files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredXMLLoader
loader = UnstructuredXMLLoader(
    "example.xml", mode="elements", strategy="fast",
)
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-xml
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredXMLLoader¶
XML
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.xml.UnstructuredXMLLoader.html
|
5eb4c5ae9858-0
|
langchain.document_loaders.diffbot.DiffbotLoader¶
class langchain.document_loaders.diffbot.DiffbotLoader(api_token: str, urls: List[str], continue_on_failure: bool = True)[source]¶
Bases: BaseLoader
Loads Diffbot JSON output for a list of URLs.
Initialize with the Diffbot API token and the list of URLs to load.
Parameters
api_token – Diffbot API token.
urls – List of URLs to load.
continue_on_failure – Whether to continue loading other URLs if one fails.
Defaults to True.
Methods
__init__(api_token, urls[, continue_on_failure])
Initialize with the Diffbot API token and the list of URLs to load.
lazy_load()
A lazy loader for Documents.
load()
Extract text from Diffbot on all the URLs and return Documents
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Extract text from Diffbot on all the URLs and return Documents
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using DiffbotLoader¶
Diffbot
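A minimal usage sketch; the token and URL are placeholders:
from langchain.document_loaders import DiffbotLoader

loader = DiffbotLoader(
    api_token="<diffbot-api-token>",              # placeholder
    urls=["https://www.example.com/article"],     # placeholder URL
)
docs = loader.load()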
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.diffbot.DiffbotLoader.html
|
aa10a4580668-0
|
langchain.document_loaders.helpers.FileEncoding¶
class langchain.document_loaders.helpers.FileEncoding(encoding: Optional[str], confidence: float, language: Optional[str])[source]¶
Bases: NamedTuple
A file encoding as a NamedTuple.
Create new instance of FileEncoding(encoding, confidence, language)
Methods
__init__()
count(value, /)
Return number of occurrences of value.
index(value[, start, stop])
Return first index of value.
Attributes
confidence
The confidence of the encoding.
encoding
The encoding of the file.
language
The language of the file.
count(value, /)¶
Return number of occurrences of value.
index(value, start=0, stop=9223372036854775807, /)¶
Return first index of value.
Raises ValueError if the value is not present.
confidence: float¶
The confidence of the encoding.
encoding: Optional[str]¶
The encoding of the file.
language: Optional[str]¶
The language of the file.
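Since FileEncoding is a NamedTuple, instances can be created and read by field name; the values below are illustrative:
from langchain.document_loaders.helpers import FileEncoding

fe = FileEncoding(encoding="utf-8", confidence=0.99, language="English")  # illustrative values
print(fe.encoding, fe.confidence, fe.language)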
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.helpers.FileEncoding.html
|
51b7243b878a-0
|
langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader¶
class langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader(conn_str: str, container: str, blob_name: str)[source]¶
Bases: BaseLoader
Loads documents from Azure Blob Storage.
Initialize with connection string, container and blob name.
Methods
__init__(conn_str, container, blob_name)
Initialize with connection string, container and blob name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
conn_str
Connection string for Azure Blob Storage.
container
Container name.
blob
Blob name.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
blob¶
Blob name.
conn_str¶
Connection string for Azure Blob Storage.
container¶
Container name.
Examples using AzureBlobStorageFileLoader¶
Azure Blob Storage
Azure Blob Storage File
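A minimal usage sketch; the connection string, container, and blob name are placeholders:
from langchain.document_loaders import AzureBlobStorageFileLoader

loader = AzureBlobStorageFileLoader(
    conn_str="<azure-connection-string>",  # placeholder
    container="<container-name>",          # placeholder
    blob_name="<blob-name>",               # placeholder
)
docs = loader.load()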
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader.html
|
87c4af31bbac-0
|
langchain.document_loaders.pdf.OnlinePDFLoader¶
class langchain.document_loaders.pdf.OnlinePDFLoader(file_path: str)[source]¶
Bases: BasePDFLoader
Loads online PDFs.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
property source: str¶
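A minimal usage sketch; the URL is a placeholder:
from langchain.document_loaders import OnlinePDFLoader

loader = OnlinePDFLoader("https://example.com/paper.pdf")  # placeholder URL
docs = loader.load()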
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.OnlinePDFLoader.html
|
7bc56e6debaf-0
|
langchain.document_loaders.cube_semantic.CubeSemanticLoader¶
class langchain.document_loaders.cube_semantic.CubeSemanticLoader(cube_api_url: str, cube_api_token: str, load_dimension_values: bool = True, dimension_values_limit: int = 10000, dimension_values_max_retries: int = 10, dimension_values_retry_delay: int = 3)[source]¶
Bases: BaseLoader
Load Cube semantic layer metadata.
Parameters
cube_api_url – REST API endpoint.
Use the REST API of your Cube’s deployment.
Please find out more information here:
https://cube.dev/docs/http-api/rest#configuration-base-path
cube_api_token – Cube API token.
Authentication tokens are generated based on your Cube’s API secret.
Please find out more information here:
https://cube.dev/docs/security#generating-json-web-tokens-jwt
load_dimension_values – Whether to load dimension values for every string
dimension or not.
dimension_values_limit – Maximum number of dimension values to load.
dimension_values_max_retries – Maximum number of retries to load dimension
values.
dimension_values_retry_delay – Delay between retries to load dimension values.
Methods
__init__(cube_api_url, cube_api_token[, ...])
lazy_load()
A lazy loader for Documents.
load()
Makes a call to Cube's REST API metadata endpoint.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Makes a call to Cube’s REST API metadata endpoint.
Returns
A list of documents where page_content is column_title + column_description and
metadata contains: table_name, column_name, column_data_type, column_member_type,
column_title, column_description, column_values.
Return type
List[Document]
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.cube_semantic.CubeSemanticLoader.html
|
7bc56e6debaf-1
|
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using CubeSemanticLoader¶
Cube Semantic Layer
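A minimal usage sketch; the REST endpoint and JWT token are placeholders (see the Cube links above for how to obtain them):
from langchain.document_loaders import CubeSemanticLoader

loader = CubeSemanticLoader(
    cube_api_url="https://<deployment>.cubecloud.dev/cubejs-api/v1/meta",  # placeholder endpoint
    cube_api_token="<jwt-token>",                                          # placeholder token
    load_dimension_values=False,
)
docs = loader.load()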
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.cube_semantic.CubeSemanticLoader.html
|
20b2398bdb52-0
|
langchain.document_loaders.parsers.pdf.PyPDFParser¶
class langchain.document_loaders.parsers.pdf.PyPDFParser(password: Optional[Union[str, bytes]] = None)[source]¶
Bases: BaseBlobParser
Loads a PDF with pypdf and chunks at character level.
Methods
__init__([password])
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
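A minimal parsing sketch, assuming pypdf is installed; the file path is a placeholder:
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PyPDFParser

parser = PyPDFParser(password=None)
docs = parser.parse(Blob.from_path("example.pdf"))  # placeholder local file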
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFParser.html
|
be4c87e56b27-0
|
langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters¶
class langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters[source]¶
Bases: TypedDict
Parameters for the embaas document extraction API.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
mime_type
The mime type of the document.
file_extension
The file extension of the document.
file_name
The file name of the document.
should_chunk
Whether to chunk the document into pages.
chunk_size
The maximum size of the text chunks.
chunk_overlap
The maximum overlap allowed between chunks.
chunk_splitter
The text splitter class name for creating chunks.
separators
The separators for chunks.
should_embed
Whether to create embeddings for the document in the response.
model
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html
|
be4c87e56b27-1
|
The model to pass to the Embaas document extraction API.
instruction
The instruction to pass to the Embaas document extraction API.
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html
|
be4c87e56b27-2
|
values() → an object providing a view on D's values¶
chunk_overlap: int¶
The maximum overlap allowed between chunks.
chunk_size: int¶
The maximum size of the text chunks.
chunk_splitter: str¶
The text splitter class name for creating chunks.
file_extension: str¶
The file extension of the document.
file_name: str¶
The file name of the document.
instruction: str¶
The instruction to pass to the Embaas document extraction API.
mime_type: str¶
The mime type of the document.
model: str¶
The model to pass to the Embaas document extraction API.
separators: List[str]¶
The separators for chunks.
should_chunk: bool¶
Whether to chunk the document into pages.
should_embed: bool¶
Whether to create embeddings for the document in the response.
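At runtime a TypedDict is an ordinary dict, so a subset of the keys above can be constructed and passed where these parameters are accepted; which keys the embaas API actually requires is not stated on this page:
from langchain.document_loaders.embaas import EmbaasDocumentExtractionParameters

params: EmbaasDocumentExtractionParameters = {
    "mime_type": "application/pdf",  # illustrative values
    "should_chunk": True,
    "chunk_size": 1000,
    "chunk_overlap": 100,
}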
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html
|
89788040dd7e-0
|
langchain.document_loaders.notebook.remove_newlines¶
langchain.document_loaders.notebook.remove_newlines(x: Any) → Any[source]¶
Recursively removes newlines, no matter the data structure they are stored in.
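A small illustration of the recursive behaviour on nested lists of strings (the expected output is an assumption based on the description above):
from langchain.document_loaders.notebook import remove_newlines

nested = ["line one\nline two", ["a\nb", "c"]]
print(remove_newlines(nested))  # expected: ['line oneline two', ['ab', 'c']]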
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.remove_newlines.html
|
c49a4e94f724-0
|
langchain.document_loaders.pdf.BasePDFLoader¶
class langchain.document_loaders.pdf.BasePDFLoader(file_path: str)[source]¶
Bases: BaseLoader, ABC
Base loader class for PDF files.
Defaults to check for local file, but if the file is a web path, it will download it
to a temporary file, use it, then clean up the temporary file after completion
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
abstract load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
property source: str¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.BasePDFLoader.html
|
69f465d537e4-0
|
langchain.document_loaders.brave_search.BraveSearchLoader¶
class langchain.document_loaders.brave_search.BraveSearchLoader(query: str, api_key: str, search_kwargs: Optional[dict] = None)[source]¶
Bases: BaseLoader
Loads a query result from the Brave Search engine into a list of Documents.
Initializes the BraveSearchLoader.
Parameters
query – The query to search for.
api_key – The API key to use.
search_kwargs – The search kwargs to use.
Methods
__init__(query, api_key[, search_kwargs])
Initializes the BraveSearchLoader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BraveSearchLoader¶
Brave Search
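A minimal usage sketch; the API key is a placeholder and the count search kwarg is an assumed Brave Search parameter:
from langchain.document_loaders import BraveSearchLoader

loader = BraveSearchLoader(
    query="langchain document loaders",  # illustrative query
    api_key="<brave-api-key>",           # placeholder
    search_kwargs={"count": 3},          # assumed Brave Search parameter
)
docs = loader.load()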
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.brave_search.BraveSearchLoader.html
|
c4243449bb68-0
|
langchain.document_loaders.discord.DiscordChatLoader¶
class langchain.document_loaders.discord.DiscordChatLoader(chat_log: pd.DataFrame, user_id_col: str = 'ID')[source]¶
Bases: BaseLoader
Load Discord chat logs.
Initialize with a Pandas DataFrame containing chat logs.
Parameters
chat_log – Pandas DataFrame containing chat logs.
user_id_col – Name of the column containing the user ID. Defaults to “ID”.
Methods
__init__(chat_log[, user_id_col])
Initialize with a Pandas DataFrame containing chat logs.
lazy_load()
A lazy loader for Documents.
load()
Load all chat messages.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load all chat messages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using DiscordChatLoader¶
Discord
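A minimal sketch with a toy DataFrame; only the user ID column name is documented above, so the content column here is an assumption:
import pandas as pd

from langchain.document_loaders import DiscordChatLoader

chat_log = pd.DataFrame(
    {"ID": ["user_1", "user_2"], "content": ["hi there", "hello"]}  # toy data; "content" column is assumed
)
loader = DiscordChatLoader(chat_log, user_id_col="ID")
docs = loader.load()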
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.discord.DiscordChatLoader.html
|
d485978611e8-0
|
langchain.document_loaders.rst.UnstructuredRSTLoader¶
class langchain.document_loaders.rst.UnstructuredRSTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load RST files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredRSTLoader
loader = UnstructuredRSTLoader(
    "example.rst", mode="elements", strategy="fast",
)
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-rst
Initialize with a file path.
Parameters
file_path – The path to the file to load.
mode – The mode to use for partitioning. See unstructured for details.
Defaults to “single”.
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
Methods
__init__(file_path[, mode])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html
|
d485978611e8-1
|
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredRSTLoader¶
RST
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html
|
47835b6253e3-0
|
langchain.document_loaders.merge.MergedDataLoader¶
class langchain.document_loaders.merge.MergedDataLoader(loaders: List)[source]¶
Bases: BaseLoader
Merge documents from a list of loaders
Initialize with a list of loaders
Methods
__init__(loaders)
Initialize with a list of loaders
lazy_load()
Lazy load docs from each individual loader.
load()
Load docs.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazy load docs from each individual loader.
load() → List[Document][source]¶
Load docs.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MergedDataLoader¶
MergeDocLoader
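A minimal sketch merging two other loaders; the file path and URL are placeholders:
from langchain.document_loaders import MergedDataLoader, TextLoader, WebBaseLoader

loader_all = MergedDataLoader(
    loaders=[
        TextLoader("notes.txt"),                    # placeholder local file
        WebBaseLoader("https://example.com/page"),  # placeholder URL
    ]
)
docs = loader_all.load()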
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.merge.MergedDataLoader.html
|
7abb9c52cec3-0
|
langchain.document_loaders.acreom.AcreomLoader¶
class langchain.document_loaders.acreom.AcreomLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Bases: BaseLoader
Loader that loads an acreom vault from a directory.
Methods
__init__(path[, encoding, collect_metadata])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
FRONT_MATTER_REGEX
Regex to match front matter metadata in markdown files.
file_path
Path to the directory containing the markdown files.
encoding
Encoding to use when reading the files.
collect_metadata
Whether to collect metadata from the front matter.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)¶
Regex to match front matter metadata in markdown files.
collect_metadata¶
Whether to collect metadata from the front matter.
encoding¶
Encoding to use when reading the files.
file_path¶
Path to the directory containing the markdown files.
Examples using AcreomLoader¶
acreom
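A minimal usage sketch; the vault path is a placeholder:
from langchain.document_loaders import AcreomLoader

loader = AcreomLoader("path/to/acreom/vault", collect_metadata=False)  # placeholder vault directory
docs = loader.load()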
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.acreom.AcreomLoader.html
|
efad8d7390d9-0
|
langchain.document_loaders.sitemap.SitemapLoader¶
class langchain.document_loaders.sitemap.SitemapLoader(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = False)[source]¶
Bases: WebBaseLoader
Loader that fetches a sitemap and loads those URLs.
Initialize with webpage path and optional filter URLs.
Parameters
web_path – URL of the sitemap. Can also be a local path.
filter_urls – list of strings or regexes that will be applied to filter the
urls that are parsed and loaded
parsing_function – Function to parse bs4.Soup output
blocksize – number of sitemap locations per block
blocknum – the number of the block that should be loaded - zero indexed.
Default: 0
meta_function – Function to parse bs4.Soup output for metadata.
When setting this, remember to also copy metadata["loc"]
to metadata["source"] if you are using this field.
is_local – whether the sitemap is a local file. Default: False
Methods
__init__(web_path[, filter_urls, ...])
Initialize with webpage path and optional filter URLs.
aload()
Load text from the URLs in web_path asynchronously into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load sitemap.
load_and_split([text_splitter])
Load Documents and split into chunks.
parse_sitemap(soup)
Parse sitemap xml and load into a list of dicts.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html
|
efad8d7390d9-1
|
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for BeautifulSoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
aload() → List[Document]¶
Load text from the URLs in web_path asynchronously into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load sitemap.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
parse_sitemap(soup: Any) → List[dict][source]¶
Parse sitemap xml and load into a list of dicts.
Parameters
soup – BeautifulSoup object.
Returns
List of dicts.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for BeautifulSoup4 get_text
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html
|
efad8d7390d9-2
|
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
Examples using SitemapLoader¶
Sitemap
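A minimal usage sketch; the sitemap URL and filter pattern are placeholders:
from langchain.document_loaders.sitemap import SitemapLoader

loader = SitemapLoader(
    "https://example.com/sitemap.xml",            # placeholder sitemap URL
    filter_urls=["https://example.com/blog/.*"],  # only load matching URLs
)
loader.requests_per_second = 1  # throttle concurrent requests
docs = loader.load()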
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html
|
f38c86240414-0
|
langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader¶
class langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader(file_path: str)[source]¶
Bases: BasePDFLoader
Loader that uses PDFMiner to load PDF files as HTML content.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
property source: str¶
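A minimal usage sketch, assuming pdfminer.six is installed; the file path is a placeholder:
from langchain.document_loaders import PDFMinerPDFasHTMLLoader

loader = PDFMinerPDFasHTMLLoader("example.pdf")  # placeholder local file
docs = loader.load()  # page_content holds the HTML rendering of the PDF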
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader.html
|
989dc15904f4-0
|
langchain.document_loaders.parsers.grobid.GrobidParser¶
class langchain.document_loaders.parsers.grobid.GrobidParser(segment_sentences: bool, grobid_server: str = 'http://localhost:8070/api/processFulltextDocument')[source]¶
Bases: BaseBlobParser
Loader that uses Grobid to load article PDF files.
Methods
__init__(segment_sentences[, grobid_server])
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
process_xml(file_path, xml_data, ...)
Process the XML file from Grobid.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
process_xml(file_path: str, xml_data: str, segment_sentences: bool) → Iterator[Document][source]¶
Process the XML file from Grobid.
Examples using GrobidParser¶
Grobid
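A sketch of the usual pattern of pairing the parser with GenericLoader, assuming a Grobid server is running at the default URL above; the directory and glob are placeholders:
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import GrobidParser

loader = GenericLoader.from_filesystem(
    "papers/",                                   # placeholder directory of article PDFs
    glob="*.pdf",
    parser=GrobidParser(segment_sentences=False),
)
docs = loader.load()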
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.grobid.GrobidParser.html
|
f69645bc5ca8-0
|
langchain.document_loaders.reddit.RedditPostsLoader¶
class langchain.document_loaders.reddit.RedditPostsLoader(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]¶
Bases: BaseLoader
Reddit posts loader.
Read posts on a subreddit.
First, you need to go to
https://www.reddit.com/prefs/apps/
and create your application
Initialize with client_id, client_secret, user_agent, search_queries, mode, categories, number_posts.
Example: https://www.reddit.com/r/learnpython/
Parameters
client_id – Reddit client id.
client_secret – Reddit client secret.
user_agent – Reddit user agent.
search_queries – The search queries.
mode – The mode.
categories – The categories. Default: [“new”]
number_posts – The number of posts. Default: 10
Methods
__init__(client_id, client_secret, ...[, ...])
Initialize with client_id, client_secret, user_agent, search_queries, mode,
lazy_load()
A lazy loader for Documents.
load()
Load reddits.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load reddits.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using RedditPostsLoader¶
Reddit
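A minimal usage sketch; the credentials are placeholders, and the mode values shown are assumptions about the accepted options:
from langchain.document_loaders import RedditPostsLoader

loader = RedditPostsLoader(
    client_id="<reddit-client-id>",          # placeholder credentials
    client_secret="<reddit-client-secret>",
    user_agent="extractor by u/<username>",
    search_queries=["learnpython"],          # subreddit names (or usernames, depending on mode)
    mode="subreddit",                        # assumed: "subreddit" or "username"
    categories=["new", "hot"],
    number_posts=5,
)
docs = loader.load()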
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.reddit.RedditPostsLoader.html
|
6c8497487cec-0
|
langchain.document_loaders.image.UnstructuredImageLoader¶
class langchain.document_loaders.image.UnstructuredImageLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses Unstructured to load PNG and JPG files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredImageLoader
loader = UnstructuredImageLoader(
    "example.png", mode="elements", strategy="fast",
)
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-image
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredImageLoader¶
Images
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.image.UnstructuredImageLoader.html
|
7b57cac78a2e-0
|
langchain.document_loaders.url_playwright.PlaywrightURLLoader¶
class langchain.document_loaders.url_playwright.PlaywrightURLLoader(urls: List[str], continue_on_failure: bool = True, headless: bool = True, remove_selectors: Optional[List[str]] = None)[source]¶
Bases: BaseLoader
Loader that uses Playwright to load a page and unstructured to parse the resulting HTML.
This is useful for loading pages that require JavaScript to render.
urls¶
List of URLs to load.
Type
List[str]
continue_on_failure¶
If True, continue loading other URLs on failure.
Type
bool
headless¶
If True, the browser will run in headless mode.
Type
bool
Load a list of URLs using Playwright and unstructured.
Methods
__init__(urls[, continue_on_failure, ...])
Load a list of URLs using Playwright and unstructured.
aload()
Load the specified URLs with Playwright and create Documents asynchronously.
lazy_load()
A lazy loader for Documents.
load()
Load the specified URLs using Playwright and create Document instances.
load_and_split([text_splitter])
Load Documents and split into chunks.
async aload() → List[Document][source]¶
Load the specified URLs with Playwright and create Documents asynchronously.
Use this function when in a jupyter notebook environment.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load the specified URLs using Playwright and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_playwright.PlaywrightURLLoader.html
|
7b57cac78a2e-1
|
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using PlaywrightURLLoader¶
URL
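A minimal usage sketch, assuming the Playwright browsers are installed; the URL and selectors are placeholders:
from langchain.document_loaders import PlaywrightURLLoader

loader = PlaywrightURLLoader(
    urls=["https://example.com"],            # placeholder URL
    remove_selectors=["header", "footer"],   # strip page chrome before extraction
)
docs = loader.load()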
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_playwright.PlaywrightURLLoader.html
|
34a35c89b4e6-0
|
langchain.document_loaders.word_document.UnstructuredWordDocumentLoader¶
class langchain.document_loaders.word_document.UnstructuredWordDocumentLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load word documents.
Works with both .docx and .doc files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredWordDocumentLoader
loader = UnstructuredWordDocumentLoader(
    "example.docx", mode="elements", strategy="fast",
)
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-docx
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.UnstructuredWordDocumentLoader.html
|
34a35c89b4e6-1
|
Examples using UnstructuredWordDocumentLoader¶
Microsoft Word
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.UnstructuredWordDocumentLoader.html
|
72136a66d224-0
|
langchain.document_loaders.confluence.ConfluenceLoader¶
class langchain.document_loaders.confluence.ConfluenceLoader(url: str, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None, cloud: Optional[bool] = True, number_of_retries: Optional[int] = 3, min_retry_seconds: Optional[int] = 2, max_retry_seconds: Optional[int] = 10, confluence_kwargs: Optional[dict] = None)[source]¶
Bases: BaseLoader
Load Confluence pages.
Port of https://llamahub.ai/l/confluence
This currently supports username/api_key, OAuth2 login, and personal access token
authentication.
Specify a list of page_ids and/or a space_key to load the corresponding pages into
Document objects; if both are specified, the union of both sets is returned.
You can also set the boolean include_attachments to include attachments. It defaults
to False; if set to True, all attachments are downloaded and ConfluenceReader
extracts their text and adds it to the Document object. Currently supported
attachment types are: PDF, PNG, JPEG/JPG, SVG, Word and Excel.
The Confluence API supports different formats of page content. The storage format is
the raw XML representation used for storage. The view format is the HTML
representation for viewing, with macros rendered as they appear to users. You can
pass the enum content_format argument to load() to specify the content format; it
defaults to ContentFormat.STORAGE.
Hint: space_key and page_id can both be found in the URL of a page in Confluence
- https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html
|
72136a66d224-1
|
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
    url="https://yoursite.atlassian.com/wiki",
    username="me",
    api_key="12345"
)
documents = loader.load(space_key="SPACE", limit=50)
Parameters
url (str) – _description_
api_key (str, optional) – _description_, defaults to None
username (str, optional) – _description_, defaults to None
oauth2 (dict, optional) – _description_, defaults to {}
token (str, optional) – _description_, defaults to None
cloud (bool, optional) – _description_, defaults to True
number_of_retries (Optional[int], optional) – How many times to retry, defaults to 3
min_retry_seconds (Optional[int], optional) – defaults to 2
max_retry_seconds (Optional[int], optional) – defaults to 10
confluence_kwargs (dict, optional) – additional kwargs to initialize confluence with
Raises
ValueError – Errors while validating input
ImportError – Required dependencies not installed.
Methods
__init__(url[, api_key, username, oauth2, ...])
is_public_page(page)
Check if a page is publicly accessible.
lazy_load()
A lazy loader for Documents.
load([space_key, page_ids, label, cql, ...])
param space_key
Space key retrieved from a confluence URL, defaults to None
load_and_split([text_splitter])
Load Documents and split into chunks.
paginate_request(retrieval_method, **kwargs)
Paginate the various methods to retrieve groups of pages.
process_attachment(page_id[, ocr_languages])
process_doc(link)
process_image(link[, ocr_languages])
process_page(page, include_attachments, ...)
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html
|
72136a66d224-2
|
process_pages(pages, ...[, ocr_languages, ...])
Process a list of pages into a list of documents.
process_pdf(link[, ocr_languages])
process_svg(link[, ocr_languages])
process_xls(link)
validate_init_args([url, api_key, username, ...])
Validates proper combinations of init arguments
is_public_page(page: dict) → bool[source]¶
Check if a page is publicly accessible.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load(space_key: Optional[str] = None, page_ids: Optional[List[str]] = None, label: Optional[str] = None, cql: Optional[str] = None, include_restricted_content: bool = False, include_archived_content: bool = False, include_attachments: bool = False, include_comments: bool = False, content_format: ContentFormat = ContentFormat.STORAGE, limit: Optional[int] = 50, max_pages: Optional[int] = 1000, ocr_languages: Optional[str] = None, keep_markdown_format: bool = False) → List[Document][source]¶
Parameters
space_key (Optional[str], optional) – Space key retrieved from a confluence URL, defaults to None
page_ids (Optional[List[str]], optional) – List of specific page IDs to load, defaults to None
label (Optional[str], optional) – Get all pages with this label, defaults to None
cql (Optional[str], optional) – CQL Expression, defaults to None
include_restricted_content (bool, optional) – defaults to False
include_archived_content (bool, optional) – Whether to include archived content,
defaults to False
include_attachments (bool, optional) – defaults to False
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html
|
72136a66d224-3
|
include_comments (bool, optional) – defaults to False
content_format (ContentFormat) – Specify content format, defaults to ContentFormat.STORAGE
limit (int, optional) – Maximum number of pages to retrieve per request, defaults to 50
max_pages (int, optional) – Maximum number of pages to retrieve in total, defaults 1000
ocr_languages (str, optional) – The languages to use for the Tesseract agent. To use a
language, you’ll first need to install the appropriate
Tesseract language pack.
keep_markdown_format (bool) – Whether to keep the markdown format, defaults to
False
Raises
ValueError – Errors while validating input
ImportError – Required dependencies not installed.
Returns
List of Documents for the matched pages.
Return type
List[Document]
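As a sketch (the credentials and CQL expression below are placeholders), pages can also be selected with a CQL query instead of a space key:
loader = ConfluenceLoader(url="https://yoursite.atlassian.com/wiki", token="MY_ACCESS_TOKEN")
documents = loader.load(cql='type=page and label="public"', include_attachments=False, max_pages=200)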
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
paginate_request(retrieval_method: Callable, **kwargs: Any) → List[source]¶
Paginate the various methods to retrieve groups of pages.
Unfortunately, due to page size, sometimes the Confluence API
doesn’t match the limit value. If limit is >100 confluence
seems to cap the response to 100. Also, due to the Atlassian Python
package, we don’t get the “next” values from the “_links” key because
they only return the value from the result key. So here, the pagination
starts from 0 and goes until the max_pages, getting the limit number
of pages with each request. We have to manually check if there
are more docs based on the length of the returned list of pages, rather than
just checking for the presence of a next key in the response like this page
would have you do:
https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/
Parameters
retrieval_method (callable) – Function used to retrieve docs
Returns
List of documents
Return type
List
process_attachment(page_id: str, ocr_languages: Optional[str] = None) → List[str][source]¶
process_doc(link: str) → str[source]¶
process_image(link: str, ocr_languages: Optional[str] = None) → str[source]¶
process_page(page: dict, include_attachments: bool, include_comments: bool, content_format: ContentFormat, ocr_languages: Optional[str] = None, keep_markdown_format: Optional[bool] = False) → Document[source]¶
process_pages(pages: List[dict], include_restricted_content: bool, include_attachments: bool, include_comments: bool, content_format: ContentFormat, ocr_languages: Optional[str] = None, keep_markdown_format: Optional[bool] = False) → List[Document][source]¶
Process a list of pages into a list of documents.
process_pdf(link: str, ocr_languages: Optional[str] = None) → str[source]¶
process_svg(link: str, ocr_languages: Optional[str] = None) → str[source]¶
process_xls(link: str) → str[source]¶
static validate_init_args(url: Optional[str] = None, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None) → Optional[List][source]¶
Validates proper combinations of init arguments
Examples using ConfluenceLoader¶
Confluence
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html
|
835d1f35f32a-0
|
langchain.document_loaders.docugami.DocugamiLoader¶
class langchain.document_loaders.docugami.DocugamiLoader(*, api: str = 'https://api.docugami.com/v1preview1', access_token: Optional[str] = None, docset_id: Optional[str] = None, document_ids: Optional[Sequence[str]] = None, file_paths: Optional[Sequence[Union[Path, str]]] = None, min_chunk_size: int = 32)[source]¶
Bases: BaseLoader, BaseModel
Loads processed docs from Docugami.
To use, you should have the lxml python package installed.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: Optional[str] = None¶
The Docugami API access token to use.
param api: str = 'https://api.docugami.com/v1preview1'¶
The Docugami API endpoint to use.
param docset_id: Optional[str] = None¶
The Docugami API docset ID to use.
param document_ids: Optional[Sequence[str]] = None¶
The Docugami API document IDs to use.
param file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None¶
The local file paths to use.
param min_chunk_size: int = 32¶
The minimum chunk size to use when parsing DGML. Defaults to 32.
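A minimal usage sketch (the access token and docset ID are placeholders):
from langchain.document_loaders import DocugamiLoader
loader = DocugamiLoader(access_token="DOCUGAMI_API_KEY", docset_id="YOUR_DOCSET_ID")
docs = loader.load()  # one Document per parsed chunk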
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
validator validate_local_or_remote » all fields[source]¶
Validate that either local file paths are given, or remote API docset ID.
Parameters
values – The values to validate.
Returns
The validated values.
Examples using DocugamiLoader¶
Docugami
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html
|
dfce24623b7a-0
|
langchain.document_loaders.pdf.PyPDFium2Loader¶
class langchain.document_loaders.pdf.PyPDFium2Loader(file_path: str)[source]¶
Bases: BasePDFLoader
Loads a PDF with pypdfium2 and chunks at character level.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
Lazy load given path as pages.
load()
Load given path as pages.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document][source]¶
Lazy load given path as pages.
load() → List[Document][source]¶
Load given path as pages.
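A minimal usage sketch (the path is a placeholder; requires the pypdfium2 package):
from langchain.document_loaders import PyPDFium2Loader
loader = PyPDFium2Loader("example.pdf")
pages = loader.load()  # one Document per page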
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
property source: str¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFium2Loader.html
|
95d1c698bad7-0
|
langchain.document_loaders.evernote.EverNoteLoader¶
class langchain.document_loaders.evernote.EverNoteLoader(file_path: str, load_single_document: bool = True)[source]¶
Bases: BaseLoader
EverNote Loader.
Loads an EverNote notebook export file e.g. my_notebook.enex into Documents.
Instructions on producing this file can be found at
https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML
Currently only the plain text in the note is extracted and stored as the contents
of the Document. Any non-content metadata tags on the note (e.g. ‘author’, ‘created’,
‘updated’, etc., but not ‘content-raw’ or ‘resource’) will be extracted and stored
as metadata on the Document.
Parameters
file_path (str) – The path to the notebook export with a .enex extension
load_single_document (bool) – Whether or not to concatenate the content of all
notes into a single long Document. If this is set to True, the only metadata on
the document will be the ‘source’, which contains the file name of the export.
Initialize with file path.
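A minimal usage sketch (the file name is a placeholder for your own export):
from langchain.document_loaders import EverNoteLoader
loader = EverNoteLoader("my_notebook.enex", load_single_document=True)
docs = loader.load()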
Methods
__init__(file_path[, load_single_document])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load documents from EverNote export file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents from EverNote export file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using EverNoteLoader¶
EverNote
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.evernote.EverNoteLoader.html
|
f13fcb42e802-0
|
langchain.document_loaders.blockchain.BlockchainType¶
class langchain.document_loaders.blockchain.BlockchainType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases: Enum
Enumerator of the supported blockchains.
Attributes
ETH_MAINNET
ETH_GOERLI
POLYGON_MAINNET
POLYGON_MUMBAI
ETH_GOERLI = 'eth-goerli'¶
ETH_MAINNET = 'eth-mainnet'¶
POLYGON_MAINNET = 'polygon-mainnet'¶
POLYGON_MUMBAI = 'polygon-mumbai'¶
Examples using BlockchainType¶
Blockchain
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainType.html
|
7abfc5157393-0
|
langchain.document_loaders.pdf.PyMuPDFLoader¶
class langchain.document_loaders.pdf.PyMuPDFLoader(file_path: str)[source]¶
Bases: BasePDFLoader
Loader that uses PyMuPDF to load PDF files.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load(**kwargs)
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load(**kwargs: Optional[Any]) → List[Document][source]¶
Load file.
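A minimal usage sketch (the path is a placeholder; requires the pymupdf package):
from langchain.document_loaders import PyMuPDFLoader
loader = PyMuPDFLoader("example.pdf")
docs = loader.load()  # one Document per page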
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
property source: str¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyMuPDFLoader.html
|
d2bf7a1d3d4d-0
|
langchain.document_loaders.parsers.language.python.PythonSegmenter¶
class langchain.document_loaders.parsers.language.python.PythonSegmenter(code: str)[source]¶
Bases: CodeSegmenter
The code segmenter for Python.
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
simplify_code() → str[source]¶
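An illustrative sketch of segmenting a small piece of source code:
from langchain.document_loaders.parsers.language.python import PythonSegmenter

code = "def hello():\n    return 'hi'\n\nclass Greeter:\n    pass\n"
segmenter = PythonSegmenter(code)
if segmenter.is_valid():
    functions_classes = segmenter.extract_functions_classes()  # source of each top-level function/class
    simplified = segmenter.simplify_code()  # remaining code with those definitions stubbed out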
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.python.PythonSegmenter.html
|
8f03f779a579-0
|
langchain.document_loaders.mastodon.MastodonTootsLoader¶
class langchain.document_loaders.mastodon.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]¶
Bases: BaseLoader
Mastodon toots loader.
Instantiate Mastodon toots loader.
Parameters
mastodon_accounts – The list of Mastodon accounts to query.
number_toots – How many toots to pull for each account. Defaults to 100.
exclude_replies – Whether to exclude reply toots from the load.
Defaults to False.
access_token – An access token if toots are loaded as a Mastodon app. Can
also be specified via the environment variable “MASTODON_ACCESS_TOKEN”.
api_base_url – A Mastodon API base URL to talk to, if not using the default.
Defaults to “https://mastodon.social”.
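A usage sketch (the account is a placeholder; assumes the Mastodon.py package is installed):
from langchain.document_loaders import MastodonTootsLoader
loader = MastodonTootsLoader(
    mastodon_accounts=["@someuser@mastodon.social"],  # placeholder account
    number_toots=50,
)
docs = loader.load()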
Methods
__init__(mastodon_accounts[, number_toots, ...])
Instantiate Mastodon toots loader.
lazy_load()
A lazy loader for Documents.
load()
Load toots into documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load toots into documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MastodonTootsLoader¶
Mastodon
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mastodon.MastodonTootsLoader.html
|
7ab57476ad4d-0
|
langchain.document_loaders.weather.WeatherDataLoader¶
class langchain.document_loaders.weather.WeatherDataLoader(client: OpenWeatherMapAPIWrapper, places: Sequence[str])[source]¶
Bases: BaseLoader
Weather Reader.
Reads the forecast & current weather of any location using OpenWeatherMap’s free
API. Check out ‘https://openweathermap.org/appid’ for more on how to generate a free
OpenWeatherMap API key.
Initialize with parameters.
Methods
__init__(client, places)
Initialize with parameters.
from_params(places, *[, openweathermap_api_key])
lazy_load()
Lazily load weather data for the given locations.
load()
Load weather data for the given locations.
load_and_split([text_splitter])
Load Documents and split into chunks.
classmethod from_params(places: Sequence[str], *, openweathermap_api_key: Optional[str] = None) → WeatherDataLoader[source]¶
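A usage sketch (locations and API key are placeholders; assumes the pyowm package is installed):
from langchain.document_loaders import WeatherDataLoader
loader = WeatherDataLoader.from_params(
    places=["London", "Paris"],            # placeholder locations
    openweathermap_api_key="YOUR_API_KEY",
)
docs = loader.load()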
lazy_load() → Iterator[Document][source]¶
Lazily load weather data for the given locations.
load() → List[Document][source]¶
Load weather data for the given locations.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using WeatherDataLoader¶
Weather
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.weather.WeatherDataLoader.html
|
4757795d315f-0
|
langchain.document_loaders.blackboard.BlackboardLoader¶
class langchain.document_loaders.blackboard.BlackboardLoader(blackboard_course_url: str, bbrouter: str, load_all_recursively: bool = True, basic_auth: Optional[Tuple[str, str]] = None, cookies: Optional[dict] = None)[source]¶
Bases: WebBaseLoader
Loads all documents from a Blackboard course.
This loader is not compatible with all Blackboard courses. It is only
compatible with courses that use the new Blackboard interface.
To use this loader, you must have the BbRouter cookie. You can get this
cookie by logging into the course and then copying the value of the
BbRouter cookie from the browser’s developer tools.
Example
from langchain.document_loaders import BlackboardLoader
loader = BlackboardLoader(
blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
bbrouter="expires:12345...",
)
documents = loader.load()
Initialize with blackboard course url.
The BbRouter cookie is required for most blackboard courses.
Parameters
blackboard_course_url – Blackboard course url.
bbrouter – BbRouter cookie.
load_all_recursively – If True, load all documents recursively.
basic_auth – Basic auth credentials.
cookies – Cookies.
Raises
ValueError – If blackboard course url is invalid.
Methods
__init__(blackboard_course_url, bbrouter[, ...])
Initialize with blackboard course url.
aload()
Load text from the urls in web_path async into Documents.
check_bs4()
Check if BeautifulSoup4 is installed.
download(path)
Download a file from a URL.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
parse_filename(url)
Parse the filename from a URL.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for BeautifulSoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
base_url
Base url of the blackboard course.
folder_path
Path to the folder containing the documents.
load_all_recursively
If True, load all documents recursively.
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
check_bs4() → None[source]¶
Check if BeautifulSoup4 is installed.
Raises
ImportError – If BeautifulSoup4 is not installed.
download(path: str) → None[source]¶
Download a file from a URL.
Parameters
path – Path to the file.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load data into Document objects.
Returns
List of Documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
parse_filename(url: str) → str[source]¶
Parse the filename from a URL.
Parameters
url – URL to parse the filename from.
Returns
The filename.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
base_url: str¶
Base url of the blackboard course.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for BeautifulSoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
folder_path: str¶
Path to the folder containing the documents.
load_all_recursively: bool¶
If True, load all documents recursively.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
Examples using BlackboardLoader¶
Blackboard
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html
|
a390451e74a2-0
|
langchain.document_loaders.markdown.UnstructuredMarkdownLoader¶
class langchain.document_loaders.markdown.UnstructuredMarkdownLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses Unstructured to load markdown files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredMarkdownLoader
loader = UnstructuredMarkdownLoader("example.md", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-md
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredMarkdownLoader¶
StarRocks
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.markdown.UnstructuredMarkdownLoader.html
|
fab5950dc53e-0
|
langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader¶
class langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load PowerPoint files.
Works with both .ppt and .pptx files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredPowerPointLoader
loader = UnstructuredPowerPointLoader("example.pptx", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-pptx
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredPowerPointLoader¶
Microsoft PowerPoint
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader.html
|
b1af2e8aa432-0
|
langchain.document_loaders.json_loader.JSONLoader¶
class langchain.document_loaders.json_loader.JSONLoader(file_path: Union[str, Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True, json_lines: bool = False)[source]¶
Bases: BaseLoader
Loads a JSON file using a jq schema.
Example
[{"text": …}, {"text": …}, {"text": …}] -> schema = .[].text
{"key": [{"text": …}, {"text": …}, {"text": …}]} -> schema = .key[].text
["", "", ""] -> schema = .[]
Initialize the JSONLoader.
Parameters
file_path (Union[str, Path]) – The path to the JSON or JSON Lines file.
jq_schema (str) – The jq schema to use to extract the data or text from
the JSON.
content_key (str) – The key to use to extract the content from the JSON if
the jq_schema results in a list of objects (dict).
metadata_func (Callable[Dict, Dict]) – A function that takes in the JSON
object extracted by the jq_schema and the default metadata and returns
a dict of the updated metadata.
text_content (bool) – Boolean flag to indicate whether the content is in
string format, default to True.
json_lines (bool) – Boolean flag to indicate whether the input is in
JSON Lines format.
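A usage sketch (file path and jq expression are placeholders; assumes the jq package is installed):
from langchain.document_loaders import JSONLoader
loader = JSONLoader(
    file_path="chat.json",            # placeholder path
    jq_schema=".messages[].content",  # pull the content field out of each message
)
docs = loader.load()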
Methods
__init__(file_path, jq_schema[, ...])
Initialize the JSONLoader.
lazy_load()
A lazy loader for Documents.
load()
Load and return documents from the JSON file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load and return documents from the JSON file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html
|
8ca7b0db928e-0
|
langchain.document_loaders.chatgpt.concatenate_rows¶
langchain.document_loaders.chatgpt.concatenate_rows(message: dict, title: str) → str[source]¶
Combine message information in a readable format ready to be used.
Parameters
message – Message to be concatenated
title – Title of the conversation
Returns
Concatenated message
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chatgpt.concatenate_rows.html
|
a26a0848d830-0
|
langchain.document_loaders.parsers.audio.OpenAIWhisperParser¶
class langchain.document_loaders.parsers.audio.OpenAIWhisperParser(api_key: Optional[str] = None)[source]¶
Bases: BaseBlobParser
Transcribe and parse audio files.
Audio transcription is with OpenAI Whisper model.
Methods
__init__([api_key])
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
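A usage sketch (audio path and API key are placeholders; assumes the openai package is installed):
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.audio import OpenAIWhisperParser

parser = OpenAIWhisperParser(api_key="OPENAI_API_KEY")
docs = parser.parse(Blob.from_path("speech.mp3"))  # transcription returned as Document(s)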
Examples using OpenAIWhisperParser¶
Loading documents from a YouTube url
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParser.html
|
b03829a5b86b-0
|
langchain.document_loaders.snowflake_loader.SnowflakeLoader¶
class langchain.document_loaders.snowflake_loader.SnowflakeLoader(query: str, user: str, password: str, account: str, warehouse: str, role: str, database: str, schema: str, parameters: Optional[Dict[str, Any]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]¶
Bases: BaseLoader
Loads a query result from Snowflake into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Initialize Snowflake document loader.
Parameters
query – The query to run in Snowflake.
user – Snowflake user.
password – Snowflake password.
account – Snowflake account.
warehouse – Snowflake warehouse.
role – Snowflake role.
database – Snowflake database
schema – Snowflake schema
parameters – Optional. Parameters to pass to the query.
page_content_columns – Optional. Columns written to Document page_content.
metadata_columns – Optional. Columns written to Document metadata.
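A usage sketch with placeholder credentials (assumes the snowflake-connector-python package is installed):
from langchain.document_loaders import SnowflakeLoader
loader = SnowflakeLoader(
    query="SELECT text, survey_id FROM my_table LIMIT 10",  # placeholder query
    user="USER", password="PASSWORD", account="ACCOUNT",
    warehouse="WAREHOUSE", role="ROLE", database="DATABASE", schema="SCHEMA",
    page_content_columns=["text"],
    metadata_columns=["survey_id"],
)
docs = loader.load()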
Methods
__init__(query, user, password, account, ...)
Initialize Snowflake document loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SnowflakeLoader¶
Snowflake
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.snowflake_loader.SnowflakeLoader.html
|
c4aae80d58b9-0
|
langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader¶
class langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader(path: str)[source]¶
Bases: BaseLoader
Loads WhatsApp messages text file.
Initialize with path.
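A minimal usage sketch (the file name is a placeholder for an exported chat):
from langchain.document_loaders import WhatsAppChatLoader
loader = WhatsAppChatLoader("whatsapp_chat.txt")
docs = loader.load()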
Methods
__init__(path)
Initialize with path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using WhatsAppChatLoader¶
WhatsApp
WhatsApp Chat
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader.html
|
e6c952d16221-0
|
langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader¶
class langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader(conf: Any, bucket: str, key: str)[source]¶
Bases: BaseLoader
Loader for Tencent Cloud COS file.
Initialize with COS config, bucket and key name.
Parameters
conf (CosConfig) – COS config.
bucket (str) – COS bucket.
key (str) – COS file key.
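A usage sketch (region, credentials, bucket and key are placeholders; assumes the cos-python-sdk-v5 package is installed):
from qcloud_cos import CosConfig
from langchain.document_loaders import TencentCOSFileLoader

conf = CosConfig(Region="ap-guangzhou", SecretId="SECRET_ID", SecretKey="SECRET_KEY")  # placeholder credentials
loader = TencentCOSFileLoader(conf=conf, bucket="mybucket-1250000000", key="docs/example.txt")
docs = loader.load()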
Methods
__init__(conf, bucket, key)
Initialize with COS config, bucket and key name.
lazy_load()
Load documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Load documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TencentCOSFileLoader¶
Tencent COS File
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader.html
|
27809e0ca56a-0
|
langchain.document_loaders.unstructured.get_elements_from_api¶
langchain.document_loaders.unstructured.get_elements_from_api(file_path: Optional[Union[str, List[str]]] = None, file: Optional[Union[IO, Sequence[IO]]] = None, api_url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any) → List[source]¶
Retrieves a list of elements from the Unstructured API.
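A sketch with a placeholder file path and API key; extra keyword arguments are forwarded as unstructured settings:
from langchain.document_loaders.unstructured import get_elements_from_api
elements = get_elements_from_api(file_path="example.pdf", api_key="MY_API_KEY", strategy="fast")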
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.get_elements_from_api.html
|
6aabf28b7c47-0
|
langchain.document_loaders.open_city_data.OpenCityDataLoader¶
class langchain.document_loaders.open_city_data.OpenCityDataLoader(city_id: str, dataset_id: str, limit: int)[source]¶
Bases: BaseLoader
Loads Open City data.
Initialize with dataset_id.
Example: https://dev.socrata.com/foundry/data.sfgov.org/vw6y-z8j6
e.g., city_id = data.sfgov.org
e.g., dataset_id = vw6y-z8j6
Parameters
city_id – The Open City city identifier.
dataset_id – The Open City dataset identifier.
limit – The maximum number of documents to load.
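A usage sketch with the example identifiers above (assumes the sodapy package is installed):
from langchain.document_loaders import OpenCityDataLoader
loader = OpenCityDataLoader(city_id="data.sfgov.org", dataset_id="vw6y-z8j6", limit=100)
docs = loader.load()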
Methods
__init__(city_id, dataset_id, limit)
Initialize with dataset_id.
lazy_load()
Lazy load records.
load()
Load records.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazy load records.
load() → List[Document][source]¶
Load records.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using OpenCityDataLoader¶
Geopandas
Open City Data
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.open_city_data.OpenCityDataLoader.html
|
29a8d6dbfdbb-0
|
langchain.document_loaders.geodataframe.GeoDataFrameLoader¶
class langchain.document_loaders.geodataframe.GeoDataFrameLoader(data_frame: Any, page_content_column: str = 'geometry')[source]¶
Bases: BaseLoader
Load geopandas Dataframe.
Initialize with geopandas Dataframe.
Parameters
data_frame – geopandas DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “geometry”.
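A usage sketch (the file name is a placeholder; assumes geopandas is installed):
import geopandas as gpd
from langchain.document_loaders import GeoDataFrameLoader

gdf = gpd.read_file("parks.geojson")  # any GeoDataFrame works
loader = GeoDataFrameLoader(data_frame=gdf, page_content_column="geometry")
docs = loader.load()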
Methods
__init__(data_frame[, page_content_column])
Initialize with geopandas Dataframe.
lazy_load()
Lazy load records from dataframe.
load()
Load full dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazy load records from dataframe.
load() → List[Document][source]¶
Load full dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GeoDataFrameLoader¶
Geopandas
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.geodataframe.GeoDataFrameLoader.html
|
83eaae737ba6-0
|
langchain.document_loaders.image_captions.ImageCaptionLoader¶
class langchain.document_loaders.image_captions.ImageCaptionLoader(path_images: Union[str, List[str]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]¶
Bases: BaseLoader
Loads the captions of an image.
Initialize with a list of image paths.
Parameters
path_images – A list of image paths.
blip_processor – The name of the pre-trained BLIP processor.
blip_model – The name of the pre-trained BLIP model.
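A usage sketch (image paths are placeholders; assumes the transformers package is installed):
from langchain.document_loaders import ImageCaptionLoader
loader = ImageCaptionLoader(path_images=["photo1.jpg", "photo2.jpg"])
docs = loader.load()  # one Document per image, with the generated caption as page_content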
Methods
__init__(path_images[, blip_processor, ...])
Initialize with a list of image paths
lazy_load()
A lazy loader for Documents.
load()
Load from a list of image files
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from a list of image files
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ImageCaptionLoader¶
Image captions
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.image_captions.ImageCaptionLoader.html
|
5aecf6192008-0
|
langchain.document_loaders.unstructured.UnstructuredAPIFileLoader¶
class langchain.document_loaders.unstructured.UnstructuredAPIFileLoader(file_path: Union[str, List[str]] = '', mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses the Unstructured API to load files.
By default, the loader makes a call to the hosted Unstructured API.
If you are running the Unstructured API locally, you can change the
API URL by passing in the url parameter when you initialize the loader.
The hosted Unstructured API requires an API key. See
https://www.unstructured.io/api-key/ if you need to generate a key.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredAPIFileLoader
loader = UnstructuredAPIFileLoader("example.pdf", mode="elements", strategy="fast", api_key="MY_API_KEY")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
https://www.unstructured.io/api-key/
https://github.com/Unstructured-IO/unstructured-api
Initialize with file path.
Methods
__init__([file_path, mode, url, api_key])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredAPIFileLoader¶
Unstructured File
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileLoader.html
|
46e722de4248-0
|
langchain.document_loaders.telegram.text_to_docs¶
langchain.document_loaders.telegram.text_to_docs(text: Union[str, List[str]]) → List[Document][source]¶
Converts a string or list of strings to a list of Documents with metadata.
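For example (the strings are placeholders):
from langchain.document_loaders.telegram import text_to_docs
docs = text_to_docs(["first chunk of text", "second chunk of text"])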
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.text_to_docs.html
|
73e5e897c3b2-0
|
langchain.document_loaders.notebook.NotebookLoader¶
class langchain.document_loaders.notebook.NotebookLoader(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]¶
Bases: BaseLoader
Loads .ipynb notebook files.
Initialize with path.
Parameters
path – The path to load the notebook from.
include_outputs – Whether to include the outputs of the cell.
Defaults to False.
max_output_length – Maximum length of the output to be displayed.
Defaults to 10.
remove_newline – Whether to remove newlines from the notebook.
Defaults to False.
traceback – Whether to return a traceback of the error.
Defaults to False.
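A minimal usage sketch (the notebook name is a placeholder):
from langchain.document_loaders import NotebookLoader
loader = NotebookLoader("analysis.ipynb", include_outputs=True, max_output_length=20)
docs = loader.load()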
Methods
__init__(path[, include_outputs, ...])
Initialize with path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using NotebookLoader¶
Jupyter Notebook
Notebook
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.NotebookLoader.html
|
964bba5fc936-0
|
langchain.document_loaders.unstructured.validate_unstructured_version¶
langchain.document_loaders.unstructured.validate_unstructured_version(min_unstructured_version: str) → None[source]¶
Raises an error if the installed unstructured version does not meet the
specified minimum version.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.validate_unstructured_version.html
|
b79831e9666a-0
|
langchain.document_loaders.pdf.PDFMinerLoader¶
class langchain.document_loaders.pdf.PDFMinerLoader(file_path: str)[source]¶
Bases: BasePDFLoader
Loader that uses PDFMiner to load PDF files.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
Lazily load documents.
load()
Eagerly load the content.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document][source]¶
Lazily load documents.
load() → List[Document][source]¶
Eagerly load the content.
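A minimal usage sketch (the path is a placeholder; requires the pdfminer.six package):
from langchain.document_loaders import PDFMinerLoader
loader = PDFMinerLoader("example.pdf")
docs = loader.load()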
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
property source: str¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFMinerLoader.html
|
4ac4a930bf03-0
|
langchain.document_loaders.unstructured.UnstructuredBaseLoader¶
class langchain.document_loaders.unstructured.UnstructuredBaseLoader(mode: str = 'single', post_processors: List[Callable] = [], **unstructured_kwargs: Any)[source]¶
Bases: BaseLoader, ABC
Loader that uses Unstructured to load files.
Initialize with file path.
Methods
__init__([mode, post_processors])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredBaseLoader.html
|
48e969bfbfe9-0
|
langchain.document_loaders.airbyte_json.AirbyteJSONLoader¶
class langchain.document_loaders.airbyte_json.AirbyteJSONLoader(file_path: str)[source]¶
Bases: BaseLoader
Loads local airbyte json files.
Initialize with a file path. This should start with ‘/tmp/airbyte_local/’.
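A minimal usage sketch (the file name is a placeholder for a local Airbyte output file):
from langchain.document_loaders import AirbyteJSONLoader
loader = AirbyteJSONLoader("/tmp/airbyte_local/json_data/_airbyte_raw_your_stream.jsonl")
docs = loader.load()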
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
file_path
Path to the directory containing the json files.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
file_path¶
Path to the directory containing the json files.
Examples using AirbyteJSONLoader¶
Airbyte
Airbyte JSON
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte_json.AirbyteJSONLoader.html
|
9553a5684aac-0
|
langchain.document_loaders.github.BaseGitHubLoader¶
class langchain.document_loaders.github.BaseGitHubLoader(*, repo: str, access_token: str)[source]¶
Bases: BaseLoader, BaseModel, ABC
Load issues of a GitHub repository.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: str [Required]¶
Personal access token - see https://github.com/settings/tokens?type=beta
param repo: str [Required]¶
Name of repository
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
abstract load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
validator validate_environment » all fields[source]¶
Validate that access token exists in environment.
property headers: Dict[str, str]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.BaseGitHubLoader.html
|
58038dde5ee6-0
|
langchain.document_loaders.gcs_file.GCSFileLoader¶
class langchain.document_loaders.gcs_file.GCSFileLoader(project_name: str, bucket: str, blob: str)[source]¶
Bases: BaseLoader
Load Documents from a GCS file.
Initialize with bucket and key name.
Parameters
project_name – The name of the project to load
bucket – The name of the GCS bucket.
blob – The name of the GCS blob to load.
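A usage sketch (project, bucket and blob names are placeholders; assumes the google-cloud-storage package is installed):
from langchain.document_loaders import GCSFileLoader
loader = GCSFileLoader(project_name="my-project", bucket="my-bucket", blob="reports/example.txt")
docs = loader.load()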
Methods
__init__(project_name, bucket, blob)
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GCSFileLoader¶
Google Cloud Storage
Google Cloud Storage File
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_file.GCSFileLoader.html
|
219b316f8449-0
|
langchain.document_loaders.xorbits.XorbitsLoader¶
class langchain.document_loaders.xorbits.XorbitsLoader(data_frame: Any, page_content_column: str = 'text')[source]¶
Bases: BaseLoader
Load Xorbits DataFrame.
Initialize with dataframe object.
Requirements: Must have xorbits installed. You can install it with pip install xorbits.
Parameters
data_frame – Xorbits DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
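A usage sketch (the data is a placeholder; assumes xorbits is installed):
import xorbits.pandas as xpd
from langchain.document_loaders import XorbitsLoader

df = xpd.DataFrame({"text": ["first row", "second row"], "author": ["a", "b"]})
loader = XorbitsLoader(df, page_content_column="text")  # remaining columns become metadata
docs = loader.load()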
Methods
__init__(data_frame[, page_content_column])
Initialize with dataframe object.
lazy_load()
Lazy load records from dataframe.
load()
Load full dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazy load records from dataframe.
load() → List[Document][source]¶
Load full dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using XorbitsLoader¶
Xorbits Pandas DataFrame
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.xorbits.XorbitsLoader.html
|
3b105e1b9326-0
|
langchain.document_loaders.pdf.PDFPlumberLoader¶
class langchain.document_loaders.pdf.PDFPlumberLoader(file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None)[source]¶
Bases: BasePDFLoader
Loader that uses pdfplumber to load PDF files.
Initialize with a file path.
Methods
__init__(file_path[, text_kwargs])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
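A minimal usage sketch (the path is a placeholder; requires the pdfplumber package):
from langchain.document_loaders import PDFPlumberLoader
loader = PDFPlumberLoader("example.pdf")
docs = loader.load()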
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
property source: str¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFPlumberLoader.html
|
54be95756594-0
|
langchain.document_loaders.college_confidential.CollegeConfidentialLoader¶
class langchain.document_loaders.college_confidential.CollegeConfidentialLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None)[source]¶
Bases: WebBaseLoader
Loads College Confidential webpages.
Initialize with webpage path.
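A minimal usage sketch (the URL is a placeholder for any College Confidential page):
from langchain.document_loaders import CollegeConfidentialLoader
loader = CollegeConfidentialLoader("https://www.collegeconfidential.com/colleges/brown-university/")
docs = loader.load()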
Methods
__init__(web_path[, header_template, ...])
Initialize with webpage path.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load webpages as Documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for BeautifulSoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpages as Documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for BeautifulSoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
Examples using CollegeConfidentialLoader¶
College Confidential
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.college_confidential.CollegeConfidentialLoader.html
|
c660a6e92f7b-0
|
langchain.document_loaders.rocksetdb.default_joiner¶
langchain.document_loaders.rocksetdb.default_joiner(docs: List[Tuple[str, Any]]) → str[source]¶
Default joiner for content columns.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.default_joiner.html
|
ba73e3be17fe-0
|
langchain.document_loaders.html.UnstructuredHTMLLoader¶
class langchain.document_loaders.html.UnstructuredHTMLLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses Unstructured to load HTML files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredHTMLLoader
loader = UnstructuredHTMLLoader("example.html", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-html
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html.UnstructuredHTMLLoader.html
|
3717ed138c19-0
|
langchain.document_loaders.s3_directory.S3DirectoryLoader¶
class langchain.document_loaders.s3_directory.S3DirectoryLoader(bucket: str, prefix: str = '')[source]¶
Bases: BaseLoader
Loads documents from an AWS S3 directory.
Initialize with bucket and key name.
Parameters
bucket – The name of the S3 bucket.
prefix – The prefix of the S3 key. Defaults to “”.
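A usage sketch (bucket and prefix are placeholders; assumes boto3 is installed and AWS credentials are configured):
from langchain.document_loaders import S3DirectoryLoader
loader = S3DirectoryLoader("my-bucket", prefix="reports/")
docs = loader.load()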
Methods
__init__(bucket[, prefix])
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using S3DirectoryLoader¶
AWS S3 Directory
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_directory.S3DirectoryLoader.html
|
8aeecf0674eb-0
|
langchain.document_loaders.word_document.Docx2txtLoader¶
class langchain.document_loaders.word_document.Docx2txtLoader(file_path: str)[source]¶
Bases: BaseLoader, ABC
Loads a DOCX with docx2txt and chunks at character level.
By default it checks for a local file, but if the file is a web path, it will download it
to a temporary file, use that, and then clean up the temporary file after completion.
Initialize with file path.
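A minimal usage sketch (the path is a placeholder; requires the docx2txt package):
from langchain.document_loaders import Docx2txtLoader
loader = Docx2txtLoader("example.docx")
docs = loader.load()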
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load given path as single page.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load given path as single page.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using Docx2txtLoader¶
Microsoft Word
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.Docx2txtLoader.html
|
c9c0ab714553-0
|
langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader¶
class langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader(conn_str: str, container: str, prefix: str = '')[source]¶
Bases: BaseLoader
Loads documents from an Azure Blob Storage container.
Initialize with connection string, container and blob prefix.
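A minimal usage sketch (the connection string, container name, and prefix below are placeholders; assumes the azure-storage-blob package is installed):
from langchain.document_loaders import AzureBlobStorageContainerLoader
loader = AzureBlobStorageContainerLoader(
    conn_str="<my-connection-string>",  # hypothetical connection string
    container="my-container",           # hypothetical container name
    prefix="docs/",                     # optional blob name prefix
)
docs = loader.load()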
Methods
__init__(conn_str, container[, prefix])
Initialize with connection string, container and blob prefix.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
conn_str
Connection string for Azure Blob Storage.
container
Container name.
prefix
Prefix for blob names.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
conn_str¶
Connection string for Azure Blob Storage.
container¶
Container name.
prefix¶
Prefix for blob names.
Examples using AzureBlobStorageContainerLoader¶
Azure Blob Storage
Azure Blob Storage Container
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader.html
|
44db4ac426ba-0
|
langchain.document_loaders.excel.UnstructuredExcelLoader¶
class langchain.document_loaders.excel.UnstructuredExcelLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load Excel files. Like other
Unstructured loaders, UnstructuredExcelLoader can be used in both
“single” and “elements” mode. If you use the loader in “elements”
mode, each sheet in the Excel file will be an Unstructured Table
element, and an HTML representation of the table will be available in the
“text_as_html” key in the document metadata.
Examples
from langchain.document_loaders.excel import UnstructuredExcelLoader
loader = UnstructuredExcelLoader("stanley-cups.xlsx", mode="elements")
docs = loader.load()
Parameters
file_path – The path to the Microsoft Excel file.
mode – The mode to use when partitioning the file. See unstructured docs
for more info. Optional. Defaults to “single”.
**unstructured_kwargs – Keyword arguments to pass to unstructured.
Methods
__init__(file_path[, mode])
param file_path
The path to the Microsoft Excel file.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.excel.UnstructuredExcelLoader.html
|
44db4ac426ba-1
|
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredExcelLoader¶
Microsoft Excel
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.excel.UnstructuredExcelLoader.html
|
23c9ecb19961-0
|
langchain.document_loaders.fauna.FaunaLoader¶
class langchain.document_loaders.fauna.FaunaLoader(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)[source]¶
Bases: BaseLoader
FaunaDB Loader.
query¶
The FQL query string to execute.
Type
str
page_content_field¶
The field that contains the content of each page.
Type
str
secret¶
The secret key for authenticating to FaunaDB.
Type
str
metadata_fields¶
Optional list of field names to include in metadata.
Type
Optional[Sequence[str]]
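A minimal usage sketch (the FQL query, field names, and secret below are placeholders; assumes the fauna package is installed):
from langchain.document_loaders.fauna import FaunaLoader
loader = FaunaLoader(
    query="Item.all()",              # hypothetical FQL query
    page_content_field="text",       # field holding the page content
    secret="<fauna-secret-key>",     # placeholder secret key
    metadata_fields=["category"],    # optional metadata fields
)
docs = loader.load()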
Methods
__init__(query, page_content_field, secret)
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using FaunaLoader¶
Fauna
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.fauna.FaunaLoader.html
|