on_chain_start(serialized, inputs, **kwargs)[source]ο
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Run when chain errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_tool_end(output, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run when agent is ending.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run when agent ends running.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
flush_tracker(langchain_asset=None, reset=True, finish=False, job_type=None, project=None, entity=None, tags=None, group=None, name=None, notes=None, visualize=None, complexity_metrics=None)[source]ο
Flush the tracker and reset the session.
Parameters
langchain_asset (Any) β The langchain asset to save.
reset (bool) β Whether to reset the session.
finish (bool) β Whether to finish the run.
job_type (Optional[str]) β The job type.
project (Optional[str]) β The project.
entity (Optional[str]) β The entity.
tags (Optional[Sequence]) β The tags.
group (Optional[str]) β The group.
name (Optional[str]) β The name.
notes (Optional[str]) β The notes.
visualize (Optional[bool]) β Whether to visualize.
complexity_metrics (Optional[bool]) β Whether to compute complexity metrics.
Returns β None
Return type
None
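For illustration, a hedged sketch of ending a session with flush_tracker; handler and llm are hypothetical names for the handler instance documented here and the LangChain asset it tracked, not part of this reference:
# `handler` and `llm` are assumptions for illustration only
handler.flush_tracker(langchain_asset=llm, reset=False, finish=True)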
class langchain.callbacks.WhyLabsCallbackHandler(logger)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
WhyLabs CallbackHandler.
Parameters
logger (Logger) β
on_llm_start(serialized, prompts, **kwargs)[source]ο
Pass the input prompts to the logger
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Pass the generated response to the logger.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Do nothing.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Do nothing.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Do nothing.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Do nothing.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_agent_action(action, color=None, **kwargs)[source]ο
Do nothing.
Parameters
action (langchain.schema.AgentAction) β
color (Optional[str]) β
kwargs (Any) β
Return type
Any
on_tool_end(output, color=None, observation_prefix=None, llm_prefix=None, **kwargs)[source]ο
Do nothing.
Parameters
output (str) β
color (Optional[str]) β
observation_prefix (Optional[str]) –
llm_prefix (Optional[str]) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Do nothing.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, color=None, **kwargs)[source]ο
Run on agent end.
Parameters
finish (langchain.schema.AgentFinish) β
color (Optional[str]) β
kwargs (Any) β
Return type
None
flush()[source]ο
Return type
None
close()[source]ο
Return type
None
classmethod from_params(*, api_key=None, org_id=None, dataset_id=None, sentiment=False, toxicity=False, themes=False)[source]ο
Instantiate whylogs Logger from params.
Parameters
api_key (Optional[str]) β WhyLabs API key. Optional because the preferred
way to specify the API key is with environment variable
WHYLABS_API_KEY.
org_id (Optional[str]) β WhyLabs organization id to write profiles to.
If not set must be specified in environment variable
WHYLABS_DEFAULT_ORG_ID.
dataset_id (Optional[str]) β The model or dataset this callback is gathering
telemetry for. If not set must be specified in environment variable
WHYLABS_DEFAULT_DATASET_ID.
sentiment (bool) β If True will initialize a model to perform
sentiment analysis compound score. Defaults to False and will not gather
this metric.
toxicity (bool) β If True will initialize a model to score
toxicity. Defaults to False and will not gather this metric.
themes (bool) – If True will initialize a model to calculate
distance to configured themes. Defaults to False and will not gather this
metric.
Return type
Logger
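A hedged construction sketch based on the signatures above: from_params returns a whylogs Logger, which the handler constructor accepts. Assumes the WHYLABS_* environment variables described above are set.
from langchain.callbacks import WhyLabsCallbackHandler

# assumes WHYLABS_API_KEY, WHYLABS_DEFAULT_ORG_ID and
# WHYLABS_DEFAULT_DATASET_ID are set in the environment
logger = WhyLabsCallbackHandler.from_params(sentiment=True, toxicity=True)
whylabs_handler = WhyLabsCallbackHandler(logger)
# ... pass whylabs_handler to an LLM via its callbacks, then:
whylabs_handler.close()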
langchain.callbacks.get_openai_callback()[source]ο
Get the OpenAI callback handler in a context manager,
which conveniently exposes token and cost information.
Returns
The OpenAI callback handler.
Return type
OpenAICallbackHandler
Example
>>> with get_openai_callback() as cb:
... # Use the OpenAI callback handler
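A slightly fuller sketch of the same pattern, assuming an OpenAI LLM with OPENAI_API_KEY set in the environment; total_tokens and total_cost are counters the handler exposes:
from langchain.callbacks import get_openai_callback
from langchain.llms import OpenAI

llm = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
with get_openai_callback() as cb:
    llm("Tell me a joke")
print(cb.total_tokens, cb.total_cost)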
langchain.callbacks.tracing_enabled(session_name='default')[source]ο
Get the Deprecated LangChainTracer in a context manager.
Parameters
session_name (str, optional) β The name of the session.
Defaults to 'default'.
Returns
The LangChainTracer session.
Return type
TracerSessionV1
Example
>>> with tracing_enabled() as session:
... # Use the LangChainTracer session
langchain.callbacks.wandb_tracing_enabled(session_name='default')[source]ο
Get the WandbTracer in a context manager.
Parameters
session_name (str, optional) β The name of the session.
Defaults to 'default'.
Returns
None
Return type
Generator[None, None, None]
Example
>>> with wandb_tracing_enabled() as session:
... # Use the WandbTracer session
Document Loaders
All different types of document loaders.
class langchain.document_loaders.AcreomLoader(path, encoding='UTF-8', collect_metadata=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Parameters
path (str) β
encoding (str) β
collect_metadata (bool) β
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)ο
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AZLyricsLoader(web_path, header_template=None, verify=True)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads AZLyrics webpages.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
load()[source]ο
Load webpage.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AirbyteJSONLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads local airbyte json files.
Parameters
file_path (str) β
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AirtableLoader(api_token, table_id, base_id)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader for Airtable tables.
Parameters
api_token (str) β
table_id (str) –
base_id (str) β
lazy_load()[source]ο
Lazy load records from table.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load Table.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ApifyDatasetLoader(dataset_id, dataset_mapping_function)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Logic for loading documents from Apify datasets.
Parameters
dataset_id (str) β
dataset_mapping_function (Callable[[Dict], langchain.schema.Document]) β
Return type
None
attribute apify_client: Any = Noneο
attribute dataset_id: str [Required]ο
The ID of the dataset on the Apify platform.
attribute dataset_mapping_function: Callable[[Dict], langchain.schema.Document] [Required]ο
A custom function that takes a single dictionary (an Apify dataset item)
and converts it to an instance of the Document class.
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ArxivLoader(query, load_max_docs=100, load_all_available_meta=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from arxiv.org into a list of Documents.
Each query result is returned as one Document.
The loader converts the original PDF format into text.
Parameters
query (str) β
load_max_docs (Optional[int]) β
load_all_available_meta (Optional[bool]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
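A minimal usage sketch for the signature above (the query string is a placeholder):
from langchain.document_loaders import ArxivLoader

docs = ArxivLoader(query="quantum computing", load_max_docs=2).load()
print(docs[0].metadata)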
class langchain.document_loaders.AzureBlobStorageContainerLoader(conn_str, container, prefix='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from Azure Blob Storage.
Parameters
conn_str (str) β
container (str) β
prefix (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AzureBlobStorageFileLoader(conn_str, container, blob_name)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from Azure Blob Storage.
Parameters
conn_str (str) β
container (str) β
blob_name (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BSHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses beautiful soup to parse HTML files.
Parameters
file_path (str) β
open_encoding (Optional[str]) β
bs_kwargs (Optional[dict]) β
get_text_separator (str) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BibtexLoader(file_path, *, parser=None, max_docs=None, max_content_chars=4000, load_extra_metadata=False, file_pattern='[^:]+\\.pdf')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a bibtex file into a list of Documents.
Each document represents one entry from the bibtex file.
If a PDF file is present in the file bibtex field, the original PDF
is loaded into the document text. If no such file entry is present,
the abstract field is used instead.
Parameters
file_path (str) β
parser (Optional[langchain.utilities.bibtex.BibtexparserWrapper]) β
max_docs (Optional[int]) β
max_content_chars (Optional[int]) β
load_extra_metadata (bool) β
file_pattern (str) β
lazy_load()[source]ο
Load bibtex file using bibtexparser and get the article texts plus the
article metadata.
See https://bibtexparser.readthedocs.io/en/master/
Returns
a list of documents with the document.page_content in text format
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load bibtex file documents from the given bibtex file path.
See https://bibtexparser.readthedocs.io/en/master/
Parameters
file_path β the path to the bibtex file
Returns
a list of documents with the document.page_content in text format
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BigQueryLoader(query, project=None, page_content_columns=None, metadata_columns=None, credentials=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from BigQuery into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Parameters
query (str) β
project (Optional[str]) β
page_content_columns (Optional[List[str]]) β
metadata_columns (Optional[List[str]]) β
credentials (Optional[Credentials]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
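A hedged sketch of splitting columns between page_content and metadata (project, dataset, table and column names are hypothetical):
from langchain.document_loaders import BigQueryLoader

QUERY = "SELECT title, abstract, url FROM `my-project.my_dataset.articles`"
loader = BigQueryLoader(
    QUERY,
    page_content_columns=["title", "abstract"],
    metadata_columns=["url"],
)
docs = loader.load()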
class langchain.document_loaders.BiliBiliLoader(video_urls)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads bilibili transcripts.
Parameters
video_urls (List[str]) β
load()[source]ο
Load from bilibili url.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BlackboardLoader(blackboard_course_url, bbrouter, load_all_recursively=True, basic_auth=None, cookies=None)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads all documents from a Blackboard course.
This loader is not compatible with all Blackboard courses. It is only
compatible with courses that use the new Blackboard interface.
To use this loader, you must have the BbRouter cookie. You can get this
cookie by logging into the course and then copying the value of the
BbRouter cookie from the browser's developer tools.
Example
from langchain.document_loaders import BlackboardLoader
loader = BlackboardLoader(
blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
bbrouter="expires:12345...",
)
documents = loader.load()
Parameters
blackboard_course_url (str) β
bbrouter (str) –
load_all_recursively (bool) β
basic_auth (Optional[Tuple[str, str]]) β
cookies (Optional[dict]) β
folder_path: strο
base_url: strο
load_all_recursively: boolο
check_bs4()[source]ο
Check if BeautifulSoup4 is installed.
Raises
ImportError β If BeautifulSoup4 is not installed.
Return type
None
load()[source]ο
Load data into document objects.
Returns
List of documents.
Return type
List[langchain.schema.Document]
download(path)[source]ο
Download a file from a url.
Parameters
path (str) β Path to the file.
Return type
None
parse_filename(url)[source]ο
Parse the filename from a url.
Parameters
url (str) β Url to parse the filename from.
Returns
The filename.
Return type
str
class langchain.document_loaders.Blob(*, data=None, mimetype=None, encoding='utf-8', path=None)[source]ο
Bases: pydantic.main.BaseModel
A blob is used to represent raw data by either reference or value.
Provides an interface to materialize the blob in different representations, and
help to decouple the development of data loaders from the downstream parsing of
the raw data.
Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob
Parameters
data (Optional[Union[bytes, str]]) β
mimetype (Optional[str]) β
encoding (str) β
path (Optional[Union[str, pathlib.PurePath]]) β
Return type
None
attribute data: Optional[Union[bytes, str]] = None
attribute encoding: str = 'utf-8'ο
attribute mimetype: Optional[str] = Noneο
attribute path: Optional[Union[str, pathlib.PurePath]] = Noneο
as_bytes()[source]ο
Read data as bytes.
Return type
bytes
as_bytes_io()[source]ο
Read data as a byte stream.
Return type
Generator[Union[_io.BytesIO, _io.BufferedReader], None, None]
as_string()[source]ο
Read data as a string.
Return type
str
classmethod from_data(data, *, encoding='utf-8', mime_type=None, path=None)[source]ο
Initialize the blob from in-memory data.
Parameters
data (Union[str, bytes]) β the in-memory data associated with the blob
encoding (str) β Encoding to use if decoding the bytes into a string
mime_type (Optional[str]) β if provided, will be set as the mime-type of the data
path (Optional[str]) β if provided, will be set as the source from which the data came
Returns
Blob instance
Return type
langchain.document_loaders.blob_loaders.schema.Blob
classmethod from_path(path, *, encoding='utf-8', mime_type=None, guess_type=True)[source]ο
Load the blob from a path like object.
Parameters
path (Union[str, pathlib.PurePath]) β path like object to file to be read
encoding (str) β Encoding to use if decoding the bytes into a string
mime_type (Optional[str]) β if provided, will be set as the mime-type of the data
guess_type (bool) β If True, the mimetype will be guessed from the file extension,
if a mime-type was not provided
Returns
Blob instance
Return type
langchain.document_loaders.blob_loaders.schema.Blob
property source: Optional[str]ο
The source location of the blob as a string if known, otherwise None.
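A short sketch of the constructors and accessors above ('report.pdf' is a placeholder path):
from langchain.document_loaders import Blob

blob = Blob.from_data("hello world", mime_type="text/plain")
print(blob.as_string())
pdf_blob = Blob.from_path("report.pdf")  # mimetype guessed from the extension
raw = pdf_blob.as_bytes()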
class langchain.document_loaders.BlobLoader[source]ο
Bases: abc.ABC
Abstract interface for blob loaders implementation.
Implementer should be able to load raw content from a storage system according
to some criteria and return the raw content lazily as a stream of blobs.
abstract yield_blobs()[source]ο
A lazy loader for raw data represented by LangChain's Blob object.
Returns
A generator over blobs
Return type
Iterable[langchain.document_loaders.blob_loaders.schema.Blob]
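To make the contract concrete, a minimal hypothetical implementer that lazily yields a single blob:
from typing import Iterable
from langchain.document_loaders import Blob, BlobLoader

class SingleFileBlobLoader(BlobLoader):
    # hypothetical loader: yields one blob for a fixed path
    def __init__(self, path: str) -> None:
        self.path = path

    def yield_blobs(self) -> Iterable[Blob]:
        yield Blob.from_path(self.path)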
class langchain.document_loaders.BlockchainDocumentLoader(contract_address, blockchainType=BlockchainType.ETH_MAINNET, api_key='docs-demo', startToken='', get_all_tokens=False, max_execution_time=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads elements from a blockchain smart contract into Langchain documents.
The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,
Polygon mainnet, and Polygon Mumbai testnet.
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain.
ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the
startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens
on the contract. Note that for contracts with a large number of tokens,
this may take a long time (e.g. 10k tokens is 100 requests).
Default value is False for this reason.
The max_execution_time (sec) can be set to limit the execution time
of the loader.
Future versions of this loader can:
Support additional Alchemy APIs (e.g. getTransactions, etc.)
Support additional blockchain APIs (e.g. Infura, Opensea, etc.)
Parameters
contract_address (str) β
blockchainType (langchain.document_loaders.blockchain.BlockchainType) β
api_key (str) β
startToken (str) β
get_all_tokens (bool) β
max_execution_time (Optional[int]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
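A hedged sketch of the parameters above (the contract address is a placeholder; assumes ALCHEMY_API_KEY is set as described):
from langchain.document_loaders import BlockchainDocumentLoader
from langchain.document_loaders.blockchain import BlockchainType

loader = BlockchainDocumentLoader(
    contract_address="0x...",  # placeholder NFT contract address
    blockchainType=BlockchainType.ETH_MAINNET,
)
nfts = loader.load()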
class langchain.document_loaders.CSVLoader(file_path, source_column=None, csv_args=None, encoding=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a CSV file into a list of documents.
Each document represents one row of the CSV file. Every row is converted into a
key/value pair and output to a new line in the document's page_content.
The source for each document loaded from csv is set to the value of the
file_path argument for all documents by default.
You can override this by setting the source_column argument to the
name of a column in the CSV file.
The source of each document will then be set to the value of the column
with the name specified in source_column.
Output example:
column1: value1
column2: value2
column3: value3
Parameters
file_path (str) β
source_column (Optional[str]) β
csv_args (Optional[Dict]) β
encoding (Optional[str]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
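A short sketch using source_column as described above (file, column and csv_args values are placeholders; csv_args is forwarded to Python's csv reader):
from langchain.document_loaders import CSVLoader

loader = CSVLoader(
    file_path="companies.csv",
    source_column="website",      # per-row source instead of the file path
    csv_args={"delimiter": ";"},
)
docs = loader.load()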
class langchain.document_loaders.ChatGPTLoader(log_file, num_logs=- 1)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads conversations from exported ChatGPT data.
Parameters
log_file (str) β
num_logs (int) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.CoNLLULoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load CoNLL-U files.
Parameters
file_path (str) β
load()[source]ο
Load from file path.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.CollegeConfidentialLoader(web_path, header_template=None, verify=True)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads College Confidential webpages.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
load()[source]ο
Load webpage.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ConfluenceLoader(url, api_key=None, username=None, oauth2=None, token=None, cloud=True, number_of_retries=3, min_retry_seconds=2, max_retry_seconds=10, confluence_kwargs=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load Confluence pages. Port of https://llamahub.ai/l/confluence
This currently supports username/api_key, Oauth2 login or personal access token
authentication.
Specify a list page_ids and/or space_key to load in the corresponding pages into
Document objects, if both are specified the union of both sets will be returned.
You can also specify a boolean include_attachments to include attachments, this
is set to False by default, if set to True all attachments will be downloaded and
ConfluenceReader will extract the text from the attachments and add it to the
Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,
SVG, Word and Excel.
The Confluence API supports different formats of page content. The storage format is the
raw XML representation used for storage. The view format is the HTML representation,
with macros rendered as they would be viewed by users. You can pass
an enum content_format argument to load() to specify the content format; this is
set to ContentFormat.STORAGE by default.
Hint: space_key and page_id can both be found in the URL of a page in Confluence
- https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_key="12345"
)
documents = loader.load(space_key="SPACE",limit=50)
Parameters
url (str) β _description_
api_key (str, optional) β _description_, defaults to None
username (str, optional) β _description_, defaults to None
oauth2 (dict, optional) β _description_, defaults to {}
token (str, optional) β _description_, defaults to None
cloud (bool, optional) – _description_, defaults to True
number_of_retries (Optional[int], optional) β How many times to retry, defaults to 3
min_retry_seconds (Optional[int], optional) β defaults to 2
max_retry_seconds (Optional[int], optional) β defaults to 10
confluence_kwargs (dict, optional) β additional kwargs to initialize confluence with
Raises
ValueError β Errors while validating input
ImportError β Required dependencies not installed.
static validate_init_args(url=None, api_key=None, username=None, oauth2=None, token=None)[source]ο
Validates proper combinations of init arguments
Parameters
url (Optional[str]) β
api_key (Optional[str]) β
username (Optional[str]) β
oauth2 (Optional[dict]) β
token (Optional[str]) β
Return type
Optional[List]
load(space_key=None, page_ids=None, label=None, cql=None, include_restricted_content=False, include_archived_content=False, include_attachments=False, include_comments=False, content_format=ContentFormat.STORAGE, limit=50, max_pages=1000, ocr_languages=None)[source]ο
Parameters
space_key (Optional[str], optional) β Space key retrieved from a confluence URL, defaults to None
page_ids (Optional[List[str]], optional) β List of specific page IDs to load, defaults to None
label (Optional[str], optional) β Get all pages with this label, defaults to None
cql (Optional[str], optional) β CQL Expression, defaults to None
include_restricted_content (bool, optional) β defaults to False
include_archived_content (bool, optional) β Whether to include archived content,
defaults to False
include_attachments (bool, optional) β defaults to False
include_comments (bool, optional) – defaults to False
content_format (ContentFormat) β Specify content format, defaults to ContentFormat.STORAGE
limit (int, optional) β Maximum number of pages to retrieve per request, defaults to 50
max_pages (int, optional) β Maximum number of pages to retrieve in total, defaults 1000
ocr_languages (str, optional) – The languages to use for the Tesseract agent. To use a
language, you'll first need to install the appropriate
Tesseract language pack.
Raises
ValueError β _description_
ImportError β _description_
Returns
_description_
Return type
List[Document]
paginate_request(retrieval_method, **kwargs)[source]ο
Paginate the various methods to retrieve groups of pages.
Unfortunately, due to page size, sometimes the Confluence API
doesn't match the limit value. If limit is >100 confluence
seems to cap the response to 100. Also, due to the Atlassian Python
package, we don't get the 'next' values from the '_links' key because
they only return the value from the results key. So here, the pagination
starts from 0 and goes until the max_pages, getting the limit number
of pages with each request. We have to manually check if there
are more docs based on the length of the returned list of pages, rather than
just checking for the presence of a next key in the response like this page
would have you do:
https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/
Parameters
retrieval_method (callable) β Function used to retrieve docs
kwargs (Any) β
Returns
List of documents
Return type
List
is_public_page(page)[source]ο
Check if a page is publicly accessible.
Parameters
page (dict) –
Return type
bool
process_pages(pages, include_restricted_content, include_attachments, include_comments, content_format, ocr_languages=None)[source]ο
Process a list of pages into a list of documents.
Parameters
pages (List[dict]) β
include_restricted_content (bool) β
include_attachments (bool) β
include_comments (bool) β
content_format (langchain.document_loaders.confluence.ContentFormat) β
ocr_languages (Optional[str]) β
Return type
List[langchain.schema.Document]
process_page(page, include_attachments, include_comments, content_format, ocr_languages=None)[source]ο
Parameters
page (dict) β
include_attachments (bool) β
include_comments (bool) β
content_format (langchain.document_loaders.confluence.ContentFormat) β
ocr_languages (Optional[str]) β
Return type
langchain.schema.Document
process_attachment(page_id, ocr_languages=None)[source]ο
Parameters
page_id (str) β
ocr_languages (Optional[str]) β
Return type
List[str]
process_pdf(link, ocr_languages=None)[source]ο
Parameters
link (str) β
ocr_languages (Optional[str]) β
Return type
str
process_image(link, ocr_languages=None)[source]ο
Parameters
link (str) β
ocr_languages (Optional[str]) β
Return type
str
process_doc(link)[source]ο
Parameters
link (str) β
Return type
str
process_xls(link)[source]ο
Parameters
link (str) β
Return type
str
process_svg(link, ocr_languages=None)[source]ο
Parameters
link (str) β
ocr_languages (Optional[str]) β
Return type
str
class langchain.document_loaders.DataFrameLoader(data_frame, page_content_column='text')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load Pandas DataFrames.
Parameters
data_frame (Any) β
page_content_column (str) β
lazy_load()[source]ο
Lazy load records from dataframe.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load full dataframe.
Return type
List[langchain.schema.Document]
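A minimal sketch; per the signature above, page_content_column becomes the document text (the remaining columns ending up as metadata is an assumption worth verifying):
import pandas as pd
from langchain.document_loaders import DataFrameLoader

df = pd.DataFrame({"text": ["first row", "second row"], "author": ["a", "b"]})
docs = DataFrameLoader(df, page_content_column="text").load()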
class langchain.document_loaders.DiffbotLoader(api_token, urls, continue_on_failure=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Diffbot file json.
Parameters
api_token (str) β
urls (List[str]) β
continue_on_failure (bool) β
load()[source]ο
Extract text from Diffbot on all the URLs and return Document instances
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DirectoryLoader(path, glob='**/[!.]*', silent_errors=False, load_hidden=False, loader_cls=<class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs=None, recursive=False, show_progress=False, use_multithreading=False, max_concurrency=4)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from a directory.
Parameters
path (str) β
glob (str) β
silent_errors (bool) β
load_hidden (bool) –
loader_cls (Union[Type[langchain.document_loaders.unstructured.UnstructuredFileLoader], Type[langchain.document_loaders.text.TextLoader], Type[langchain.document_loaders.html_bs.BSHTMLLoader]]) β
loader_kwargs (Optional[dict]) β
recursive (bool) β
show_progress (bool) β
use_multithreading (bool) β
max_concurrency (int) β
load_file(item, path, docs, pbar)[source]ο
Parameters
item (pathlib.Path) β
path (pathlib.Path) β
docs (List[langchain.schema.Document]) β
pbar (Optional[Any]) β
Return type
None
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
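A hedged sketch combining the options above (directory and glob are placeholders; TextLoader is one of the documented loader_cls choices):
from langchain.document_loaders import DirectoryLoader, TextLoader

loader = DirectoryLoader(
    "./docs",
    glob="**/*.md",
    loader_cls=TextLoader,
    show_progress=True,
    use_multithreading=True,
)
docs = loader.load()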
class langchain.document_loaders.DiscordChatLoader(chat_log, user_id_col='ID')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load Discord chat logs.
Parameters
chat_log (pd.DataFrame) β
user_id_col (str) β
load()[source]ο
Load all chat messages.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DocugamiLoader(*, api='https://api.docugami.com/v1preview1', access_token=None, docset_id=None, document_ids=None, file_paths=None, min_chunk_size=32)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Loader that loads processed docs from Docugami.
To use, you should have the lxml python package installed.
Parameters
api (str) β
access_token (Optional[str]) –
docset_id (Optional[str]) β
document_ids (Optional[Sequence[str]]) β
file_paths (Optional[Sequence[Union[pathlib.Path, str]]]) β
min_chunk_size (int) β
Return type
None
attribute access_token: Optional[str] = Noneο
attribute api: str = 'https://api.docugami.com/v1preview1'ο
attribute docset_id: Optional[str] = Noneο
attribute document_ids: Optional[Sequence[str]] = Noneο
attribute file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = Noneο
attribute min_chunk_size: int = 32ο
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.Docx2txtLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, abc.ABC
Loads a DOCX with docx2txt and chunks at character level.
Defaults to checking for a local file, but if the file is a web path, it will download
it to a temporary file, use that, and then clean up the temporary file after completion.
Parameters
file_path (str) β
load()[source]ο
Load given path as single page.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DuckDBLoader(query, database=':memory:', read_only=False, config=None, page_content_columns=None, metadata_columns=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from DuckDB into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Parameters
query (str) β
database (str) β
read_only (bool) β
config (Optional[Dict[str, str]]) β
page_content_columns (Optional[List[str]]) β
metadata_columns (Optional[List[str]]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.EmbaasBlobLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={})[source]ο
Bases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseBlobParser
Wrapper around embaas's document byte loader service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader()
blob = Blob.from_path(path="example.mp3")
documents = loader.parse(blob=blob)
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
)
blob = Blob.from_path(path="example.pdf")
documents = loader.parse(blob=blob)
Parameters
embaas_api_key (Optional[str]) β
api_url (str) β
params (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) β
Return type
None
lazy_parse(blob)[source]ο
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob (langchain.document_loaders.blob_loaders.schema.Blob) β Blob instance
Returns
Generator of documents
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.EmbaasLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={}, file_path, blob_loader=None)[source]ο
Bases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseLoader
Wrapper around embaas's document loader service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(file_path="example.mp3")
documents = loader.load()
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
file_path="example.pdf",
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256, | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
98c621fca5f3-19 | "chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
)
documents = loader.load()
Parameters
embaas_api_key (Optional[str]) β
api_url (str) β
params (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) β
file_path (str) β
blob_loader (Optional[langchain.document_loaders.embaas.EmbaasBlobLoader]) β
Return type
None
attribute blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = Noneο
The blob loader to use. If not provided, a default one will be created.
attribute file_path: str [Required]ο
The path to the file to load.
lazy_load()[source]ο
Load the documents from the file path lazily.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
load_and_split(text_splitter=None)[source]ο
Load documents and split into chunks.
Parameters
text_splitter (Optional[langchain.text_splitter.TextSplitter]) β
Return type
List[langchain.schema.Document]
class langchain.document_loaders.EverNoteLoader(file_path, load_single_document=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
EverNote Loader.
Loads an EverNote notebook export file e.g. my_notebook.enex into Documents.
Instructions on producing this file can be found at
https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML
Currently only the plain text in the note is extracted and stored as the contents
of the Document, any non-content metadata (e.g. 'author', 'created', 'updated' etc.
but not 'content-raw' or 'resource') tags on the note will be extracted and stored
as metadata on the Document.
Parameters
file_path (str) – The path to the notebook export with a .enex extension
load_single_document (bool) – Whether or not to concatenate the content of all
notes into a single long Document. If this is set to True, the only metadata on
the document will be the 'source', which contains the file name of the export.
load()[source]ο
Load documents from EverNote export file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FacebookChatLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Facebook messages json directory dump.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FaunaLoader(query, page_content_field, secret, metadata_fields=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
FaunaDB Loader.
Parameters
query (str) β
page_content_field (str) β
secret (str) β
metadata_fields (Optional[Sequence[str]]) β
queryο
The FQL query string to execute.
Type
str
page_content_fieldο
The field that contains the content of each page.
Type
str
secretο
The secret key for authenticating to FaunaDB.
Type
str
metadata_fieldsο
Optional list of field names to include in metadata.
Type
Optional[Sequence[str]]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.FigmaFileLoader(access_token, ids, key)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Figma file json.
Parameters
access_token (str) β
ids (str) β
key (str) β
load()[source]ο
Load file
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FileSystemBlobLoader(path, *, glob='**/[!.]*', suffixes=None, show_progress=False)[source]ο
Bases: langchain.document_loaders.blob_loaders.schema.BlobLoader
Blob loader for the local file system.
Example:
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = FileSystemBlobLoader("/path/to/directory")
for blob in loader.yield_blobs():
print(blob)
Parameters
path (Union[str, pathlib.Path]) β
glob (str) β
suffixes (Optional[Sequence[str]]) β
show_progress (bool) β
Return type
None
yield_blobs()[source]ο
Yield blobs that match the requested pattern.
Return type
Iterable[langchain.document_loaders.blob_loaders.schema.Blob]
count_matching_files()[source]ο
Count files that match the pattern without loading them.
Return type
int
class langchain.document_loaders.GCSDirectoryLoader(project_name, bucket, prefix='')[source]
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from GCS.
Parameters
project_name (str) β
bucket (str) β
prefix (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GCSFileLoader(project_name, bucket, blob)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from GCS.
Parameters
project_name (str) β
bucket (str) β
blob (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GitHubIssuesLoader(*, repo, access_token, include_prs=True, milestone=None, state=None, assignee=None, creator=None, mentioned=None, labels=None, sort=None, direction=None, since=None)[source]ο
Bases: langchain.document_loaders.github.BaseGitHubLoader
Parameters
repo (str) β
access_token (str) β
include_prs (bool) β
milestone (Optional[Union[int, Literal['*', 'none']]]) β
state (Optional[Literal['open', 'closed', 'all']]) β
assignee (Optional[str]) β
creator (Optional[str]) β
mentioned (Optional[str]) β
labels (Optional[List[str]]) β
sort (Optional[Literal['created', 'updated', 'comments']]) β
direction (Optional[Literal['asc', 'desc']]) β
since (Optional[str]) β
Return type
None
attribute assignee: Optional[str] = None
Filter on assigned user. Pass 'none' for no user and '*' for any user.
attribute creator: Optional[str] = None
Filter on the user that created the issue.
attribute direction: Optional[Literal['asc', 'desc']] = None
The direction to sort the results by. Can be one of: 'asc', 'desc'.
attribute include_prs: bool = True
If True include Pull Requests in results, otherwise ignore them.
attribute labels: Optional[List[str]] = None
Label names to filter on. Example: bug,ui,@high.
attribute mentioned: Optional[str] = None
Filter on a user that's mentioned in the issue.
attribute milestone: Optional[Union[int, Literal['*', 'none']]] = None
If integer is passed, it should be a milestone's number field.
If the string '*' is passed, issues with any milestone are accepted.
If the string 'none' is passed, issues without milestones are returned.
attribute since: Optional[str] = None
Only show notifications updated after the given time.
This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.
attribute sort: Optional[Literal['created', 'updated', 'comments']] = None
What to sort results by. Can be one of: 'created', 'updated', 'comments'.
Default is 'created'.
attribute state: Optional[Literal['open', 'closed', 'all']] = None
Filter on issue state. Can be one of: 'open', 'closed', 'all'.
lazy_load()[source]ο
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
load()[source]ο
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
parse_issue(issue)[source]ο
Create Document objects from a list of GitHub issues.
Parameters
issue (dict) β
Return type
langchain.schema.Document
property query_params: strο
property url: strο
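A hedged filtering sketch built from the attributes above (repo and token values are placeholders):
from langchain.document_loaders import GitHubIssuesLoader

loader = GitHubIssuesLoader(
    repo="owner/repo",       # placeholder
    access_token="ghp_...",  # placeholder personal access token
    state="open",
    include_prs=False,
    labels=["bug"],
)
issues = loader.load()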
class langchain.document_loaders.GitLoader(repo_path, clone_url=None, branch='main', file_filter=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads files from a Git repository into a list of documents.
Repository can be local on disk available at repo_path,
or remote at clone_url that will be cloned to repo_path.
Currently supports only text files.
Each document represents one file in the repository. The path points to
the local Git repository, and the branch specifies the branch to load
files from. By default, it loads from the main branch.
Parameters
repo_path (str) β
clone_url (Optional[str]) β
branch (Optional[str]) β
file_filter (Optional[Callable[[str], bool]]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
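A short sketch of cloning a remote repository and filtering files (URL and paths are placeholders):
from langchain.document_loaders import GitLoader

loader = GitLoader(
    repo_path="./example_repo",                 # local checkout target
    clone_url="https://github.com/owner/repo",  # placeholder remote
    branch="main",
    file_filter=lambda p: p.endswith(".py"),    # keep only Python files
)
docs = loader.load()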
class langchain.document_loaders.GitbookLoader(web_page, load_all_paths=False, base_url=None, content_selector='main')[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Load GitBook data.
Load from either a single page, or
load all (relative) paths in the navbar.
Parameters
web_page (str) β
load_all_paths (bool) β
base_url (Optional[str]) β
content_selector (str) β
load()[source]ο
Fetch text from one single GitBook page.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GoogleApiClient(credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), service_account_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'))[source]ο
Bases: object
A Generic Google Api Client.
To use, you should have the google_auth_oauthlib, youtube_transcript_api and google
python packages installed.
As the Google API expects credentials, you need to set up a Google account and
register your service: https://developers.google.com/docs/api/quickstart/python
Example
from langchain.document_loaders import GoogleApiClient
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
Parameters
credentials_path (pathlib.Path) β
service_account_path (pathlib.Path) β
token_path (pathlib.Path) β
Return type
None
credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')
service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')
token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')ο
classmethod validate_channel_or_videoIds_is_set(values)[source]ο
Validate that either folder_id or document_ids is set, but not both.
Parameters
values (Dict[str, Any]) β
Return type
Dict[str, Any]
class langchain.document_loaders.GoogleApiYoutubeLoader(google_api_client, channel_name=None, video_ids=None, add_video_info=True, captions_language='en', continue_on_failure=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads all Videos from a Channel.
To use, you should have the googleapiclient and youtube_transcript_api
python packages installed.
As the service needs a google_api_client, you first have to initialize
the GoogleApiClient.
Additionally you have to either provide a channel name or a list of video ids:
https://developers.google.com/docs/api/quickstart/python
Example
from langchain.document_loaders import GoogleApiClient
from langchain.document_loaders import GoogleApiYoutubeLoader
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
loader = GoogleApiYoutubeLoader(
google_api_client=google_api_client,
channel_name = "CodeAesthetic"
)
loader.load()
Parameters
google_api_client (langchain.document_loaders.youtube.GoogleApiClient) β
channel_name (Optional[str]) β
video_ids (Optional[List[str]]) β
add_video_info (bool) β
captions_language (str) β
continue_on_failure (bool) β
Return type
None
google_api_client: langchain.document_loaders.youtube.GoogleApiClient
channel_name: Optional[str] = Noneο
video_ids: Optional[List[str]] = Noneο
add_video_info: bool = Trueο
captions_language: str = 'en'ο
continue_on_failure: bool = Falseο
classmethod validate_channel_or_videoIds_is_set(values)[source]ο
Validate that either folder_id or document_ids is set, but not both.
Parameters
values (Dict[str, Any]) β
Return type
Dict[str, Any]
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GoogleDriveLoader(*, service_account_key=PosixPath('/home/docs/.credentials/keys.json'), credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'), folder_id=None, document_ids=None, file_ids=None, recursive=False, file_types=None, load_trashed_files=False, file_loader_cls=None, file_loader_kwargs={})[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Loader that loads Google Docs from Google Drive.
Parameters
service_account_key (pathlib.Path) β
credentials_path (pathlib.Path) β
token_path (pathlib.Path) β
folder_id (Optional[str]) β
document_ids (Optional[List[str]]) β
file_ids (Optional[List[str]]) β
recursive (bool) β
file_types (Optional[Sequence[str]]) β
load_trashed_files (bool) β
file_loader_cls (Any) β
file_loader_kwargs (Dict[str, Any]) β
Return type
None
attribute credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')ο
attribute document_ids: Optional[List[str]] = Noneο
attribute file_ids: Optional[List[str]] = Noneο
attribute file_loader_cls: Any = Noneο
attribute file_loader_kwargs: Dict[str, Any] = {}ο
attribute file_types: Optional[Sequence[str]] = Noneο
attribute folder_id: Optional[str] = Noneο
attribute load_trashed_files: bool = Falseο
attribute recursive: bool = Falseο
attribute service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')ο
attribute token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')ο
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
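A hedged sketch (the folder id is a placeholder; credential paths fall back to the PosixPath defaults listed above):
from langchain.document_loaders import GoogleDriveLoader

loader = GoogleDriveLoader(
    folder_id="1x2y3z...",  # placeholder Drive folder id
    recursive=False,
)
docs = loader.load()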
class langchain.document_loaders.GutenbergLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses urllib to load .txt web files.
Parameters
file_path (str) β
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.HNLoader(web_path, header_template=None, verify=True)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Load Hacker News data from either main page results or the comments page.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
load()[source]ο
Get important HN webpage information.
Components are:
title
content
source url,
time of post
author of the post
number of comments
rank of the post
Return type
List[langchain.schema.Document]
load_comments(soup_info)[source]ο
Load comments from a HN post.
Parameters
soup_info (Any) β
Return type
List[langchain.schema.Document]
load_results(soup)[source]ο
Load items from an HN page.
Parameters
soup (Any) β
Return type
List[langchain.schema.Document]
class langchain.document_loaders.HuggingFaceDatasetLoader(path, page_content_column='text', name=None, data_dir=None, data_files=None, cache_dir=None, keep_in_memory=None, save_infos=False, use_auth_token=None, num_proc=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from the Hugging Face Hub.
Parameters
path (str) β
page_content_column (str) β
name (Optional[str]) β
data_dir (Optional[str]) β
data_files (Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]]) β
cache_dir (Optional[str]) β
keep_in_memory (Optional[bool]) β
save_infos (bool) β
use_auth_token (Optional[Union[bool, str]]) β
num_proc (Optional[int]) β
lazy_load()[source]ο
Load documents lazily.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
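A minimal sketch; 'imdb' and its 'text' column stand in for any Hub dataset and content column:
from langchain.document_loaders import HuggingFaceDatasetLoader

loader = HuggingFaceDatasetLoader(path="imdb", page_content_column="text")
docs = loader.load()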
class langchain.document_loaders.IFixitLoader(web_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load iFixit repair guides, device wikis and answers.
iFixit is the largest, open repair community on the web. The site contains nearly
100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is
licensed under CC-BY.
This loader will allow you to download the text of a repair guide, text of Q&As
and wikis from devices on iFixit using their open APIs and web scraping.
Parameters
web_path (str) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
static load_suggestions(query='', doc_type='all')[source]ο
Parameters
query (str) β
doc_type (str) β
Return type
List[langchain.schema.Document]
load_questions_and_answers(url_override=None)[source]ο
Parameters
url_override (Optional[str]) β
Return type
List[langchain.schema.Document]
load_device(url_override=None, include_guides=True)[source]ο
Parameters
url_override (Optional[str]) β
include_guides (bool) β
Return type
List[langchain.schema.Document]
load_guide(url_override=None)[source]ο
Parameters
url_override (Optional[str]) β
Return type
List[langchain.schema.Document]
class langchain.document_loaders.IMSDbLoader(web_path, header_template=None, verify=True)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads IMSDb webpages.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
load()[source]ο
Load webpage.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ImageCaptionLoader(path_images, blip_processor='Salesforce/blip-image-captioning-base', blip_model='Salesforce/blip-image-captioning-base')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads the captions of an image
Parameters
path_images (Union[str, List[str]]) β
blip_processor (str) β
blip_model (str) β
load()[source]ο
Load from a list of image files
Return type
List[langchain.schema.Document]
class langchain.document_loaders.IuguLoader(resource, api_token=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches data from IUGU.
Parameters
resource (str) β
api_token (Optional[str]) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.JSONLoader(file_path, jq_schema, content_key=None, metadata_func=None, text_content=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a JSON file, using a provided jq schema to extract the text into
documents.
Example
[{"text": ...}, {"text": ...}, {"text": ...}] -> schema = .[].text
{"key": [{"text": ...}, {"text": ...}, {"text": ...}]} -> schema = .key[].text
["", "", ""] -> schema = .[]
Parameters
file_path (Union[str, pathlib.Path]) β
jq_schema (str) β
content_key (Optional[str]) β
metadata_func (Optional[Callable[[Dict, Dict], Dict]]) β
text_content (bool) β
load()[source]ο
Load and return documents from the JSON file.
Return type
List[langchain.schema.Document]
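For instance, a minimal sketch for the first schema above, assuming the jq package is installed; the file name messages.json is hypothetical:
from langchain.document_loaders import JSONLoader
# pull the "text" field out of every element of a top-level JSON array
loader = JSONLoader(file_path="messages.json", jq_schema=".[].text")
docs = loader.load()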
class langchain.document_loaders.JoplinLoader(access_token=None, port=41184, host='localhost')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for "Web Clipper" in the app settings).
To get the access token, you need to go to the Web Clipper options and
under "Advanced Options" you will find the access token.
You can find more information about the Web Clipper service here:
https://joplinapp.org/clipper/
Parameters
access_token (Optional[str]) β
port (int) β
host (str) β
Return type
None
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MWDumpLoader(file_path, encoding='utf8')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load MediaWiki dump from XML file
Example
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader(
file_path="myWiki.xml",
encoding="utf8"
)
docs = loader.load()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, chunk_overlap=0
)
texts = text_splitter.split_documents(docs)
Parameters
file_path (str) β XML local file path
encoding (str, optional) β Charset encoding, defaults to βutf8β
load()[source]ο
Load from file path.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MastodonTootsLoader(mastodon_accounts, number_toots=100, exclude_replies=False, access_token=None, api_base_url='https://mastodon.social')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Mastodon toots loader.
Parameters
mastodon_accounts (Sequence[str]) β
number_toots (Optional[int]) β
exclude_replies (bool) β
access_token (Optional[str]) β
api_base_url (str) β
load()[source]ο
Load toots into documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MathpixPDFLoader(file_path, processed_file_format='mmd', max_wait_time_seconds=500, should_clean_pdf=False, **kwargs)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Parameters
file_path (str) β
processed_file_format (str) β
max_wait_time_seconds (int) β
should_clean_pdf (bool) β
kwargs (Any) β
Return type
None
property headers: dictο
property url: strο
property data: dictο
send_pdf()[source]ο
Return type
str
wait_for_processing(pdf_id)[source]ο
Parameters
pdf_id (str) β
Return type
None
get_processed_pdf(pdf_id)[source]ο
Parameters
pdf_id (str) β
Return type
str
clean_pdf(contents)[source]ο
Parameters
contents (str) β
Return type
str
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MaxComputeLoader(query, api_wrapper, *, page_content_columns=None, metadata_columns=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from an Alibaba Cloud MaxCompute table into documents.
Parameters
query (str) β
api_wrapper (MaxComputeAPIWrapper) β
page_content_columns (Optional[Sequence[str]]) β
metadata_columns (Optional[Sequence[str]]) β
classmethod from_params(query, endpoint, project, *, access_id=None, secret_access_key=None, **kwargs)[source]ο
Convenience constructor that builds the MaxCompute API wrapper from given parameters.
Parameters
query (str) β SQL query to execute.
endpoint (str) β MaxCompute endpoint.
project (str) β A project is a basic organizational unit of MaxCompute, which is
similar to a database.
access_id (Optional[str]) β MaxCompute access ID. Should be passed in directly or set as the
environment variable MAX_COMPUTE_ACCESS_ID.
secret_access_key (Optional[str]) β MaxCompute secret access key. Should be passed in
directly or set as the environment variable
MAX_COMPUTE_SECRET_ACCESS_KEY.
kwargs (Any) β
Return type
langchain.document_loaders.max_compute.MaxComputeLoader
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MergedDataLoader(loaders)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Merge documents from a list of loaders
Parameters
loaders (List) β
lazy_load()[source]ο
Lazy load docs from each individual loader.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load docs.
Return type
List[langchain.schema.Document]
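A minimal sketch, with hypothetical file names:
from langchain.document_loaders import MergedDataLoader, TextLoader
loader = MergedDataLoader(loaders=[TextLoader("a.txt"), TextLoader("b.txt")])
docs = loader.load()  # documents from both loaders, concatenated in order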
class langchain.document_loaders.MHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses beautiful soup to parse HTML files.
Parameters
file_path (str) β
open_encoding (Optional[str]) β
bs_kwargs (Optional[dict]) β
get_text_separator (str) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ModernTreasuryLoader(resource, organization_id=None, api_key=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches data from Modern Treasury.
Parameters
resource (str) β
organization_id (Optional[str]) β
api_key (Optional[str]) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.NotebookLoader(path, include_outputs=False, max_output_length=10, remove_newline=False, traceback=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads .ipynb notebook files.
Parameters
path (str) β
include_outputs (bool) β
max_output_length (int) β
remove_newline (bool) β
traceback (bool) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.NotionDBLoader(integration_token, database_id, request_timeout_sec=10)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Notion DB Loader.
Reads content from pages within a Notion database.
:param integration_token: Notion integration token.
:type integration_token: str
:param database_id: Notion database id.
:type database_id: str
:param request_timeout_sec: Timeout for Notion requests in seconds.
:type request_timeout_sec: int
Parameters
integration_token (str) β
database_id (str) β
request_timeout_sec (Optional[int]) β
Return type
None
load()[source]ο
Load documents from the Notion database.
:returns: List of documents.
:rtype: List[Document]
Return type
List[langchain.schema.Document]
load_page(page_summary)[source]ο
Read a page.
Parameters
page_summary (Dict[str, Any]) β
Return type
langchain.schema.Document
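A minimal sketch; the token and database id are hypothetical placeholders:
from langchain.document_loaders import NotionDBLoader
loader = NotionDBLoader(
    integration_token="<notion-integration-token>",
    database_id="<notion-database-id>",
    request_timeout_sec=30,
)
docs = loader.load()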
class langchain.document_loaders.NotionDirectoryLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Notion directory dump.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ObsidianLoader(path, encoding='UTF-8', collect_metadata=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Obsidian files from disk.
Parameters
path (str) β
encoding (str) β
collect_metadata (bool) β
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)ο
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.OneDriveFileLoader(*, file)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Parameters
file (File) β
Return type
None
attribute file: File [Required]ο
load()[source]ο
Load Documents
Return type
List[langchain.schema.Document]
class langchain.document_loaders.OneDriveLoader(*, settings=None, drive_id, folder_path=None, object_ids=None, auth_with_token=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Parameters
settings (langchain.document_loaders.onedrive._OneDriveSettings) β
drive_id (str) β
folder_path (Optional[str]) β
object_ids (Optional[List[str]]) β
auth_with_token (bool) β
Return type
None
attribute auth_with_token: bool = Falseο
attribute drive_id: str [Required]ο
attribute folder_path: Optional[str] = Noneο
attribute object_ids: Optional[List[str]] = Noneο
attribute settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional]ο
load()[source]ο
Loads all supported document files from the specified OneDrive drive
and returns a list of Document objects.
Returns
A list of Document objects
representing the loaded documents.
Return type
List[Document]
Raises
ValueError β If the specified drive ID
does not correspond to a drive in the OneDrive storage. β
class langchain.document_loaders.OnlinePDFLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loader that loads online PDFs.
Parameters
file_path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.OutlookMessageLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Outlook Message files using extract_msg.
https://github.com/TeamMsgExtractor/msg-extractor
Parameters
file_path (str) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.OpenCityDataLoader(city_id, dataset_id, limit)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Open city data.
Parameters
city_id (str) β
dataset_id (str) β
limit (int) β
lazy_load()[source]ο
Lazy load records.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load records.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PDFMinerLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loader that uses PDFMiner to load PDF files.
Parameters
file_path (str) β
Return type
None
load()[source]ο
Eagerly load the content.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
Lazily load documents.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.PDFMinerPDFasHTMLLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loader that uses PDFMiner to load PDF files as HTML content.
Parameters
file_path (str) β
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PDFPlumberLoader(file_path, text_kwargs=None)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loader that uses pdfplumber to load PDF files.
Parameters
file_path (str) β
text_kwargs (Optional[Mapping[str, Any]]) β
Return type
None
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
langchain.document_loaders.PagedPDFSplitterο
alias of langchain.document_loaders.pdf.PyPDFLoader
class langchain.document_loaders.PlaywrightURLLoader(urls, continue_on_failure=True, headless=True, remove_selectors=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses Playwright to load a page and unstructured to parse the HTML.
This is useful for loading pages that require JavaScript to render.
Parameters
urls (List[str]) β
continue_on_failure (bool) β
headless (bool) β
remove_selectors (Optional[List[str]]) β
urlsο
List of URLs to load.
Type
List[str]
continue_on_failureο
If True, continue loading other URLs on failure.
Type
bool
headlessο
If True, the browser will run in headless mode.
Type
bool
load()[source]ο
Load the specified URLs using Playwright and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
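A minimal sketch, assuming the playwright package is installed and browsers have been set up with the playwright install command:
from langchain.document_loaders import PlaywrightURLLoader
urls = ["https://python.langchain.com/en/latest/"]
# remove_selectors drops matching elements (e.g. navigation chrome) before parsing
loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"])
docs = loader.load()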
class langchain.document_loaders.PsychicLoader(api_key, connector_id, connection_id)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads documents from Psychic.dev.
Parameters
api_key (str) β
connector_id (str) β
connection_id (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PyMuPDFLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loader that uses PyMuPDF to load PDF files.
Parameters
file_path (str) β
Return type
None
load(**kwargs)[source]ο
Load file.
Parameters
kwargs (Optional[Any]) β
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PyPDFDirectoryLoader(path, glob='**/[!.]*.pdf', silent_errors=False, load_hidden=False, recursive=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a directory with PDF files with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
Parameters
path (str) β
glob (str) β
silent_errors (bool) β
load_hidden (bool) β
recursive (bool) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PyPDFLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loads a PDF with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
Parameters
file_path (str) β
Return type
None
load()[source]ο
Load given path as pages.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
Lazy load given path as pages.
Return type
Iterator[langchain.schema.Document]
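A minimal sketch, assuming the pypdf package; the file name is hypothetical:
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("example.pdf")
pages = loader.load()
print(pages[0].metadata["page"])  # the loader records each page number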
class langchain.document_loaders.PyPDFium2Loader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loads a PDF with pypdfium2 and chunks at character level.
Parameters
file_path (str) β
load()[source]ο
Load given path as pages.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
Lazy load given path as pages.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.PySparkDataFrameLoader(spark_session=None, df=None, page_content_column='text', fraction_of_memory=0.1)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load PySpark DataFrames
Parameters
spark_session (Optional[SparkSession]) β
df (Optional[Any]) β
page_content_column (str) β
fraction_of_memory (float) β
get_num_rows()[source]ο
Gets the number of "feasible" rows for the DataFrame
Return type
Tuple[int, int]
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load from the dataframe.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PythonLoader(file_path)[source]ο
Bases: langchain.document_loaders.text.TextLoader
Load Python files, respecting any non-default encoding if specified.
Parameters
file_path (str) β
class langchain.document_loaders.ReadTheDocsLoader(path, encoding=None, errors=None, custom_html_tag=None, **kwargs)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads ReadTheDocs documentation directory dump.
Parameters
path (Union[str, pathlib.Path]) β
encoding (Optional[str]) β
errors (Optional[str]) β
custom_html_tag (Optional[Tuple[str, dict]]) β
kwargs (Optional[Any]) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.RecursiveUrlLoader(url, exclude_dirs=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads all child links from a given url.
Parameters
url (str) β
exclude_dirs (Optional[str]) β
Return type
None
get_child_links_recursive(url, visited=None)[source]ο
Recursively get all child links starting with the path of the input URL.
Parameters
url (str) β
visited (Optional[Set[str]]) β
Return type
Set[str]
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load web pages.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.RedditPostsLoader(client_id, client_secret, user_agent, search_queries, mode, categories=['new'], number_posts=10)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Reddit posts loader.
Read posts on a subreddit.
First you need to go to
https://www.reddit.com/prefs/apps/
and create your application
Parameters
client_id (str) β
client_secret (str) β
user_agent (str) β
search_queries (Sequence[str]) β
mode (str) β
categories (Sequence[str]) β
number_posts (Optional[int]) β
load()[source]ο
Load reddits.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.RoamLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Roam files from disk.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.S3DirectoryLoader(bucket, prefix='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from s3.
Parameters
bucket (str) β
prefix (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.S3FileLoader(bucket, key)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from s3.
Parameters
bucket (str) β
key (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.SRTLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader for .srt (subtitle) files.
Parameters
file_path (str) β
load()[source]ο
Load using pysrt file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.SeleniumURLLoader(urls, continue_on_failure=True, browser='chrome', binary_location=None, executable_path=None, headless=True, arguments=[])[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses Selenium to load a page and unstructured to parse the HTML.
This is useful for loading pages that require JavaScript to render.
Parameters
urls (List[str]) β
continue_on_failure (bool) β
browser (Literal['chrome', 'firefox']) β
binary_location (Optional[str]) β
executable_path (Optional[str]) β
headless (bool) β
arguments (List[str]) β
urlsο
List of URLs to load.
Type
List[str]
continue_on_failureο
If True, continue loading other URLs on failure.
Type
bool
browserο
The browser to use, either "chrome" or "firefox".
Type
str
binary_locationο
The location of the browser binary.
Type
Optional[str]
executable_pathο
The path to the browser executable.
Type
Optional[str]
headlessο
If True, the browser will run in headless mode.
Type
bool
argumentsο
List of arguments to pass to the browser.
Type
List[str]
load()[source]ο
Load the specified URLs using Selenium and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
class langchain.document_loaders.SitemapLoader(web_path, filter_urls=None, parsing_function=None, blocksize=None, blocknum=0, meta_function=None, is_local=False)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that fetches a sitemap and loads those URLs.
Parameters
web_path (str) β
filter_urls (Optional[List[str]]) β
parsing_function (Optional[Callable]) β
blocksize (Optional[int]) β
blocknum (int) β
meta_function (Optional[Callable]) β
is_local (bool) β
parse_sitemap(soup)[source]ο
Parse sitemap xml and load into a list of dicts.
Parameters
soup (Any) β
Return type
List[dict]
load()[source]ο
Load sitemap.
Return type
List[langchain.schema.Document]
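A minimal sketch; the sitemap location and filter prefix are assumptions:
from langchain.document_loaders import SitemapLoader
loader = SitemapLoader(
    web_path="https://python.langchain.com/sitemap.xml",
    filter_urls=["https://python.langchain.com/en/latest/"],  # only load matching URLs
)
docs = loader.load()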
class langchain.document_loaders.SlackDirectoryLoader(zip_path, workspace_url=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader for loading documents from a Slack directory dump.
Parameters
zip_path (str) β
workspace_url (Optional[str]) β
load()[source]ο
Load and return documents from the Slack directory dump.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.SnowflakeLoader(query, user, password, account, warehouse, role, database, schema, parameters=None, page_content_columns=None, metadata_columns=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from Snowflake into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Parameters
query (str) β
user (str) β
password (str) β
account (str) β
warehouse (str) β
role (str) β
database (str) β
schema (str) β
parameters (Optional[Dict[str, Any]]) β
page_content_columns (Optional[List[str]]) β
metadata_columns (Optional[List[str]]) β
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.SpreedlyLoader(access_token, resource)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches data from Spreedly API.
Parameters
access_token (str) β
resource (str) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.StripeLoader(resource, access_token=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches data from Stripe.
Parameters
resource (str) β
access_token (Optional[str]) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.TelegramChatApiLoader(chat_entity=None, api_id=None, api_hash=None, username=None, file_path='telegram_data.json')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Telegram chat json directory dump.
Parameters
chat_entity (Optional[EntityLike]) β
api_id (Optional[int]) β
api_hash (Optional[str]) β
username (Optional[str]) β
file_path (str) β
async fetch_data_from_telegram()[source]ο
Fetch data from Telegram API and save it as a JSON file.
Return type
None
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.TelegramChatFileLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Telegram chat json directory dump.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
langchain.document_loaders.TelegramChatLoaderο
alias of langchain.document_loaders.telegram.TelegramChatFileLoader
class langchain.document_loaders.TextLoader(file_path, encoding=None, autodetect_encoding=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load text files.
Parameters
file_path (str) β Path to the file to load.
encoding (Optional[str]) β File encoding to use. If None, the file will be loaded
with the default system encoding.
autodetect_encoding (bool) β Whether to try to autodetect the file encoding
if the specified encoding fails.
load()[source]ο
Load from file path.
Return type
List[langchain.schema.Document]
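A minimal sketch with a hypothetical file name:
from langchain.document_loaders import TextLoader
# fall back to charset detection if the default encoding fails
loader = TextLoader("notes.txt", autodetect_encoding=True)
docs = loader.load()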
class langchain.document_loaders.ToMarkdownLoader(url, api_key)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads HTML to markdown using 2markdown.
Parameters
url (str) β
api_key (str) β
lazy_load()[source]ο
Lazily load the file.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.TomlLoader(source)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
A TOML document loader that inherits from the BaseLoader class.
This class can be initialized with either a single source file or a source
directory containing TOML files.
Parameters
source (Union[str, pathlib.Path]) β
load()[source]ο
Load and return all documents.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
Lazily load the TOML documents from the source file or directory.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.TrelloLoader(client, board_name, *, include_card_name=True, include_comments=True, include_checklist=True, card_filter='all', extra_metadata=('due_date', 'labels', 'list', 'closed'))[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Trello loader. Reads all cards from a Trello board.
Parameters
client (TrelloClient) β
board_name (str) β
include_card_name (bool) β
include_comments (bool) β
include_checklist (bool) β
card_filter (Literal['closed', 'open', 'all']) β
extra_metadata (Tuple[str, ...]) β
classmethod from_credentials(board_name, *, api_key=None, token=None, **kwargs)[source]ο
Convenience constructor that builds TrelloClient init param for you.
Parameters
board_name (str) β The name of the Trello board.
api_key (Optional[str]) β Trello API key. Can also be specified as environment variable
TRELLO_API_KEY.
token (Optional[str]) β Trello token. Can also be specified as environment variable
TRELLO_TOKEN.
include_card_name β Whether to include the name of the card in the document.
include_comments β Whether to include the comments on the card in the
document.
include_checklist β Whether to include the checklist on the card in the
document.
card_filter β Filter on card status. Valid values are "closed", "open",
"all".
extra_metadata β List of additional metadata fields to include as document
metadata. Valid values are "due_date", "labels", "list", "closed".
kwargs (Any) β
Return type
langchain.document_loaders.trello.TrelloLoader
load()[source]ο
Loads all cards from the specified Trello board.
You can filter the cards, metadata and text included by using the optional
parameters.
Returns
A list of documents, one for each card in the board.
Return type
List[langchain.schema.Document]
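A minimal sketch, assuming the py-trello package; the board name and credentials are hypothetical placeholders:
from langchain.document_loaders import TrelloLoader
loader = TrelloLoader.from_credentials(
    board_name="Roadmap",
    api_key="<trello-api-key>",  # or set TRELLO_API_KEY
    token="<trello-token>",      # or set TRELLO_TOKEN
    card_filter="open",
)
docs = loader.load()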
class langchain.document_loaders.TwitterTweetLoader(auth_handler, twitter_users, number_tweets=100)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Twitter tweets loader.
Read tweets of a user's Twitter handle.
First you need to go to
https://developer.twitter.com/en/docs/twitter-api/getting-started/getting-access-to-the-twitter-api
to get your token, and create a v2 version of the app.
Parameters
auth_handler (Union[OAuthHandler, OAuth2BearerHandler]) β
twitter_users (Sequence[str]) β
number_tweets (Optional[int]) β
load()[source]ο
Load tweets.
Return type
List[langchain.schema.Document]
classmethod from_bearer_token(oauth2_bearer_token, twitter_users, number_tweets=100)[source]ο
Create a TwitterTweetLoader from OAuth2 bearer token.
Parameters
oauth2_bearer_token (str) β
twitter_users (Sequence[str]) β
number_tweets (Optional[int]) β
Return type
langchain.document_loaders.twitter.TwitterTweetLoader
classmethod from_secrets(access_token, access_token_secret, consumer_key, consumer_secret, twitter_users, number_tweets=100)[source]ο
Create a TwitterTweetLoader from access tokens and secrets.
Parameters
access_token (str) β
access_token_secret (str) β
consumer_key (str) β
consumer_secret (str) β
twitter_users (Sequence[str]) β
number_tweets (Optional[int]) β
Return type
langchain.document_loaders.twitter.TwitterTweetLoader
class langchain.document_loaders.UnstructuredAPIFileIOLoader(file, mode='single', url='https://api.unstructured.io/general/v0/general', api_key='', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileIOLoader
Loader that uses the unstructured web API to load file IO objects.
Parameters
file (Union[IO, Sequence[IO]]) β
mode (str) β
url (str) β
api_key (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredAPIFileLoader(file_path='', mode='single', url='https://api.unstructured.io/general/v0/general', api_key='', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses the unstructured web API to load files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
url (str) β
api_key (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredCSVLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load CSV files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredEPubLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load epub files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredEmailLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load email files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredExcelLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load Microsoft Excel files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredFileIOLoader(file, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredBaseLoader
Loader that uses unstructured to load file IO objects.
Parameters
file (Union[IO, Sequence[IO]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredFileLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredBaseLoader
Loader that uses unstructured to load files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredHTMLLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load HTML files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredImageLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load image files, such as PNGs and JPGs.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredMarkdownLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load markdown files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredODTLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load open office ODT files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredPDFLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load PDF files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredPowerPointLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load powerpoint files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredRSTLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load RST files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredRTFLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load rtf files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredURLLoader(urls, continue_on_failure=True, mode='single', show_progress_bar=False, **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses unstructured to load HTML files.
Parameters
urls (List[str]) β
continue_on_failure (bool) β
mode (str) β
show_progress_bar (bool) β
unstructured_kwargs (Any) β
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.UnstructuredWordDocumentLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load word documents.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredXMLLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load XML files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.WeatherDataLoader(client, places)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Weather Reader.
Reads the forecast & current weather of any location using OpenWeatherMap's free
API. Check out https://openweathermap.org/appid for more on how to generate a free
OpenWeatherMap API key.
Parameters
client (OpenWeatherMapAPIWrapper) β
places (Sequence[str]) β
Return type
None
classmethod from_params(places, *, openweathermap_api_key=None)[source]ο
Parameters
places (Sequence[str]) β
openweathermap_api_key (Optional[str]) β
Return type
langchain.document_loaders.weather.WeatherDataLoader
lazy_load()[source]ο
Lazily load weather data for the given locations.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load weather data for the given locations.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.WebBaseLoader(web_path, header_template=None, verify=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses urllib and beautiful soup to load webpages.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
requests_per_second: int = 2ο
Max number of concurrent requests to make.
default_parser: str = 'html.parser'ο
Default parser to use for BeautifulSoup.
requests_kwargs: Dict[str, Any] = {}ο
kwargs for requests
bs_get_text_kwargs: Dict[str, Any] = {}ο
kwargs for beautifulsoup4 get_text
web_paths: List[str]ο
property web_path: strο
async fetch_all(urls)[source]ο
Fetch all urls concurrently with rate limiting.
Parameters
urls (List[str]) β
Return type
Any
scrape_all(urls, parser=None)[source]ο
Fetch all urls, then return soups for all results.
Parameters
urls (List[str]) β
parser (Optional[str]) β
Return type
List[Any]
scrape(parser=None)[source]ο
Scrape data from webpage and return it in BeautifulSoup format.
Parameters
parser (Optional[str]) β
Return type
Any
lazy_load()[source]ο
Lazy load text from the url(s) in web_path.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load text from the url(s) in web_path.
Return type
List[langchain.schema.Document]
aload()[source]ο
Load text from the urls in web_path async into Documents.
Return type
List[langchain.schema.Document]
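A minimal sketch:
from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader(["https://www.example.com/"])
loader.requests_per_second = 1  # throttle concurrent fetches
docs = loader.aload()           # fetch the page(s) concurrently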
class langchain.document_loaders.WhatsAppChatLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads WhatsApp messages text file.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.WikipediaLoader(query, lang='en', load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from www.wikipedia.org into a list of Documents.
The hard limit on the number of downloaded Documents is 300 for now.
Each wiki page represents one Document.
Parameters
query (str) β
lang (str) β
load_max_docs (Optional[int]) β
load_all_available_meta (Optional[bool]) β
doc_content_chars_max (Optional[int]) β
load()[source]ο
Loads the query result from Wikipedia into a list of Documents.
Returns
A list of Document objects representing the loaded Wikipedia pages.
Return type
List[Document]
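A minimal sketch, assuming the wikipedia package is installed:
from langchain.document_loaders import WikipediaLoader
docs = WikipediaLoader(query="LangChain", load_max_docs=2).load()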
class langchain.document_loaders.YoutubeAudioLoader(urls, save_dir)[source]ο
Bases: langchain.document_loaders.blob_loaders.schema.BlobLoader
Load YouTube urls as audio file(s).
Parameters
urls (List[str]) β
save_dir (str) β
yield_blobs()[source]ο
Yield audio blobs for each url.
Return type
Iterable[langchain.document_loaders.blob_loaders.schema.Blob]
class langchain.document_loaders.YoutubeLoader(video_id, add_video_info=False, language='en', translation='en', continue_on_failure=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Youtube transcripts.
Parameters
video_id (str) β
add_video_info (bool) β
language (Union[str, Sequence[str]]) β
translation (str) β
continue_on_failure (bool) β
static extract_video_id(youtube_url)[source]ο
Extract video id from common YT urls.
Parameters
youtube_url (str) β
Return type
str
classmethod from_youtube_url(youtube_url, **kwargs)[source]ο
Given youtube URL, load video.
Parameters
youtube_url (str) β
kwargs (Any) β
Return type
langchain.document_loaders.youtube.YoutubeLoader
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
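A minimal sketch, assuming the youtube-transcript-api package; the video URL is an arbitrary example:
from langchain.document_loaders import YoutubeLoader
loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=False
)
docs = loader.load()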
Experimentalο
This module contains experimental modules and reproductions of existing work using LangChain primitives.
Autonomous agentsο
Here, we document the BabyAGI and AutoGPT classes from the langchain.experimental module.
class langchain.experimental.BabyAGI(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, task_list=None, task_creation_chain, task_prioritization_chain, execution_chain, task_id_counter=1, vectorstore, max_iterations=None)[source]ο
Bases: langchain.chains.base.Chain, pydantic.main.BaseModel
Controller model for the BabyAGI agent.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
task_list (collections.deque) β
task_creation_chain (langchain.chains.base.Chain) β
task_prioritization_chain (langchain.chains.base.Chain) β
execution_chain (langchain.chains.base.Chain) β
task_id_counter (int) β
vectorstore (langchain.vectorstores.base.VectorStore) β
max_iterations (Optional[int]) β
Return type
None
model Config[source]ο
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = Trueο
property input_keys: List[str]ο
Input keys this chain expects.
property output_keys: List[str]ο
Output keys this chain expects.
get_next_task(result, task_description, objective)[source]ο
Get the next task.
Parameters
result (str) β
task_description (str) β
objective (str) β
Return type
List[Dict]
prioritize_tasks(this_task_id, objective)[source]ο
Prioritize tasks.
Parameters
this_task_id (int) β
objective (str) β
Return type
List[Dict]
execute_task(objective, task, k=5)[source]ο
Execute a task.
Parameters
objective (str) β
task (str) β
k (int) β
Return type
str
classmethod from_llm(llm, vectorstore, verbose=False, task_execution_chain=None, **kwargs)[source]ο
Initialize the BabyAGI Controller.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
vectorstore (langchain.vectorstores.base.VectorStore) β
verbose (bool) β
task_execution_chain (Optional[langchain.chains.base.Chain]) β
kwargs (Dict[str, Any]) β
Return type
langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI
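A minimal sketch of wiring the controller together, assuming OpenAI credentials in the environment and the faiss-cpu package; the objective string is arbitrary:
import faiss
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.experimental import BabyAGI
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS
embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(1536)  # dimensionality of OpenAI ada-002 embeddings
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})
baby_agi = BabyAGI.from_llm(
    llm=OpenAI(temperature=0), vectorstore=vectorstore, max_iterations=3
)
baby_agi({"objective": "Write a weather report for SF today"})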
class langchain.experimental.AutoGPT(ai_name, memory, chain, output_parser, tools, feedback_tool=None, chat_history_memory=None)[source]ο
Bases: object
Agent class for interacting with Auto-GPT.
Parameters
ai_name (str) β
memory (VectorStoreRetriever) β
chain (LLMChain) β
output_parser (BaseAutoGPTOutputParser) β
tools (List[BaseTool]) β
feedback_tool (Optional[HumanInputRun]) β
chat_history_memory (Optional[BaseChatMessageHistory]) β
Generative agentsο
Here, we document the GenerativeAgent and GenerativeAgentMemory classes from the langchain.experimental module.
class langchain.experimental.GenerativeAgent(*, name, age=None, traits='N/A', status, memory, llm, verbose=False, summary='', summary_refresh_seconds=3600, last_refreshed=None, daily_summaries=None)[source]ο
Bases: pydantic.main.BaseModel
A character with memory and innate characteristics.
Parameters
name (str) β
age (Optional[int]) β
traits (str) β
status (str) β
memory (langchain.experimental.generative_agents.memory.GenerativeAgentMemory) β
llm (langchain.base_language.BaseLanguageModel) β
verbose (bool) β
summary (str) β
summary_refresh_seconds (int) β
last_refreshed (datetime.datetime) β
daily_summaries (List[str]) β
Return type
None
attribute name: str [Required]ο
The character's name.
attribute age: Optional[int] = Noneο
The optional age of the character.
attribute traits: str = 'N/A'ο
Permanent traits to ascribe to the character.
attribute status: str [Required]ο
The traits of the character you wish not to change.
attribute memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory [Required]ο
The memory object that combines relevance, recency, and "importance".
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
The underlying language model.
attribute summary: str = ''ο
Stateful self-summary generated via reflection on the character's memory.
attribute summary_refresh_seconds: int = 3600ο
How frequently to re-generate the summary.
attribute last_refreshed: datetime.datetime [Optional]ο
The last time the character's summary was regenerated.
attribute daily_summaries: List[str] [Optional]ο
Summary of the events in the plan that the agent took.
model Config[source]ο
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = Trueο
summarize_related_memories(observation)[source]ο
Summarize memories that are most relevant to an observation.
Parameters
observation (str) β
Return type
str
generate_reaction(observation, now=None)[source]ο
React to a given observation.
Parameters
observation (str) β
now (Optional[datetime.datetime]) β
Return type
Tuple[bool, str]
generate_dialogue_response(observation, now=None)[source]ο
React to a given observation.
Parameters
observation (str) β
now (Optional[datetime.datetime]) β
Return type
Tuple[bool, str]
get_summary(force_refresh=False, now=None)[source]ο
Return a descriptive summary of the agent.
Parameters
force_refresh (bool) β
now (Optional[datetime.datetime]) β
Return type
str
get_full_header(force_refresh=False, now=None)[source]ο
Return a full header of the agent's status, summary, and current time.
Parameters
force_refresh (bool) β
now (Optional[datetime.datetime]) β
Return type
str
class langchain.experimental.GenerativeAgentMemory(*, llm, memory_retriever, verbose=False, reflection_threshold=None, current_plan=[], importance_weight=0.15, aggregate_importance=0.0, max_tokens_limit=1200, queries_key='queries', most_recent_memories_token_key='recent_memories_token', add_memory_key='add_memory', relevant_memories_key='relevant_memories', relevant_memories_simple_key='relevant_memories_simple', most_recent_memories_key='most_recent_memories', now_key='now', reflecting=False)[source]ο
Bases: langchain.schema.BaseMemory
Parameters
llm (langchain.base_language.BaseLanguageModel) β
memory_retriever (langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever) β
verbose (bool) β
reflection_threshold (Optional[float]) β
current_plan (List[str]) β
importance_weight (float) β
aggregate_importance (float) β
max_tokens_limit (int) β
queries_key (str) β
most_recent_memories_token_key (str) β
add_memory_key (str) β
relevant_memories_key (str) β
relevant_memories_simple_key (str) β
most_recent_memories_key (str) β
now_key (str) β
reflecting (bool) β
Return type
None
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
The core language model.
attribute memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]ο
The retriever to fetch related memories.
attribute reflection_threshold: Optional[float] = Noneο
When aggregate_importance exceeds reflection_threshold, stop to reflect.
attribute current_plan: List[str] = []ο
The current plan of the agent.
attribute importance_weight: float = 0.15ο
How much weight to assign the memory importance.
attribute aggregate_importance: float = 0.0ο
Track the sum of the "importance" of recent memories.
Triggers reflection when it reaches reflection_threshold.
pause_to_reflect(now=None)[source]ο
Reflect on recent observations and generate "insights".
Parameters
now (Optional[datetime.datetime]) β
Return type
List[str]
add_memories(memory_content, now=None)[source]ο
Add observations or memories to the agent's memory.
Parameters
memory_content (str) β
now (Optional[datetime.datetime]) β
Return type
List[str]
add_memory(memory_content, now=None)[source]ο
Add an observation or memory to the agent's memory.
Parameters
memory_content (str) β
now (Optional[datetime.datetime]) β
Return type
List[str]
fetch_memories(observation, now=None)[source]ο
Fetch related memories.
Parameters
observation (str) β
now (Optional[datetime.datetime]) β
Return type
List[langchain.schema.Document]
property memory_variables: List[str]ο
Input keys this memory class will load dynamically.
load_memory_variables(inputs)[source]ο
Return key-value pairs given the text input to the chain.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, str]
save_context(inputs, outputs)[source]ο
Save the context of this model run to memory.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, Any]) β
Return type
None
clear()[source]ο
Clear memory contents.
Return type
None
Utilitiesο
General utilities.
class langchain.utilities.ApifyWrapper(*, apify_client=None, apify_client_async=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around Apify.
To use, you should have the apify-client python package installed,
and the environment variable APIFY_API_TOKEN set with your API key, or pass
apify_api_token as a named parameter to the constructor.
Parameters
apify_client (Any) β
apify_client_async (Any) β
Return type
None
attribute apify_client: Any = Noneο
attribute apify_client_async: Any = Noneο
async acall_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]ο
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) β The ID or name of the Actor on the Apify platform.
run_input (Dict) β The input object of the Actor that you're trying to run.
dataset_mapping_function (Callable) β A function that takes a single
dictionary (an Apify dataset item) and converts it to
an instance of the Document class.
build (str, optional) β Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) β Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) β Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor run's default dataset.
Return type
ApifyDatasetLoader
call_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]ο
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) β The ID or name of the Actor on the Apify platform.
run_input (Dict) β The input object of the Actor that you're trying to run.
dataset_mapping_function (Callable) β A function that takes a single
dictionary (an Apify dataset item) and converts it to an
instance of the Document class.
build (str, optional) β Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) β Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) β Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor run's default dataset.
Return type
ApifyDatasetLoader
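A minimal sketch, assuming APIFY_API_TOKEN is set and the apify-client package is installed; the "text" and "url" fields assume the output schema of the website-content-crawler Actor:
from langchain.schema import Document
from langchain.utilities import ApifyWrapper
apify = ApifyWrapper()
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://python.langchain.com/en/latest/"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)
docs = loader.load()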
class langchain.utilities.ArxivAPIWrapper(*, arxiv_search=None, arxiv_exceptions=None, top_k_results=3, load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000, ARXIV_MAX_QUERY_LENGTH=300)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around ArxivAPI.
To use, you should have the arxiv python package installed.
https://lukasschwab.me/arxiv.py/index.html
This wrapper will use the Arxiv API to conduct searches and
fetch document summaries. By default, it will return the document summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
Set doc_content_chars_max=None if you don't want to limit the content size.
Parameters
top_k_results (int) β number of the top-scored documents used for the arxiv tool
ARXIV_MAX_QUERY_LENGTH (int) β the cut limit on the query used for the arxiv tool.
load_max_docs (int) β a limit to the number of loaded documents
load_all_available_meta (bool) β
if True: the metadata of the loaded Documents gets all available meta info (see https://lukasschwab.me/arxiv.py/index.html#Result),
if False: the metadata gets only the most informative fields.
arxiv_search (Any) β
arxiv_exceptions (Any) β
doc_content_chars_max (Optional[int]) β
Return type
None
attribute arxiv_exceptions: Any = Noneο
attribute doc_content_chars_max: Optional[int] = 4000ο
attribute load_all_available_meta: bool = Falseο
attribute load_max_docs: int = 100ο
attribute top_k_results: int = 3ο
load(query)[source]ο
Run Arxiv search and get the article texts plus the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
Returns: a list of documents with the document.page_content in text format
Parameters
query (str) β
Return type
List[langchain.schema.Document]
run(query)[source]ο
Run Arxiv search and get the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
See https://lukasschwab.me/arxiv.py/index.html#Result
It uses only the most informative fields of article meta information.
Parameters
query (str) β
Return type
str
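Example (a minimal usage sketch of the documented run and load methods):
from langchain.utilities import ArxivAPIWrapper

arxiv = ArxivAPIWrapper(top_k_results=2, doc_content_chars_max=1000)
summaries = arxiv.run("quantum computing")   # metadata of top results as a string
documents = arxiv.load("quantum computing")  # article texts as Document objects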
class langchain.utilities.BashProcess(strip_newlines=False, return_err_output=False, persistent=False)[source]ο
Bases: object
Executes bash commands and returns the output.
Parameters
strip_newlines (bool) β
return_err_output (bool) β
persistent (bool) β
run(commands)[source]ο
Run commands and return final output.
Parameters
commands (Union[str, List[str]]) β
Return type
str
process_output(output, command)[source]ο
Parameters
output (str) β
command (str) β
Return type
str
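Example (a minimal sketch; run accepts a single command string or a list of commands):
from langchain.utilities import BashProcess

bash = BashProcess(strip_newlines=True)
output = bash.run(["echo hello", "pwd"])  # returns the combined printed output
print(output)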
class langchain.utilities.BibtexparserWrapper[source]ο
Bases: pydantic.main.BaseModel
Wrapper around bibtexparser.
To use, you should have the bibtexparser python package installed.
https://bibtexparser.readthedocs.io/en/master/
This wrapper will use bibtexparser to load a collection of references from
a bibtex file and fetch document summaries.
Return type
None
get_metadata(entry, load_extra=False)[source]ο
Get metadata for the given entry.
Parameters
entry (Mapping[str, Any]) β
load_extra (bool) β
Return type
Dict[str, Any]
load_bibtex_entries(path)[source]ο
Load bibtex entries from the bibtex file at the given path.
Parameters
path (str) β
Return type
List[Dict[str, Any]]
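Example (a minimal sketch; references.bib is a hypothetical file path):
from langchain.utilities import BibtexparserWrapper

bibtex = BibtexparserWrapper()
entries = bibtex.load_bibtex_entries("references.bib")  # parse the bibtex file
metadata = bibtex.get_metadata(entries[0], load_extra=False)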
class langchain.utilities.BingSearchAPIWrapper(*, bing_subscription_key, bing_search_url, k=10)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Bing Search API.
In order to set this up, follow instructions at:
https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e
Parameters
bing_subscription_key (str) β
bing_search_url (str) β
k (int) β
Return type
None
attribute bing_search_url: str [Required]ο
attribute bing_subscription_key: str [Required]ο
attribute k: int = 10ο
results(query, num_results)[source]ο
Run query through BingSearch and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query)[source]ο
Run query through BingSearch and parse result.
Parameters
query (str) β
Return type
str
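Example (a minimal sketch; the subscription key is a placeholder, and the search URL shown is the commonly used v7 endpoint, which you should verify for your deployment):
from langchain.utilities import BingSearchAPIWrapper

bing = BingSearchAPIWrapper(
    bing_subscription_key="<your-subscription-key>",
    bing_search_url="https://api.bing.microsoft.com/v7.0/search",
)
hits = bing.results("langchain", num_results=3)  # dicts with snippet, title, link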
class langchain.utilities.BraveSearchWrapper(*, api_key, search_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel
Parameters
api_key (str) β
search_kwargs (dict) β
Return type
None
attribute api_key: str [Required]ο
attribute search_kwargs: dict [Optional]ο
run(query)[source]ο
Parameters
query (str) β
Return type
str
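Example (a minimal sketch; the API key is a placeholder):
from langchain.utilities import BraveSearchWrapper

brave = BraveSearchWrapper(api_key="<your-brave-api-key>")
print(brave.run("langchain"))  # returns search results as a string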
class langchain.utilities.DuckDuckGoSearchAPIWrapper(*, k=10, region='wt-wt', safesearch='moderate', time='y', max_results=5)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for DuckDuckGo Search API.
Free and does not require any setup
Parameters
k (int) β
region (Optional[str]) β
safesearch (str) β
time (Optional[str]) β
max_results (int) β
Return type
None
attribute k: int = 10ο
attribute max_results: int = 5ο
attribute region: Optional[str] = 'wt-wt'ο
attribute safesearch: str = 'moderate'ο
attribute time: Optional[str] = 'y'ο
get_snippets(query)[source]ο
Run query through DuckDuckGo and return concatenated results.
Parameters
query (str) β
Return type
List[str]
results(query, num_results)[source]ο
Run query through DuckDuckGo and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query)[source]ο
Parameters
query (str) β
Return type
str
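Example (a minimal sketch; no API key or setup is required):
from langchain.utilities import DuckDuckGoSearchAPIWrapper

ddg = DuckDuckGoSearchAPIWrapper(region="wt-wt", max_results=3)
snippets = ddg.get_snippets("langchain")            # list of text snippets
metadata = ddg.results("langchain", num_results=3)  # dicts with snippet, title, link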
class langchain.utilities.GooglePlacesAPIWrapper(*, gplaces_api_key=None, google_map_client=None, top_k_results=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around Google Places API.
To use, you should have the googlemaps python package installed, an API key for the Google Maps platform,
and the environment variable GPLACES_API_KEY
set with your API key, or pass gplaces_api_key
as a named parameter to the constructor.
By default, this will return all the results for the input query. You can use the top_k_results argument to limit the number of results.
Example
from langchain import GooglePlacesAPIWrapper
gplaceapi = GooglePlacesAPIWrapper()
Parameters
gplaces_api_key (Optional[str]) β
google_map_client (Any) β
top_k_results (Optional[int]) β
Return type
None
attribute gplaces_api_key: Optional[str] = Noneο
attribute top_k_results: Optional[int] = Noneο
fetch_place_details(place_id)[source]ο
Parameters
place_id (str) β
Return type
Optional[str]
format_place_details(place_details)[source]ο
Parameters
place_details (Dict[str, Any]) β
Return type
Optional[str]
run(query)[source]ο
Run Places search and get the top k places that match the query.
Parameters
query (str) β
Return type
str
class langchain.utilities.GoogleSearchAPIWrapper(*, search_engine=None, google_api_key=None, google_cse_id=None, k=10, siterestrict=False)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Google Search API.
Adapted from: https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search
TODO: DOCS for using it
1. Install google-api-python-client
- If you don't already have a Google account, sign up.
- If you have never created a Google APIs Console project,
read the Managing Projects page and create a project in the Google API Console.
- Install the library using pip install google-api-python-client
The current version of the library is 2.70.0 at this time
2. To create an API key:
- Navigate to the APIs & Services → Credentials panel in Cloud Console.
- Select Create credentials, then select API key from the drop-down menu.
- The API key created dialog box displays your newly created key.
- You now have an API_KEY
3. Setup Custom Search Engine so you can search the entire web
- Create a custom search engine at this link.
- In Sites to search, add any valid URL (e.g. www.stackoverflow.com).
- That's all you have to fill in; the rest doesn't matter.
In the left-side menu, click Edit search engine → {your search engine name}
→ Setup. Set Search the entire web to ON. Remove the URL you added from
the list of Sites to search.
- Under Search engine ID you'll find the search-engine-ID.
4. Enable the Custom Search API
- Navigate to the APIs & Services → Dashboard panel in Cloud Console.
- Click Enable APIs and Services.
- Search for Custom Search API and click on it.
- Click Enable.
URL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis.com
Parameters
search_engine (Any) β
google_api_key (Optional[str]) β
google_cse_id (Optional[str]) β
k (int) β
siterestrict (bool) β
Return type
None
attribute google_api_key: Optional[str] = Noneο
attribute google_cse_id: Optional[str] = Noneο
attribute k: int = 10ο
attribute siterestrict: bool = Falseο
results(query, num_results)[source]ο
Run query through GoogleSearch and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query)[source]ο
Run query through GoogleSearch and parse result.
Parameters
query (str) β
Return type
str
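Example (a minimal sketch; the key and engine ID are the placeholders produced by steps 2 and 3 above):
from langchain.utilities import GoogleSearchAPIWrapper

search = GoogleSearchAPIWrapper(
    google_api_key="<API_KEY>",
    google_cse_id="<search-engine-ID>",
    k=5,
)
print(search.run("langchain documentation"))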
class langchain.utilities.GoogleSerperAPIWrapper(*, k=10, gl='us', hl='en', type='search', tbs=None, serper_api_key=None, aiosession=None, result_key_for_type={'images': 'images', 'news': 'news', 'places': 'places', 'search': 'organic'})[source]ο
Bases: pydantic.main.BaseModel
Wrapper around the Serper.dev Google Search API.
You can create a free API key at https://serper.dev.
To use, you should have the environment variable SERPER_API_KEY
set with your API key, or pass serper_api_key as a named parameter
to the constructor.
Example
from langchain import GoogleSerperAPIWrapper
google_serper = GoogleSerperAPIWrapper()
Parameters
k (int) β
gl (str) β
hl (str) β
type (Literal['news', 'search', 'places', 'images']) β
tbs (Optional[str]) β
serper_api_key (Optional[str]) β
aiosession (Optional[aiohttp.client.ClientSession]) β
result_key_for_type (dict) β
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = Noneο
attribute gl: str = 'us'ο
attribute hl: str = 'en'ο
attribute k: int = 10ο
attribute serper_api_key: Optional[str] = Noneο
attribute tbs: Optional[str] = Noneο
attribute type: Literal['news', 'search', 'places', 'images'] = 'search'ο
async aresults(query, **kwargs)[source]ο
Run query through GoogleSearch.
Parameters
query (str) β
kwargs (Any) β
Return type
Dict
async arun(query, **kwargs)[source]ο
Run query through GoogleSearch and parse result async.
Parameters
query (str) β
kwargs (Any) β
Return type
str
results(query, **kwargs)[source]ο
Run query through GoogleSearch.
Parameters
query (str) β
kwargs (Any) β
Return type
Dict
run(query, **kwargs)[source]ο
Run query through GoogleSearch and parse result.
Parameters
query (str) β
kwargs (Any) β
Return type
str
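Example (a minimal sketch of the sync and async entry points; assumes SERPER_API_KEY is set):
import asyncio
from langchain.utilities import GoogleSerperAPIWrapper

serper = GoogleSerperAPIWrapper(type="news", k=5)
print(serper.run("latest AI news"))  # parsed results as a string

async def main() -> None:
    print(await serper.arun("latest AI news"))  # async variant

asyncio.run(main())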
class langchain.utilities.GraphQLAPIWrapper(*, custom_headers=None, graphql_endpoint, gql_client=None, gql_function)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around GraphQL API.
To use, you should have the gql python package installed.
This wrapper will use the GraphQL API to conduct queries.
Parameters
custom_headers (Optional[Dict[str, str]]) β
graphql_endpoint (str) β
gql_client (Any) β
gql_function (Callable[[str], Any]) β
Return type
None
attribute custom_headers: Optional[Dict[str, str]] = Noneο
attribute graphql_endpoint: str [Required]ο
run(query)[source]ο
Run a GraphQL query and get the results.
Parameters
query (str) β
Return type
str
class langchain.utilities.JiraAPIWrapper(*, jira=None, confluence=None, jira_username=None, jira_api_token=None, jira_instance_url=None, operations=[{'mode': 'jql', 'name': 'JQL Query', 'description': '\n   This tool is a wrapper around atlassian-python-api\'s Jira jql API, useful when you need to search for Jira issues.\n   The input to this tool is a JQL query string, and will be passed into atlassian-python-api\'s Jira `jql` function,\n   For example, to find all the issues in project "Test" assigned to the me, you would pass in the following string:\n   project = Test AND assignee = currentUser()\n   or to find issues with summaries that contain the word "test", you would pass in the following string:\n   summary ~ \'test\'\n   '}, {'mode': 'get_projects', 'name': 'Get Projects', 'description': "\n   This tool is a wrapper around atlassian-python-api's Jira project API, \n   useful when you need to fetch all the projects the user has access to, find out how many projects there are, or as an intermediary step that involv searching by projects. \n   there is no input to this tool.\n   "}, {'mode': 'create_issue', 'name': 'Create Issue', 'description': '\n   This tool is a wrapper around atlassian-python-api\'s Jira issue_create API, useful when you need to create a Jira issue. \n   The input to this tool is a dictionary specifying the fields of the Jira issue, and will be passed into atlassian-python-api\'s Jira `issue_create` function.\n   For example, to create a low priority task called "test issue" with description "test description", you would pass in the following dictionary: \n   {{"summary": "test issue", "description": "test description", "issuetype": {{"name": "Task"}}, "priority": {{"name": "Low"}}}}\n   '}, {'mode': 'other', 'name': 'Catch all Jira API call', 'description': '\n   This tool is a wrapper around atlassian-python-api\'s Jira API.\n   There are other dedicated tools for fetching all projects, and creating and searching for issues, \n   use this tool if you need to perform any other actions allowed by the atlassian-python-api Jira API.\n   The input to this tool is line of python code that calls a function from atlassian-python-api\'s Jira API\n   For example, to update the summary field of an issue, you would pass in the following string:\n   self.jira.update_issue_field(key, {{"summary": "New summary"}})\n   or to find out how many projects are in the Jira instance, you would pass in the following string:\n   self.jira.projects()\n   For more information on the Jira API, refer to https://atlassian-python-api.readthedocs.io/jira.html\n   '}, {'mode': 'create_page', 'name': 'Create confluence page', 'description': 'This tool is a wrapper around atlassian-python-api\'s Confluence \natlassian-python-api API, useful when you need to create a Confluence page. The input to this tool is a dictionary \nspecifying the fields of the Confluence page, and will be passed into atlassian-python-api\'s Confluence `create_page` \nfunction. For example, to create a page in the DEMO space titled "This is the title" with body "This is the body. You can use \n<strong>HTML tags</strong>!", you would pass in the following dictionary: {{"space": "DEMO", "title":"This is the \ntitle","body":"This is the body. You can use <strong>HTML tags</strong>!"}} '}])[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Jira API.
Parameters
jira (Any) β
confluence (Any) β
jira_username (Optional[str]) β
jira_api_token (Optional[str]) β
jira_instance_url (Optional[str]) β
operations (List[Dict]) β
Return type
None
attribute confluence: Any = Noneο
attribute jira_api_token: Optional[str] = Noneο
attribute jira_instance_url: Optional[str] = Noneο
attribute jira_username: Optional[str] = Noneο
attribute operations: List[Dict] = [{'mode': 'jql', 'name': 'JQL Query', 'description': '\n   This tool is a wrapper around atlassian-python-api\'s Jira jql API, useful when you need to search for Jira issues.\n   The input to this tool is a JQL query string, and will be passed into atlassian-python-api\'s Jira `jql` function,\n   For example, to find all the issues in project "Test" assigned to the me, you would pass in the following string:\n   project = Test AND assignee = currentUser()\n   or to find issues with summaries that contain the word "test", you would pass in the following string:\n   summary ~ \'test\'\n   '}, {'mode': 'get_projects', 'name': 'Get Projects', 'description': "\n   This tool is a wrapper around atlassian-python-api's Jira project API, \n   useful when you need to fetch all the projects the user has access to, find out how many projects there are, or as an intermediary step that involv searching by projects. \n   there is no input to this tool.\n   "}, {'mode': 'create_issue', 'name': 'Create Issue', 'description': '\n   This tool is a wrapper around atlassian-python-api\'s Jira issue_create API, useful when you need to create a Jira issue. \n   The input to this tool is a dictionary specifying the fields of the Jira issue, and will be passed into atlassian-python-api\'s Jira `issue_create` function.\n   For example, to create a low priority task called "test issue" with description "test description", you would pass in the following dictionary: \n   {{"summary": "test issue", "description": "test description", "issuetype": {{"name": "Task"}}, "priority": {{"name": "Low"}}}}\n   '}, {'mode': 'other', 'name': 'Catch all Jira API call', 'description': '\n   This tool is a wrapper around atlassian-python-api\'s Jira API.\n   There are other dedicated tools for fetching all projects, and creating and searching for issues, \n   use this tool if you need to perform any other actions allowed by the atlassian-python-api Jira API.\n   The input to this tool is line of python code that calls a function from atlassian-python-api\'s Jira API\n   For example, to update the summary field of an issue, you would pass in the following string:\n   self.jira.update_issue_field(key, {{"summary": "New summary"}})\n   or to find out how many projects are in the Jira instance, you would pass in the following string:\n   self.jira.projects()\n   For more information on the Jira API, refer to https://atlassian-python-api.readthedocs.io/jira.html\n   '}, {'mode': 'create_page', 'name': 'Create confluence page', 'description': 'This tool is a wrapper around atlassian-python-api\'s Confluence \natlassian-python-api API, useful when you need to create a Confluence page. The input to this tool is a dictionary \nspecifying the fields of the Confluence page, and will be passed into atlassian-python-api\'s Confluence `create_page` \nfunction. For example, to create a page in the DEMO space titled "This is the title" with body "This is the body. You can use \n<strong>HTML tags</strong>!", you would pass in the following dictionary: {{"space": "DEMO", "title":"This is the \ntitle","body":"This is the body. You can use <strong>HTML tags</strong>!"}} '}]ο
issue_create(query)[source]ο
Parameters
query (str) β
Return type
str
list()[source]ο
Return type
List[Dict]
other(query)[source]ο
Parameters
query (str) β
Return type
str
page_create(query)[source]ο
Parameters
query (str) β
Return type
str
parse_issues(issues)[source]ο
Parameters
issues (Dict) β
Return type
List[dict]
parse_projects(projects)[source]ο
Parameters
projects (List[dict]) β
Return type
List[dict]
project()[source]ο
Return type
str
run(mode, query)[source]ο
Parameters
mode (str) β
query (str) β
Return type
str
search(query)[source]ο
Parameters
query (str) β
Return type
str
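Example (a minimal sketch of run with the jql mode documented above; the credentials are placeholders):
from langchain.utilities import JiraAPIWrapper

jira = JiraAPIWrapper(
    jira_username="me@example.com",
    jira_api_token="<api-token>",
    jira_instance_url="https://example.atlassian.net",
)
print(jira.run("jql", "project = Test AND assignee = currentUser()"))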
class langchain.utilities.LambdaWrapper(*, lambda_client=None, function_name=None, awslambda_tool_name=None, awslambda_tool_description=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for AWS Lambda SDK.
Docs for using:
pip install boto3
Create a lambda function using the AWS Console or CLI
Run aws configure and enter your AWS credentials
Parameters
lambda_client (Any) β
function_name (Optional[str]) β
awslambda_tool_name (Optional[str]) β
awslambda_tool_description (Optional[str]) β
Return type
None
attribute awslambda_tool_description: Optional[str] = Noneο
attribute awslambda_tool_name: Optional[str] = Noneο
attribute function_name: Optional[str] = Noneο
run(query)[source]ο
Invoke Lambda function and parse result.
Parameters
query (str) β
Return type
str
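Example (a minimal sketch; the function name and tool metadata are hypothetical, and AWS credentials are assumed to be configured via aws configure):
from langchain.utilities import LambdaWrapper

aws_lambda = LambdaWrapper(
    function_name="my-function",
    awslambda_tool_name="email-sender",
    awslambda_tool_description="Sends an email with the given input text",
)
print(aws_lambda.run("send a test message"))  # invokes the Lambda and parses the result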
class langchain.utilities.MaxComputeAPIWrapper(client)[source]ο
Bases: object
Interface for querying Alibaba Cloud MaxCompute tables.
Parameters
client (ODPS) β
classmethod from_params(endpoint, project, *, access_id=None, secret_access_key=None)[source]ο
Convenience constructor that builds the odps.ODPS MaxCompute client from the given parameters.
Parameters
endpoint (str) β MaxCompute endpoint.
project (str) β A project is a basic organizational unit of MaxCompute, which is
similar to a database.
access_id (Optional[str]) β MaxCompute access ID. Should be passed in directly or set as the
environment variable MAX_COMPUTE_ACCESS_ID.
secret_access_key (Optional[str]) β MaxCompute secret access key. Should be passed in
directly or set as the environment variable
MAX_COMPUTE_SECRET_ACCESS_KEY.
Return type
langchain.utilities.max_compute.MaxComputeAPIWrapper
lazy_query(query)[source]ο
Parameters
query (str) β
Return type
Iterator[dict]
query(query)[source]ο
Parameters
query (str) β
Return type
List[dict]
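Example (a minimal sketch of from_params; the endpoint and project name are hypothetical, and access keys are read from the documented environment variables):
from langchain.utilities import MaxComputeAPIWrapper

maxcompute = MaxComputeAPIWrapper.from_params(
    endpoint="https://service.odps.aliyun.com/api",  # assumed endpoint
    project="my_project",
)
rows = maxcompute.query("SELECT 1")  # returns a list of dicts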
class langchain.utilities.MetaphorSearchAPIWrapper(*, metaphor_api_key, k=10)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Metaphor Search API.
Parameters
metaphor_api_key (str) β
k (int) β
Return type
None
attribute k: int = 10ο
attribute metaphor_api_key: str [Required]ο
results(query, num_results, include_domains=None, exclude_domains=None, start_crawl_date=None, end_crawl_date=None, start_published_date=None, end_published_date=None)[source]ο
Run query through Metaphor Search and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
include_domains (Optional[List[str]]) β
exclude_domains (Optional[List[str]]) β
start_crawl_date (Optional[str]) β
end_crawl_date (Optional[str]) β
start_published_date (Optional[str]) β
end_published_date (Optional[str]) β
Returns
title - The title of the result.
url - The URL of the result.
author - Author of the content, if applicable. Otherwise, None.
published_date - Estimated date published, in YYYY-MM-DD format. Otherwise, None.
Return type
A list of dictionaries with the following keys
async results_async(query, num_results, include_domains=None, exclude_domains=None, start_crawl_date=None, end_crawl_date=None, start_published_date=None, end_published_date=None)[source]ο
Get results from the Metaphor Search API asynchronously.
Parameters
query (str) β
num_results (int) β
include_domains (Optional[List[str]]) β
exclude_domains (Optional[List[str]]) β
start_crawl_date (Optional[str]) β
end_crawl_date (Optional[str]) β
start_published_date (Optional[str]) β
end_published_date (Optional[str]) β
Return type
List[Dict]
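Example (a minimal sketch; the API key is a placeholder):
from langchain.utilities import MetaphorSearchAPIWrapper

metaphor = MetaphorSearchAPIWrapper(metaphor_api_key="<your-key>", k=5)
results = metaphor.results(
    "articles about retrieval augmented generation",
    num_results=5,
    include_domains=["arxiv.org"],  # optional domain filter
)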
class langchain.utilities.OpenWeatherMapAPIWrapper(*, owm=None, openweathermap_api_key=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for OpenWeatherMap API using PyOWM.
Docs for using:
Go to OpenWeatherMap and sign up for an API key
Save your API KEY into OPENWEATHERMAP_API_KEY env variable
pip install pyowm
Parameters
owm (Any) β
openweathermap_api_key (Optional[str]) β
Return type
None
attribute openweathermap_api_key: Optional[str] = Noneο
attribute owm: Any = Noneο
run(location)[source]ο
Get the current weather information for a specified location.
Parameters
location (str) β
Return type
str
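Example (a minimal sketch; assumes OPENWEATHERMAP_API_KEY is set and that a "City,CountryCode" location string is accepted):
from langchain.utilities import OpenWeatherMapAPIWrapper

weather = OpenWeatherMapAPIWrapper()
print(weather.run("London,GB"))  # current weather information as a string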
class langchain.utilities.PowerBIDataset(*, dataset_id, table_names, group_id=None, credential=None, token=None, impersonated_user_name=None, sample_rows_in_table_info=1, schemas=None, aiosession=None)[source]ο
Bases: pydantic.main.BaseModel
Create PowerBI engine from dataset ID and credential or token.
Use either the credential or a supplied token to authenticate.
If both are supplied the credential is used to generate a token.
The impersonated_user_name is the UPN of a user to be impersonated.
If the model is not RLS enabled, this will be ignored.
Parameters
dataset_id (str) β
table_names (List[str]) β
group_id (Optional[str]) β
credential (Optional[TokenCredential]) β
token (Optional[str]) β
impersonated_user_name (Optional[str]) β
sample_rows_in_table_info (langchain.utilities.powerbi.ConstrainedIntValue) β
schemas (Dict[str, str]) β
aiosession (Optional[aiohttp.client.ClientSession]) β
Return type
None
attribute aiosession: Optional[aiohttp.ClientSession] = Noneο
attribute credential: Optional[TokenCredential] = Noneο
attribute dataset_id: str [Required]ο
attribute group_id: Optional[str] = Noneο
attribute impersonated_user_name: Optional[str] = Noneο
attribute sample_rows_in_table_info: int = 1ο
Constraints
exclusiveMinimum = 0
maximum = 10
attribute schemas: Dict[str, str] [Optional]ο
attribute table_names: List[str] [Required]ο
attribute token: Optional[str] = Noneο
async aget_table_info(table_names=None)[source]ο
Get information about specified tables.
Parameters
table_names (Optional[Union[List[str], str]]) β
Return type
str
async arun(command)[source]ο
Execute a DAX command and return the result asynchronously.
Parameters
command (str) β
Return type
Any
get_schemas()[source]ο
Get the available schemas.
Return type
str
get_table_info(table_names=None)[source]ο
Get information about specified tables.
Parameters
table_names (Optional[Union[List[str], str]]) β
Return type
str
get_table_names()[source]ο
Get names of tables available.
Return type
Iterable[str]
run(command)[source]ο
Execute a DAX command and return a json representing the results.
Parameters
command (str) β
Return type
Any
property headers: Dict[str, str]ο
Get the token.
property request_url: strο
Get the request url.
property table_info: strο
Information about all tables in the database.
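Example (a minimal sketch; the dataset ID and table names are placeholders, and DefaultAzureCredential is one possible TokenCredential implementation):
from azure.identity import DefaultAzureCredential
from langchain.utilities import PowerBIDataset

powerbi = PowerBIDataset(
    dataset_id="<dataset-guid>",
    table_names=["Sales", "Customers"],
    credential=DefaultAzureCredential(),
)
print(powerbi.get_table_info("Sales"))
print(powerbi.run('EVALUATE ROW("count", COUNTROWS(Sales))'))  # a simple DAX query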
class langchain.utilities.PubMedAPIWrapper(*, top_k_results=3, load_max_docs=25, doc_content_chars_max=2000, load_all_available_meta=False, email='your_email@example.com', base_url_esearch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?', base_url_efetch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?', max_retry=5, sleep_time=0.2, ARXIV_MAX_QUERY_LENGTH=300)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around PubMed API.
This wrapper will use the PubMed API to conduct searches and fetch
document summaries. By default, it will return the document summaries
of the top-k results of an input search.
Parameters
top_k_results (int) β number of the top-scored document used for the PubMed tool
load_max_docs (int) β a limit to the number of loaded documents
load_all_available_meta (bool) β
if True: the metadata of the loaded Documents gets all available meta info (see https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch);
if False: the metadata gets only the most informative fields.
doc_content_chars_max (int) β
email (str) β
base_url_esearch (str) β
base_url_efetch (str) β
max_retry (int) β
sleep_time (float) β
ARXIV_MAX_QUERY_LENGTH (int) β
Return type
None
attribute doc_content_chars_max: int = 2000ο
attribute email: str = 'your_email@example.com'ο
attribute load_all_available_meta: bool = Falseο
attribute load_max_docs: int = 25ο
attribute top_k_results: int = 3ο
load(query)[source]ο
Search PubMed for documents matching the query.
Return a list of dictionaries containing the document metadata.
Parameters
query (str) β
Return type
List[dict]
load_docs(query)[source]ο
Parameters
query (str) β
Return type
List[langchain.schema.Document]
retrieve_article(uid, webenv)[source]ο
Parameters
uid (str) β
webenv (str) β
Return type
dict
run(query)[source]ο
Run PubMed search and get the article meta information.
See https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch
It uses only the most informative fields of article meta information.
Parameters
query (str) β
Return type
str
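Example (a minimal sketch; replace the email with your own, since NCBI asks for a contact address):
from langchain.utilities import PubMedAPIWrapper

pubmed = PubMedAPIWrapper(top_k_results=3, email="you@example.com")
print(pubmed.run("covid vaccine efficacy"))      # article meta information as a string
records = pubmed.load("covid vaccine efficacy")  # list of metadata dicts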
class langchain.utilities.PythonREPL(*, _globals=None, _locals=None)[source]ο
Bases: pydantic.main.BaseModel
Simulates a standalone Python REPL.
Parameters
_globals (Optional[Dict]) β
_locals (Optional[Dict]) β
Return type
None
attribute globals: Optional[Dict] [Optional] (alias '_globals')ο
attribute locals: Optional[Dict] [Optional] (alias '_locals')ο
run(command)[source]ο
Run command with own globals/locals and returns anything printed.
Parameters
command (str) β
Return type
str
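Example (a minimal sketch; only printed output is captured and returned):
from langchain.utilities import PythonREPL

repl = PythonREPL()
print(repl.run("x = 2 ** 10\nprint(x)"))  # prints 1024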
pydantic settings langchain.utilities.SceneXplainAPIWrapper[source]ο
Bases: pydantic.env_settings.BaseSettings, pydantic.main.BaseModel
Wrapper for SceneXplain API.
In order to set this up, you need an API key for the SceneXplain API.
You can obtain a key by following the steps below.
- Sign up for a free account at https://scenex.jina.ai/.
- Navigate to the API Access page (https://scenex.jina.ai/api)
and create a new API key.
Show JSON schema
{
"title": "SceneXplainAPIWrapper",
"description": "Wrapper for SceneXplain API.\n\nIn order to set this up, you need API key for the SceneXplain API.\nYou can obtain a key by following the steps below.\n- Sign up for a free account at https://scenex.jina.ai/.\n- Navigate to the API Access page (https://scenex.jina.ai/api)\n and create a new API key.",
"type": "object",
"properties": {
"scenex_api_key": {
"title": "Scenex Api Key",
"env": "SCENEX_API_KEY",
"env_names": "{'scenex_api_key'}",
"type": "string"
},
"scenex_api_url": {
"title": "Scenex Api Url",
"default": "https://us-central1-causal-diffusion.cloudfunctions.net/describe",
"env_names": "{'scenex_api_url'}",
"type": "string"
}
},
"required": [
"scenex_api_key"
],
"additionalProperties": false
}
Fields
scenex_api_key (str)
scenex_api_url (str)
attribute scenex_api_key: str [Required]ο
attribute scenex_api_url: str = 'https://us-central1-causal-diffusion.cloudfunctions.net/describe'ο
run(image)[source]ο
Run SceneXplain image explainer.
Parameters
image (str) β
Return type
str
validator validate_environment » all fields[source]ο
Validate that api key exists in environment.
Parameters
values (Dict) β
Return type
Dict
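Example (a minimal sketch; assumes SCENEX_API_KEY is set, and the image URL is hypothetical):
from langchain.utilities import SceneXplainAPIWrapper

scenex = SceneXplainAPIWrapper()
print(scenex.run("https://example.com/some-image.jpg"))  # textual description of the image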
class langchain.utilities.SearxSearchWrapper(*, searx_host='', unsecure=False, params=None, headers=None, engines=[], categories=[], query_suffix='', k=10, aiosession=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Searx API.
To use, you need to provide the searx host by passing the named parameter
searx_host or exporting the environment variable SEARX_HOST.
In some situations you might want to disable SSL verification, for example
if you are running searx locally. You can do this by passing the named parameter
unsecure. You can also pass the host url scheme as http to disable SSL.
Example
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")
Example with SSL disabled:
from langchain.utilities import SearxSearchWrapper
# note the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://localhost:8888",
unsecure=True)
Parameters
searx_host (str) β
unsecure (bool) β
params (dict) β
headers (Optional[dict]) β
engines (Optional[List[str]]) β
categories (Optional[List[str]]) β
query_suffix (Optional[str]) β
k (int) β
aiosession (Optional[Any]) β
Return type
None
attribute aiosession: Optional[Any] = Noneο
attribute categories: Optional[List[str]] = []ο
attribute engines: Optional[List[str]] = []ο
attribute headers: Optional[dict] = Noneο
attribute k: int = 10ο
attribute params: dict [Optional]ο
attribute query_suffix: Optional[str] = ''ο
attribute searx_host: str = ''ο
attribute unsecure: bool = Falseο
async aresults(query, num_results, engines=None, query_suffix='', **kwargs)[source]ο
Asynchronously query with json results.
Uses aiohttp. See results for more info.
Parameters
query (str) β
num_results (int) β
engines (Optional[List[str]]) β
query_suffix (Optional[str]) β
kwargs (Any) β
Return type
List[Dict]
async arun(query, engines=None, query_suffix='', **kwargs)[source]ο
Asynchronous version of run.
Parameters
query (str) β
engines (Optional[List[str]]) β
query_suffix (Optional[str]) β
kwargs (Any) β
Return type
str
results(query, num_results, engines=None, categories=None, query_suffix='', **kwargs)[source]ο
Run query through Searx API and returns the results with metadata.
Parameters
query (str) β The query to search for.
query_suffix (Optional[str]) β Extra suffix appended to the query.
num_results (int) β Limit the number of results to return.
engines (Optional[List[str]]) β List of engines to use for the query.
categories (Optional[List[str]]) β List of categories to use for the query.
**kwargs β extra parameters to pass to the searx API.
kwargs (Any) β
Returns
{snippet: The description of the result.
title: The title of the result.
link: The link to the result.
engines: The engines used for the result.
category: Searx category of the result.
}
Return type
Dict with the following keys
run(query, engines=None, categories=None, query_suffix='', **kwargs)[source]ο
Run query through Searx API and parse results.
You can pass any other params to the searx query API.
Parameters
query (str) β The query to search for.
query_suffix (Optional[str]) β Extra suffix appended to the query.
engines (Optional[List[str]]) β List of engines to use for the query.
categories (Optional[List[str]]) β List of categories to use for the query.
**kwargs β extra parameters to pass to the searx API.
kwargs (Any) β
Returns
The result of the query.
Return type
str
Raises
ValueError β If an error occurred with the query.
Example
This will make a query to the qwant engine:
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France ?", engine="qwant")
# the same result can be achieved using the `!` syntax of searx
# to select the engine using `query_suffix`
searx.run("what is the weather in France ?", query_suffix="!qwant")
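Example (a minimal sketch of results with metadata; the engine and category selections are illustrative):
from langchain.utilities import SearxSearchWrapper

searx = SearxSearchWrapper(searx_host="http://localhost:8888")
hits = searx.results(
    "large language models",
    num_results=5,
    engines=["duckduckgo"],
    categories=["science"],
)  # each hit carries snippet, title, link, engines, category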
class langchain.utilities.SerpAPIWrapper(*, search_engine=None, params={'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}, serpapi_api_key=None, aiosession=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around SerpAPI.
To use, you should have the google-search-results python package installed,
and the environment variable SERPAPI_API_KEY set with your API key, or pass
serpapi_api_key as a named parameter to the constructor.
Example
from langchain import SerpAPIWrapper
serpapi = SerpAPIWrapper()
Parameters
search_engine (Any) β
params (dict) β
serpapi_api_key (Optional[str]) β
aiosession (Optional[aiohttp.client.ClientSession]) β
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = Noneο
attribute params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}ο
attribute serpapi_api_key: Optional[str] = Noneο
async aresults(query)[source]ο
Use aiohttp to run query through SerpAPI and return the results async.
Parameters
query (str) β
Return type
dict
async arun(query, **kwargs)[source]ο
Run query through SerpAPI and parse result async.
Parameters
query (str) β
kwargs (Any) β
Return type
str