on_chain_start(serialized, inputs, **kwargs)[source] Run when chain starts running. Parameters serialized (Dict[str, Any]) – inputs (Dict[str, Any]) – kwargs (Any) – Return type None on_chain_end(outputs, **kwargs)[source] Run when chain ends running. Parameters outputs (Dict[str, Any]) – kwargs (Any) – Return type None on_chain_error(error, **kwargs)[source] Run when chain errors. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_tool_start(serialized, input_str, **kwargs)[source] Run when tool starts running. Parameters serialized (Dict[str, Any]) – input_str (str) – kwargs (Any) – Return type None on_tool_end(output, **kwargs)[source] Run when tool ends running. Parameters output (str) – kwargs (Any) – Return type None on_tool_error(error, **kwargs)[source] Run when tool errors. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_text(text, **kwargs)[source] Run when agent is ending. Parameters text (str) – kwargs (Any) – Return type None on_agent_finish(finish, **kwargs)[source] Run when agent ends running. Parameters finish (langchain.schema.AgentFinish) – kwargs (Any) – Return type None on_agent_action(action, **kwargs)[source] Run on agent action. Parameters
action (langchain.schema.AgentAction) – kwargs (Any) – Return type Any flush_tracker(langchain_asset=None, reset=True, finish=False, job_type=None, project=None, entity=None, tags=None, group=None, name=None, notes=None, visualize=None, complexity_metrics=None)[source] Flush the tracker and reset the session. Parameters langchain_asset (Any) – The langchain asset to save. reset (bool) – Whether to reset the session. finish (bool) – Whether to finish the run. job_type (Optional[str]) – The job type. project (Optional[str]) – The project. entity (Optional[str]) – The entity. tags (Optional[Sequence]) – The tags. group (Optional[str]) – The group. name (Optional[str]) – The name. notes (Optional[str]) – The notes. visualize (Optional[bool]) – Whether to visualize. complexity_metrics (Optional[bool]) – Whether to compute complexity metrics. Returns None Return type None class langchain.callbacks.WhyLabsCallbackHandler(logger)[source] Bases: langchain.callbacks.base.BaseCallbackHandler WhyLabs CallbackHandler. Parameters logger (Logger) – on_llm_start(serialized, prompts, **kwargs)[source] Pass the input prompts to the logger. Parameters serialized (Dict[str, Any]) – prompts (List[str]) – kwargs (Any) – Return type None on_llm_end(response, **kwargs)[source] Pass the generated response to the logger. Parameters response (langchain.schema.LLMResult) – kwargs (Any) – Return type None
on_llm_new_token(token, **kwargs)[source] Do nothing. Parameters token (str) – kwargs (Any) – Return type None on_llm_error(error, **kwargs)[source] Do nothing. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_chain_start(serialized, inputs, **kwargs)[source] Do nothing. Parameters serialized (Dict[str, Any]) – inputs (Dict[str, Any]) – kwargs (Any) – Return type None on_chain_end(outputs, **kwargs)[source] Do nothing. Parameters outputs (Dict[str, Any]) – kwargs (Any) – Return type None on_chain_error(error, **kwargs)[source] Do nothing. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_tool_start(serialized, input_str, **kwargs)[source] Do nothing. Parameters serialized (Dict[str, Any]) – input_str (str) – kwargs (Any) – Return type None on_agent_action(action, color=None, **kwargs)[source] Do nothing. Parameters action (langchain.schema.AgentAction) – color (Optional[str]) – kwargs (Any) – Return type Any on_tool_end(output, color=None, observation_prefix=None, llm_prefix=None, **kwargs)[source] Do nothing. Parameters output (str) – color (Optional[str]) – observation_prefix (Optional[str]) –
llm_prefix (Optional[str]) – kwargs (Any) – Return type None on_tool_error(error, **kwargs)[source] Do nothing. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_text(text, **kwargs)[source] Do nothing. Parameters text (str) – kwargs (Any) – Return type None on_agent_finish(finish, color=None, **kwargs)[source] Run on agent end. Parameters finish (langchain.schema.AgentFinish) – color (Optional[str]) – kwargs (Any) – Return type None flush()[source] Return type None close()[source] Return type None classmethod from_params(*, api_key=None, org_id=None, dataset_id=None, sentiment=False, toxicity=False, themes=False)[source] Instantiate whylogs Logger from params. Parameters api_key (Optional[str]) – WhyLabs API key. Optional because the preferred way to specify the API key is with the environment variable WHYLABS_API_KEY. org_id (Optional[str]) – WhyLabs organization id to write profiles to. If not set, it must be specified in the environment variable WHYLABS_DEFAULT_ORG_ID. dataset_id (Optional[str]) – The model or dataset this callback is gathering telemetry for. If not set, it must be specified in the environment variable WHYLABS_DEFAULT_DATASET_ID. sentiment (bool) – If True, will initialize a model to compute a sentiment analysis compound score. Defaults to False and will not gather this metric.
toxicity (bool) – If True, will initialize a model to score toxicity. Defaults to False and will not gather this metric. themes (bool) – If True, will initialize a model to calculate distance to configured themes. Defaults to False and will not gather this metric. Return type Logger langchain.callbacks.get_openai_callback()[source] Get the OpenAI callback handler in a context manager, which conveniently exposes token and cost information. Returns The OpenAI callback handler. Return type OpenAICallbackHandler Example >>> with get_openai_callback() as cb: ... # Use the OpenAI callback handler langchain.callbacks.tracing_enabled(session_name='default')[source] Get the deprecated LangChainTracer in a context manager. Parameters session_name (str, optional) – The name of the session. Defaults to "default". Returns The LangChainTracer session. Return type TracerSessionV1 Example >>> with tracing_enabled() as session: ... # Use the LangChainTracer session langchain.callbacks.wandb_tracing_enabled(session_name='default')[source] Get the WandbTracer in a context manager. Parameters session_name (str, optional) – The name of the session. Defaults to "default". Returns None Return type Generator[None, None, None] Example >>> with wandb_tracing_enabled() as session: ... # Use the WandbTracer session
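A minimal usage sketch of get_openai_callback (the LLM call is illustrative and assumes an OpenAI API key is configured in the environment):

from langchain.callbacks import get_openai_callback
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
with get_openai_callback() as cb:
    llm("Tell me a joke")
    # the handler accumulates usage while the context manager is open
    print(cb.total_tokens, cb.prompt_tokens, cb.completion_tokens, cb.total_cost)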
Document Loaders All different types of document loaders. class langchain.document_loaders.AcreomLoader(path, encoding='UTF-8', collect_metadata=True)[source] Bases: langchain.document_loaders.base.BaseLoader Parameters path (str) – encoding (str) – collect_metadata (bool) – FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL) lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.AZLyricsLoader(web_path, header_template=None, verify=True)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Loader that loads AZLyrics webpages. Parameters web_path (Union[str, List[str]]) – header_template (Optional[dict]) – verify (Optional[bool]) – load()[source] Load webpage. Return type List[langchain.schema.Document] class langchain.document_loaders.AirbyteJSONLoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads local airbyte json files. Parameters file_path (str) – load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.AirtableLoader(api_token, table_id, base_id)[source] Bases: langchain.document_loaders.base.BaseLoader Loader for Airtable tables. Parameters api_token (str) – table_id (str) –
base_id (str) – lazy_load()[source] Lazy load records from table. Return type Iterator[langchain.schema.Document] load()[source] Load Table. Return type List[langchain.schema.Document] class langchain.document_loaders.ApifyDatasetLoader(dataset_id, dataset_mapping_function)[source] Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel Logic for loading documents from Apify datasets. Parameters dataset_id (str) – dataset_mapping_function (Callable[[Dict], langchain.schema.Document]) – Return type None attribute apify_client: Any = None attribute dataset_id: str [Required] The ID of the dataset on the Apify platform. attribute dataset_mapping_function: Callable[[Dict], langchain.schema.Document] [Required] A custom function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class. load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.ArxivLoader(query, load_max_docs=100, load_all_available_meta=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from arxiv.org into a list of Documents. Each document represents one search result; the loader converts the original PDF into text. Parameters query (str) – load_max_docs (Optional[int]) – load_all_available_meta (Optional[bool]) – load()[source] Load data into document objects. Return type List[langchain.schema.Document]
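A short sketch of ArxivLoader (the query string is illustrative; the arxiv python package must be installed):

from langchain.document_loaders import ArxivLoader

docs = ArxivLoader(query="quantum entanglement", load_max_docs=2).load()
print(docs[0].metadata)  # title, authors, summary, and other arXiv metadata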
class langchain.document_loaders.AzureBlobStorageContainerLoader(conn_str, container, prefix='')[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from Azure Blob Storage. Parameters conn_str (str) – container (str) – prefix (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.AzureBlobStorageFileLoader(conn_str, container, blob_name)[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from Azure Blob Storage. Parameters conn_str (str) – container (str) – blob_name (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.BSHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses beautiful soup to parse HTML files. Parameters file_path (str) – open_encoding (Optional[str]) – bs_kwargs (Optional[dict]) – get_text_separator (str) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.BibtexLoader(file_path, *, parser=None, max_docs=None, max_content_chars=4000, load_extra_metadata=False, file_pattern='[^:]+\\.pdf')[source] Bases: langchain.document_loaders.base.BaseLoader Loads a bibtex file into a list of Documents.
Each document represents one entry from the bibtex file. If a PDF file is present in the file bibtex field, the original PDF is loaded into the document text. If no such file entry is present, the abstract field is used instead. Parameters file_path (str) – parser (Optional[langchain.utilities.bibtex.BibtexparserWrapper]) – max_docs (Optional[int]) – max_content_chars (Optional[int]) – load_extra_metadata (bool) – file_pattern (str) – lazy_load()[source] Load bibtex file using bibtexparser and get the article texts plus the article metadata. See https://bibtexparser.readthedocs.io/en/master/ Returns a list of documents with the document.page_content in text format Return type Iterator[langchain.schema.Document] load()[source] Load bibtex file documents from the given bibtex file path. See https://bibtexparser.readthedocs.io/en/master/ Parameters file_path – the path to the bibtex file Returns a list of documents with the document.page_content in text format Return type List[langchain.schema.Document] class langchain.document_loaders.BigQueryLoader(query, project=None, page_content_columns=None, metadata_columns=None, credentials=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from BigQuery into a list of documents. Each document represents one row of the result. The page_content_columns are written into the page_content of the document. The metadata_columns are written into the metadata of the document. By default, all columns are written into the page_content and none into the metadata.
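A minimal sketch of BigQueryLoader (the table and column names are hypothetical, and Google Cloud credentials are assumed to be configured in the environment):

from langchain.document_loaders import BigQueryLoader

query = "SELECT id, dialogue FROM my_dataset.my_table"  # hypothetical table
loader = BigQueryLoader(query, page_content_columns=["dialogue"], metadata_columns=["id"])
docs = loader.load()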
Parameters query (str) – project (Optional[str]) – page_content_columns (Optional[List[str]]) – metadata_columns (Optional[List[str]]) – credentials (Optional[Credentials]) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.BiliBiliLoader(video_urls)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads bilibili transcripts. Parameters video_urls (List[str]) – load()[source] Load from bilibili url. Return type List[langchain.schema.Document] class langchain.document_loaders.BlackboardLoader(blackboard_course_url, bbrouter, load_all_recursively=True, basic_auth=None, cookies=None)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Loader that loads all documents from a Blackboard course. This loader is not compatible with all Blackboard courses. It is only compatible with courses that use the new Blackboard interface. To use this loader, you must have the BbRouter cookie. You can get this cookie by logging into the course and then copying the value of the BbRouter cookie from the browser's developer tools. Example from langchain.document_loaders import BlackboardLoader loader = BlackboardLoader( blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1", bbrouter="expires:12345...", ) documents = loader.load() Parameters blackboard_course_url (str) – bbrouter (str) –
load_all_recursively (bool) – basic_auth (Optional[Tuple[str, str]]) – cookies (Optional[dict]) – folder_path: str base_url: str load_all_recursively: bool check_bs4()[source] Check if BeautifulSoup4 is installed. Raises ImportError – If BeautifulSoup4 is not installed. Return type None load()[source] Load data into document objects. Returns List of documents. Return type List[langchain.schema.Document] download(path)[source] Download a file from a url. Parameters path (str) – Path to the file. Return type None parse_filename(url)[source] Parse the filename from a url. Parameters url (str) – Url to parse the filename from. Returns The filename. Return type str class langchain.document_loaders.Blob(*, data=None, mimetype=None, encoding='utf-8', path=None)[source] Bases: pydantic.main.BaseModel A blob is used to represent raw data by either reference or value. Provides an interface to materialize the blob in different representations, and help to decouple the development of data loaders from the downstream parsing of the raw data. Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob Parameters data (Optional[Union[bytes, str]]) – mimetype (Optional[str]) – encoding (str) – path (Optional[Union[str, pathlib.PurePath]]) – Return type None attribute data: Optional[Union[bytes, str]] = None
attribute encoding: str = 'utf-8' attribute mimetype: Optional[str] = None attribute path: Optional[Union[str, pathlib.PurePath]] = None as_bytes()[source] Read data as bytes. Return type bytes as_bytes_io()[source] Read data as a byte stream. Return type Generator[Union[_io.BytesIO, _io.BufferedReader], None, None] as_string()[source] Read data as a string. Return type str classmethod from_data(data, *, encoding='utf-8', mime_type=None, path=None)[source] Initialize the blob from in-memory data. Parameters data (Union[str, bytes]) – the in-memory data associated with the blob encoding (str) – Encoding to use if decoding the bytes into a string mime_type (Optional[str]) – if provided, will be set as the mime-type of the data path (Optional[str]) – if provided, will be set as the source from which the data came Returns Blob instance Return type langchain.document_loaders.blob_loaders.schema.Blob classmethod from_path(path, *, encoding='utf-8', mime_type=None, guess_type=True)[source] Load the blob from a path-like object. Parameters path (Union[str, pathlib.PurePath]) – path-like object to the file to be read encoding (str) – Encoding to use if decoding the bytes into a string mime_type (Optional[str]) – if provided, will be set as the mime-type of the data guess_type (bool) – If True, the mimetype will be guessed from the file extension, if a mime-type was not provided Returns Blob instance Return type langchain.document_loaders.blob_loaders.schema.Blob
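A small sketch of the Blob interface (the file name is hypothetical):

from langchain.document_loaders import Blob

blob = Blob.from_path("example.pdf")  # mimetype is guessed from the extension
data = blob.as_bytes()
print(blob.mimetype, blob.source)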
property source: Optional[str] The source location of the blob as a string if known, otherwise none. class langchain.document_loaders.BlobLoader[source] Bases: abc.ABC Abstract interface for blob loader implementations. Implementers should be able to load raw content from a storage system according to some criteria and return the raw content lazily as a stream of blobs. abstract yield_blobs()[source] A lazy loader for raw data represented by LangChain's Blob object. Returns A generator over blobs Return type Iterable[langchain.document_loaders.blob_loaders.schema.Blob] class langchain.document_loaders.BlockchainDocumentLoader(contract_address, blockchainType=BlockchainType.ETH_MAINNET, api_key='docs-demo', startToken='', get_all_tokens=False, max_execution_time=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads elements from a blockchain smart contract into Langchain documents. The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet, Polygon mainnet, and Polygon Mumbai testnet. If no BlockchainType is specified, the default is Ethereum mainnet. The Loader uses the Alchemy API to interact with the blockchain. The ALCHEMY_API_KEY environment variable must be set to use this loader. The API returns 100 NFTs per request and can be paginated using the startToken parameter. If get_all_tokens is set to True, the loader will get all tokens on the contract. Note that for contracts with a large number of tokens, this may take a long time (e.g. 10k tokens is 100 requests). The default value is False for this reason.
The max_execution_time (sec) can be set to limit the execution time of the loader. Future versions of this loader can: Support additional Alchemy APIs (e.g. getTransactions, etc.) Support additional blockchain APIs (e.g. Infura, Opensea, etc.) Parameters contract_address (str) – blockchainType (langchain.document_loaders.blockchain.BlockchainType) – api_key (str) – startToken (str) – get_all_tokens (bool) – max_execution_time (Optional[int]) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.CSVLoader(file_path, source_column=None, csv_args=None, encoding=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a CSV file into a list of documents. Each document represents one row of the CSV file. Every row is converted into a key/value pair and outputted to a new line in the document's page_content. The source for each document loaded from csv is set to the value of the file_path argument for all documents by default. You can override this by setting the source_column argument to the name of a column in the CSV file. The source of each document will then be set to the value of the column with the name specified in source_column. Output Example: column1: value1 column2: value2 column3: value3 Parameters file_path (str) – source_column (Optional[str]) – csv_args (Optional[Dict]) – encoding (Optional[str]) –
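A brief sketch of CSVLoader (the file and column names are hypothetical):

from langchain.document_loaders import CSVLoader

loader = CSVLoader("reviews.csv", source_column="review_id")
docs = loader.load()  # one Document per CSV row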
load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.ChatGPTLoader(log_file, num_logs=- 1)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads conversations from exported ChatGPT data. Parameters log_file (str) – num_logs (int) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.CoNLLULoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader Load CoNLL-U files. Parameters file_path (str) – load()[source] Load from file path. Return type List[langchain.schema.Document] class langchain.document_loaders.CollegeConfidentialLoader(web_path, header_template=None, verify=True)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Loader that loads College Confidential webpages. Parameters web_path (Union[str, List[str]]) – header_template (Optional[dict]) – verify (Optional[bool]) – load()[source] Load webpage. Return type List[langchain.schema.Document] class langchain.document_loaders.ConfluenceLoader(url, api_key=None, username=None, oauth2=None, token=None, cloud=True, number_of_retries=3, min_retry_seconds=2, max_retry_seconds=10, confluence_kwargs=None)[source] Bases: langchain.document_loaders.base.BaseLoader Load Confluence pages. Port of https://llamahub.ai/l/confluence This currently supports username/api_key, Oauth2 login or personal access token authentication.
Specify a list of page_ids and/or a space_key to load the corresponding pages into Document objects; if both are specified, the union of both sets will be returned. You can also specify a boolean include_attachments to include attachments; this is set to False by default. If set to True, all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG, SVG, Word and Excel. The Confluence API supports different formats of page content. The storage format is the raw XML representation for storage. The view format is the HTML representation for viewing, with macros rendered as they would be viewed by users. You can pass an enum content_format argument to load() to specify the content format; this is set to ContentFormat.STORAGE by default. Hint: space_key and page_id can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id> Example from langchain.document_loaders import ConfluenceLoader loader = ConfluenceLoader( url="https://yoursite.atlassian.com/wiki", username="me", api_key="12345" ) documents = loader.load(space_key="SPACE", limit=50) Parameters url (str) – _description_ api_key (str, optional) – _description_, defaults to None username (str, optional) – _description_, defaults to None oauth2 (dict, optional) – _description_, defaults to {} token (str, optional) – _description_, defaults to None cloud (bool, optional) – _description_, defaults to True
number_of_retries (Optional[int], optional) – How many times to retry, defaults to 3 min_retry_seconds (Optional[int], optional) – defaults to 2 max_retry_seconds (Optional[int], optional) – defaults to 10 confluence_kwargs (dict, optional) – additional kwargs to initialize confluence with Raises ValueError – Errors while validating input ImportError – Required dependencies not installed. static validate_init_args(url=None, api_key=None, username=None, oauth2=None, token=None)[source] Validates proper combinations of init arguments. Parameters url (Optional[str]) – api_key (Optional[str]) – username (Optional[str]) – oauth2 (Optional[dict]) – token (Optional[str]) – Return type Optional[List] load(space_key=None, page_ids=None, label=None, cql=None, include_restricted_content=False, include_archived_content=False, include_attachments=False, include_comments=False, content_format=ContentFormat.STORAGE, limit=50, max_pages=1000, ocr_languages=None)[source] Parameters space_key (Optional[str], optional) – Space key retrieved from a confluence URL, defaults to None page_ids (Optional[List[str]], optional) – List of specific page IDs to load, defaults to None label (Optional[str], optional) – Get all pages with this label, defaults to None cql (Optional[str], optional) – CQL Expression, defaults to None include_restricted_content (bool, optional) – defaults to False include_archived_content (bool, optional) – Whether to include archived content, defaults to False include_attachments (bool, optional) – defaults to False include_comments (bool, optional) – defaults to False
content_format (ContentFormat) – Specify content format, defaults to ContentFormat.STORAGE limit (int, optional) – Maximum number of pages to retrieve per request, defaults to 50 max_pages (int, optional) – Maximum number of pages to retrieve in total, defaults to 1000 ocr_languages (str, optional) – The languages to use for the Tesseract agent. To use a language, you'll first need to install the appropriate Tesseract language pack. Raises ValueError – _description_ ImportError – _description_ Returns _description_ Return type List[Document] paginate_request(retrieval_method, **kwargs)[source] Paginate the various methods to retrieve groups of pages. Unfortunately, due to page size, sometimes the Confluence API doesn't match the limit value. If limit is >100, Confluence seems to cap the response at 100. Also, due to the Atlassian Python package, we don't get the "next" values from the "_links" key because they only return the value from the results key. So here, the pagination starts from 0 and goes until the max_pages, getting the limit number of pages with each request. We have to manually check if there are more docs based on the length of the returned list of pages, rather than just checking for the presence of a next key in the response like this page would have you do: https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/ Parameters retrieval_method (callable) – Function used to retrieve docs kwargs (Any) – Returns List of documents Return type List is_public_page(page)[source] Check if a page is publicly accessible. Parameters page (dict) –
Return type bool process_pages(pages, include_restricted_content, include_attachments, include_comments, content_format, ocr_languages=None)[source] Process a list of pages into a list of documents. Parameters pages (List[dict]) – include_restricted_content (bool) – include_attachments (bool) – include_comments (bool) – content_format (langchain.document_loaders.confluence.ContentFormat) – ocr_languages (Optional[str]) – Return type List[langchain.schema.Document] process_page(page, include_attachments, include_comments, content_format, ocr_languages=None)[source] Parameters page (dict) – include_attachments (bool) – include_comments (bool) – content_format (langchain.document_loaders.confluence.ContentFormat) – ocr_languages (Optional[str]) – Return type langchain.schema.Document process_attachment(page_id, ocr_languages=None)[source] Parameters page_id (str) – ocr_languages (Optional[str]) – Return type List[str] process_pdf(link, ocr_languages=None)[source] Parameters link (str) – ocr_languages (Optional[str]) – Return type str process_image(link, ocr_languages=None)[source] Parameters link (str) – ocr_languages (Optional[str]) – Return type str process_doc(link)[source] Parameters link (str) – Return type str process_xls(link)[source] Parameters link (str) – Return type str process_svg(link, ocr_languages=None)[source] Parameters link (str) – ocr_languages (Optional[str]) – Return type str
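Building on the ConfluenceLoader entry above, a hedged sketch of loading pages by label together with their attachments (the URL, credentials, and label value are hypothetical):

from langchain.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(url="https://yoursite.atlassian.com/wiki", username="me", api_key="12345")
docs = loader.load(label="runbook", include_attachments=True, max_pages=200)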
class langchain.document_loaders.DataFrameLoader(data_frame, page_content_column='text')[source] Bases: langchain.document_loaders.base.BaseLoader Load Pandas DataFrames. Parameters data_frame (Any) – page_content_column (str) – lazy_load()[source] Lazy load records from dataframe. Return type Iterator[langchain.schema.Document] load()[source] Load full dataframe. Return type List[langchain.schema.Document] class langchain.document_loaders.DiffbotLoader(api_token, urls, continue_on_failure=True)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Diffbot file json. Parameters api_token (str) – urls (List[str]) – continue_on_failure (bool) – load()[source] Extract text from Diffbot on all the URLs and return Document instances Return type List[langchain.schema.Document] class langchain.document_loaders.DirectoryLoader(path, glob='**/[!.]*', silent_errors=False, load_hidden=False, loader_cls=<class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs=None, recursive=False, show_progress=False, use_multithreading=False, max_concurrency=4)[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from a directory. Parameters path (str) – glob (str) – silent_errors (bool) – load_hidden (bool) –
loader_cls (Union[Type[langchain.document_loaders.unstructured.UnstructuredFileLoader], Type[langchain.document_loaders.text.TextLoader], Type[langchain.document_loaders.html_bs.BSHTMLLoader]]) – loader_kwargs (Optional[dict]) – recursive (bool) – show_progress (bool) – use_multithreading (bool) – max_concurrency (int) – load_file(item, path, docs, pbar)[source] Parameters item (pathlib.Path) – path (pathlib.Path) – docs (List[langchain.schema.Document]) – pbar (Optional[Any]) – Return type None load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.DiscordChatLoader(chat_log, user_id_col='ID')[source] Bases: langchain.document_loaders.base.BaseLoader Load Discord chat logs. Parameters chat_log (pd.DataFrame) – user_id_col (str) – load()[source] Load all chat messages. Return type List[langchain.schema.Document] class langchain.document_loaders.DocugamiLoader(*, api='https://api.docugami.com/v1preview1', access_token=None, docset_id=None, document_ids=None, file_paths=None, min_chunk_size=32)[source] Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel Loader that loads processed docs from Docugami. To use, you should have the lxml python package installed. Parameters api (str) – access_token (Optional[str]) –
docset_id (Optional[str]) – document_ids (Optional[Sequence[str]]) – file_paths (Optional[Sequence[Union[pathlib.Path, str]]]) – min_chunk_size (int) – Return type None attribute access_token: Optional[str] = None attribute api: str = 'https://api.docugami.com/v1preview1' attribute docset_id: Optional[str] = None attribute document_ids: Optional[Sequence[str]] = None attribute file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None attribute min_chunk_size: int = 32 load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.Docx2txtLoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader, abc.ABC Loads a DOCX with docx2txt and chunks at character level. Defaults to checking for a local file, but if the file is a web path, it will download it to a temporary file, use that, and then clean up the temporary file after completion. Parameters file_path (str) – load()[source] Load given path as single page. Return type List[langchain.schema.Document] class langchain.document_loaders.DuckDBLoader(query, database=':memory:', read_only=False, config=None, page_content_columns=None, metadata_columns=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from DuckDB into a list of documents. Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns are written into the metadata of the document. By default, all columns are written into the page_content and none into the metadata. Parameters query (str) – database (str) – read_only (bool) – config (Optional[Dict[str, str]]) – page_content_columns (Optional[List[str]]) – metadata_columns (Optional[List[str]]) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.EmbaasBlobLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={})[source] Bases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseBlobParser Wrapper around embaas's document byte loader service. To use, you should have the environment variable EMBAAS_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example # Default parsing from langchain.document_loaders.embaas import EmbaasBlobLoader loader = EmbaasBlobLoader() blob = Blob.from_path(path="example.mp3") documents = loader.parse(blob=blob) # Custom api parameters (create embeddings automatically) from langchain.document_loaders.embaas import EmbaasBlobLoader loader = EmbaasBlobLoader( params={ "should_embed": True, "model": "e5-large-v2", "chunk_size": 256, "chunk_splitter": "CharacterTextSplitter" } )
"chunk_splitter": "CharacterTextSplitter" } ) blob = Blob.from_path(path="example.pdf") documents = loader.parse(blob=blob) Parameters embaas_api_key (Optional[str]) – api_url (str) – params (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) – Return type None lazy_parse(blob)[source] Lazy parsing interface. Subclasses are required to implement this method. Parameters blob (langchain.document_loaders.blob_loaders.schema.Blob) – Blob instance Returns Generator of documents Return type Iterator[langchain.schema.Document] class langchain.document_loaders.EmbaasLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={}, file_path, blob_loader=None)[source] Bases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseLoader Wrapper around embaas’s document loader service. To use, you should have the environment variable EMBAAS_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example # Default parsing from langchain.document_loaders.embaas import EmbaasLoader loader = EmbaasLoader(file_path="example.mp3") documents = loader.load() # Custom api parameters (create embeddings automatically) from langchain.document_loaders.embaas import EmbaasBlobLoader loader = EmbaasBlobLoader( file_path="example.pdf", params={ "should_embed": True, "model": "e5-large-v2", "chunk_size": 256,
"chunk_size": 256, "chunk_splitter": "CharacterTextSplitter" } ) documents = loader.load() Parameters embaas_api_key (Optional[str]) – api_url (str) – params (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) – file_path (str) – blob_loader (Optional[langchain.document_loaders.embaas.EmbaasBlobLoader]) – Return type None attribute blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = None The blob loader to use. If not provided, a default one will be created. attribute file_path: str [Required] The path to the file to load. lazy_load()[source] Load the documents from the file path lazily. Return type Iterator[langchain.schema.Document] load()[source] Load data into document objects. Return type List[langchain.schema.Document] load_and_split(text_splitter=None)[source] Load documents and split into chunks. Parameters text_splitter (Optional[langchain.text_splitter.TextSplitter]) – Return type List[langchain.schema.Document] class langchain.document_loaders.EverNoteLoader(file_path, load_single_document=True)[source] Bases: langchain.document_loaders.base.BaseLoader EverNote Loader. Loads an EverNote notebook export file e.g. my_notebook.enex into Documents. Instructions on producing this file can be found at https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML Currently only the plain text in the note is extracted and stored as the contents
of the Document; any non-content metadata (e.g. 'author', 'created', 'updated', etc., but not 'content-raw' or 'resource') tags on the note will be extracted and stored as metadata on the Document. Parameters file_path (str) – The path to the notebook export with a .enex extension load_single_document (bool) – Whether or not to concatenate the content of all notes into a single long Document. If this is set to True, the only metadata on the document will be the 'source', which contains the file name of the export. load()[source] Load documents from EverNote export file. Return type List[langchain.schema.Document] class langchain.document_loaders.FacebookChatLoader(path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Facebook messages json directory dump. Parameters path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.FaunaLoader(query, page_content_field, secret, metadata_fields=None)[source] Bases: langchain.document_loaders.base.BaseLoader FaunaDB Loader. Parameters query (str) – page_content_field (str) – secret (str) – metadata_fields (Optional[Sequence[str]]) – query The FQL query string to execute. Type str page_content_field The field that contains the content of each page. Type str secret The secret key for authenticating to FaunaDB. Type str metadata_fields Optional list of field names to include in metadata. Type Optional[Sequence[str]]
load()[source] Load data into document objects. Return type List[langchain.schema.Document] lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.FigmaFileLoader(access_token, ids, key)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Figma file json. Parameters access_token (str) – ids (str) – key (str) – load()[source] Load file Return type List[langchain.schema.Document] class langchain.document_loaders.FileSystemBlobLoader(path, *, glob='**/[!.]*', suffixes=None, show_progress=False)[source] Bases: langchain.document_loaders.blob_loaders.schema.BlobLoader Blob loader for the local file system. Example: from langchain.document_loaders.blob_loaders import FileSystemBlobLoader loader = FileSystemBlobLoader("/path/to/directory") for blob in loader.yield_blobs(): print(blob) Parameters path (Union[str, pathlib.Path]) – glob (str) – suffixes (Optional[Sequence[str]]) – show_progress (bool) – Return type None yield_blobs()[source] Yield blobs that match the requested pattern. Return type Iterable[langchain.document_loaders.blob_loaders.schema.Blob] count_matching_files()[source] Count files that match the pattern without loading them. Return type int class langchain.document_loaders.GCSDirectoryLoader(project_name, bucket, prefix='')[source]
Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from GCS. Parameters project_name (str) – bucket (str) – prefix (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.GCSFileLoader(project_name, bucket, blob)[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from GCS. Parameters project_name (str) – bucket (str) – blob (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.GitHubIssuesLoader(*, repo, access_token, include_prs=True, milestone=None, state=None, assignee=None, creator=None, mentioned=None, labels=None, sort=None, direction=None, since=None)[source] Bases: langchain.document_loaders.github.BaseGitHubLoader Parameters repo (str) – access_token (str) – include_prs (bool) – milestone (Optional[Union[int, Literal['*', 'none']]]) – state (Optional[Literal['open', 'closed', 'all']]) – assignee (Optional[str]) – creator (Optional[str]) – mentioned (Optional[str]) – labels (Optional[List[str]]) – sort (Optional[Literal['created', 'updated', 'comments']]) – direction (Optional[Literal['asc', 'desc']]) – since (Optional[str]) – Return type None attribute assignee: Optional[str] = None
Filter on assigned user. Pass 'none' for no user and '*' for any user. attribute creator: Optional[str] = None Filter on the user that created the issue. attribute direction: Optional[Literal['asc', 'desc']] = None The direction to sort the results by. Can be one of: 'asc', 'desc'. attribute include_prs: bool = True If True include Pull Requests in results, otherwise ignore them. attribute labels: Optional[List[str]] = None Label names to filter on. Example: bug,ui,@high. attribute mentioned: Optional[str] = None Filter on a user that's mentioned in the issue. attribute milestone: Optional[Union[int, Literal['*', 'none']]] = None If integer is passed, it should be a milestone's number field. If the string '*' is passed, issues with any milestone are accepted. If the string 'none' is passed, issues without milestones are returned. attribute since: Optional[str] = None Only show notifications updated after the given time. This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ. attribute sort: Optional[Literal['created', 'updated', 'comments']] = None What to sort results by. Can be one of: 'created', 'updated', 'comments'. Default is 'created'. attribute state: Optional[Literal['open', 'closed', 'all']] = None Filter on issue state. Can be one of: 'open', 'closed', 'all'. lazy_load()[source] Get issues of a GitHub repository.
Returns A list of Documents with attributes: page_content and metadata (url, title, creator, created_at, last_update_time, closed_time, number of comments, state, labels, assignee, assignees, milestone, locked, number, is_pull_request). Return type List[langchain.schema.Document] load()[source] Get issues of a GitHub repository. Returns A list of Documents with the same attributes as lazy_load(). Return type List[langchain.schema.Document] parse_issue(issue)[source] Create Document objects from a list of GitHub issues. Parameters issue (dict) – Return type langchain.schema.Document property query_params: str property url: str class langchain.document_loaders.GitLoader(repo_path, clone_url=None, branch='main', file_filter=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads files from a Git repository into a list of documents. The repository can be local on disk available at repo_path, or remote at clone_url that will be cloned to repo_path. Currently supports only text files. Each document represents one file in the repository. The path points to the local Git repository, and the branch specifies the branch to load files from. By default, it loads from the main branch. Parameters repo_path (str) – clone_url (Optional[str]) – branch (Optional[str]) – file_filter (Optional[Callable[[str], bool]]) – load()[source] Load data into document objects. Return type List[langchain.schema.Document]
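A short sketch of GitLoader (the repository URL and filter are illustrative):

from langchain.document_loaders import GitLoader

loader = GitLoader(
    repo_path="./example_repo",  # cloned here if not already on disk
    clone_url="https://github.com/hwchase17/langchain",
    branch="master",
    file_filter=lambda file_path: file_path.endswith(".py"),
)
docs = loader.load()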
class langchain.document_loaders.GitbookLoader(web_page, load_all_paths=False, base_url=None, content_selector='main')[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Load GitBook data. Load from either a single page, or load all (relative) paths in the navbar. Parameters web_page (str) – load_all_paths (bool) – base_url (Optional[str]) – content_selector (str) – load()[source] Fetch text from one single GitBook page. Return type List[langchain.schema.Document] class langchain.document_loaders.GoogleApiClient(credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), service_account_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'))[source] Bases: object A Generic Google Api Client. To use, you should have the google_auth_oauthlib, youtube_transcript_api and google python packages installed. As the google api expects credentials, you need to set up a google account and register your Service: https://developers.google.com/docs/api/quickstart/python Example from langchain.document_loaders import GoogleApiClient google_api_client = GoogleApiClient( service_account_path=Path("path_to_your_sec_file.json") ) Parameters credentials_path (pathlib.Path) – service_account_path (pathlib.Path) – token_path (pathlib.Path) – Return type None credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')
service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json') token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json') classmethod validate_channel_or_videoIds_is_set(values)[source] Validate that either folder_id or document_ids is set, but not both. Parameters values (Dict[str, Any]) – Return type Dict[str, Any] class langchain.document_loaders.GoogleApiYoutubeLoader(google_api_client, channel_name=None, video_ids=None, add_video_info=True, captions_language='en', continue_on_failure=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads all Videos from a Channel. To use, you should have the googleapiclient and youtube_transcript_api python packages installed. As the service needs a google_api_client, you first have to initialize the GoogleApiClient. Additionally, you have to either provide a channel name or a list of video ids: https://developers.google.com/docs/api/quickstart/python Example from langchain.document_loaders import GoogleApiClient from langchain.document_loaders import GoogleApiYoutubeLoader google_api_client = GoogleApiClient( service_account_path=Path("path_to_your_sec_file.json") ) loader = GoogleApiYoutubeLoader( google_api_client=google_api_client, channel_name="CodeAesthetic" ) loader.load() Parameters google_api_client (langchain.document_loaders.youtube.GoogleApiClient) – channel_name (Optional[str]) – video_ids (Optional[List[str]]) – add_video_info (bool) – captions_language (str) – continue_on_failure (bool) – Return type None google_api_client: langchain.document_loaders.youtube.GoogleApiClient
channel_name: Optional[str] = None video_ids: Optional[List[str]] = None add_video_info: bool = True captions_language: str = 'en' continue_on_failure: bool = False classmethod validate_channel_or_videoIds_is_set(values)[source] Validate that either channel_name or video_ids is set, but not both. Parameters values (Dict[str, Any]) – Return type Dict[str, Any] load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.GoogleDriveLoader(*, service_account_key=PosixPath('/home/docs/.credentials/keys.json'), credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'), folder_id=None, document_ids=None, file_ids=None, recursive=False, file_types=None, load_trashed_files=False, file_loader_cls=None, file_loader_kwargs={})[source] Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel Loader that loads Google Docs from Google Drive. Parameters service_account_key (pathlib.Path) – credentials_path (pathlib.Path) – token_path (pathlib.Path) – folder_id (Optional[str]) – document_ids (Optional[List[str]]) – file_ids (Optional[List[str]]) – recursive (bool) – file_types (Optional[Sequence[str]]) – load_trashed_files (bool) – file_loader_cls (Any) – file_loader_kwargs (Dict[str, Any]) – Return type None
attribute credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json') attribute document_ids: Optional[List[str]] = None attribute file_ids: Optional[List[str]] = None attribute file_loader_cls: Any = None attribute file_loader_kwargs: Dict[str, Any] = {} attribute file_types: Optional[Sequence[str]] = None attribute folder_id: Optional[str] = None attribute load_trashed_files: bool = False attribute recursive: bool = False attribute service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json') attribute token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json') load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.GutenbergLoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses urllib to load .txt web files. Parameters file_path (str) – load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.HNLoader(web_path, header_template=None, verify=True)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Load Hacker News data from either main page results or the comments page. Parameters web_path (Union[str, List[str]]) – header_template (Optional[dict]) – verify (Optional[bool]) – load()[source] Get important HN webpage information. Components are: title, content, source url, time of post, author of the post,
number of comments, and rank of the post. Return type List[langchain.schema.Document] load_comments(soup_info)[source] Load comments from a HN post. Parameters soup_info (Any) – Return type List[langchain.schema.Document] load_results(soup)[source] Load items from an HN page. Parameters soup (Any) – Return type List[langchain.schema.Document] class langchain.document_loaders.HuggingFaceDatasetLoader(path, page_content_column='text', name=None, data_dir=None, data_files=None, cache_dir=None, keep_in_memory=None, save_infos=False, use_auth_token=None, num_proc=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from the Hugging Face Hub. Parameters path (str) – page_content_column (str) – name (Optional[str]) – data_dir (Optional[str]) – data_files (Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]]) – cache_dir (Optional[str]) – keep_in_memory (Optional[bool]) – save_infos (bool) – use_auth_token (Optional[Union[bool, str]]) – num_proc (Optional[int]) – lazy_load()[source] Load documents lazily. Return type Iterator[langchain.schema.Document] load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.IFixitLoader(web_path)[source] Bases: langchain.document_loaders.base.BaseLoader
Load iFixit repair guides, device wikis and answers. iFixit is the largest, open repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY. This loader will allow you to download the text of a repair guide, text of Q&As and wikis from devices on iFixit using their open APIs and web scraping. Parameters web_path (str) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] static load_suggestions(query='', doc_type='all')[source] Parameters query (str) – doc_type (str) – Return type List[langchain.schema.Document] load_questions_and_answers(url_override=None)[source] Parameters url_override (Optional[str]) – Return type List[langchain.schema.Document] load_device(url_override=None, include_guides=True)[source] Parameters url_override (Optional[str]) – include_guides (bool) – Return type List[langchain.schema.Document] load_guide(url_override=None)[source] Parameters url_override (Optional[str]) – Return type List[langchain.schema.Document] class langchain.document_loaders.IMSDbLoader(web_path, header_template=None, verify=True)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Loader that loads IMSDb webpages. Parameters web_path (Union[str, List[str]]) – header_template (Optional[dict]) – verify (Optional[bool]) – load()[source] Load webpage. Return type List[langchain.schema.Document]
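A minimal sketch for the WebBaseLoader subclasses above (IMSDbLoader shown; the script URL is hypothetical):

from langchain.document_loaders import IMSDbLoader

loader = IMSDbLoader("https://imsdb.com/scripts/Example-Script.html")
docs = loader.load()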
class langchain.document_loaders.ImageCaptionLoader(path_images, blip_processor='Salesforce/blip-image-captioning-base', blip_model='Salesforce/blip-image-captioning-base')[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads the captions of an image. Parameters path_images (Union[str, List[str]]) – blip_processor (str) – blip_model (str) – load()[source] Load from a list of image files. Return type List[langchain.schema.Document] class langchain.document_loaders.IuguLoader(resource, api_token=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches data from IUGU. Parameters resource (str) – api_token (Optional[str]) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.JSONLoader(file_path, jq_schema, content_key=None, metadata_func=None, text_content=True)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a JSON file and references a jq schema provided to load the text into documents. Example [{"text": ...}, {"text": ...}, {"text": ...}] -> schema = .[].text {"key": [{"text": ...}, {"text": ...}, {"text": ...}]} -> schema = .key[].text ["", "", ""] -> schema = .[]
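A small sketch of JSONLoader matching the first schema above (the file name is hypothetical; the jq python package must be installed):

from langchain.document_loaders import JSONLoader

loader = JSONLoader("messages.json", jq_schema=".[].text")
docs = loader.load()  # one Document per extracted string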
https://api.python.langchain.com/en/latest/modules/document_loaders.html
98c621fca5f3-32
file_path (Union[str, pathlib.Path]) – jq_schema (str) – content_key (Optional[str]) – metadata_func (Optional[Callable[[Dict, Dict], Dict]]) – text_content (bool) – load()[source] Load and return documents from the JSON file. Return type List[langchain.schema.Document]
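A minimal usage sketch for JSONLoader, assuming a local file named chat.json shaped like the first example above (a top-level array of objects with a text field); the file name is a placeholder and the jq package is assumed to be installed:

from langchain.document_loaders import JSONLoader

# chat.json is assumed to look like: [{"text": "..."}, {"text": "..."}]
loader = JSONLoader(
    file_path="chat.json",   # hypothetical local file
    jq_schema=".[].text",    # pull the text field out of each array item
)
docs = loader.load()  # one Document per extracted string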
class langchain.document_loaders.JoplinLoader(access_token=None, port=41184, host='localhost')[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches notes from Joplin. In order to use this loader, you need to have Joplin running with the Web Clipper enabled (look for "Web Clipper" in the app settings). To get the access token, go to the Web Clipper options; under "Advanced Options" you will find the access token. You can find more information about the Web Clipper service here: https://joplinapp.org/clipper/ Parameters access_token (Optional[str]) – port (int) – host (str) – Return type None lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.MWDumpLoader(file_path, encoding='utf8')[source] Bases: langchain.document_loaders.base.BaseLoader Load a MediaWiki dump from an XML file. Example:
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader(file_path="myWiki.xml", encoding="utf8")
docs = loader.load()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(docs)
Parameters file_path (str) – XML local file path encoding (str, optional) – Charset encoding, defaults to "utf8" load()[source] Load from file path. Return type List[langchain.schema.Document] class langchain.document_loaders.MastodonTootsLoader(mastodon_accounts, number_toots=100, exclude_replies=False, access_token=None, api_base_url='https://mastodon.social')[source] Bases: langchain.document_loaders.base.BaseLoader Mastodon toots loader. Parameters mastodon_accounts (Sequence[str]) – number_toots (Optional[int]) – exclude_replies (bool) – access_token (Optional[str]) – api_base_url (str) – load()[source] Load toots into documents. Return type List[langchain.schema.Document] class langchain.document_loaders.MathpixPDFLoader(file_path, processed_file_format='mmd', max_wait_time_seconds=500, should_clean_pdf=False, **kwargs)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Parameters file_path (str) – processed_file_format (str) – max_wait_time_seconds (int) – should_clean_pdf (bool) – kwargs (Any) – Return type None property headers: dict property url: str
property data: dict send_pdf()[source] Return type str wait_for_processing(pdf_id)[source] Parameters pdf_id (str) – Return type None get_processed_pdf(pdf_id)[source] Parameters pdf_id (str) – Return type str clean_pdf(contents)[source] Parameters contents (str) – Return type str load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.MaxComputeLoader(query, api_wrapper, *, page_content_columns=None, metadata_columns=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from an Alibaba Cloud MaxCompute table into documents. Parameters query (str) – api_wrapper (MaxComputeAPIWrapper) – page_content_columns (Optional[Sequence[str]]) – metadata_columns (Optional[Sequence[str]]) – classmethod from_params(query, endpoint, project, *, access_id=None, secret_access_key=None, **kwargs)[source] Convenience constructor that builds the MaxCompute API wrapper from given parameters. Parameters query (str) – SQL query to execute. endpoint (str) – MaxCompute endpoint. project (str) – A project is a basic organizational unit of MaxCompute, which is similar to a database. access_id (Optional[str]) – MaxCompute access ID. Should be passed in directly or set as the environment variable MAX_COMPUTE_ACCESS_ID. secret_access_key (Optional[str]) – MaxCompute secret access key. Should be passed in directly or set as the environment variable MAX_COMPUTE_SECRET_ACCESS_KEY. kwargs (Any) –
Return type langchain.document_loaders.max_compute.MaxComputeLoader lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.MergedDataLoader(loaders)[source] Bases: langchain.document_loaders.base.BaseLoader Merge documents from a list of loaders Parameters loaders (List) – lazy_load()[source] Lazy load docs from each individual loader. Return type Iterator[langchain.schema.Document] load()[source] Load docs. Return type List[langchain.schema.Document] class langchain.document_loaders.MHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses beautiful soup to parse HTML files. Parameters file_path (str) – open_encoding (Optional[str]) – bs_kwargs (Optional[dict]) – get_text_separator (str) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.ModernTreasuryLoader(resource, organization_id=None, api_key=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches data from Modern Treasury. Parameters resource (str) – organization_id (Optional[str]) – api_key (Optional[str]) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document]
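A short, hedged sketch of combining several loaders with MergedDataLoader (documented above); both input files are placeholders, and any BaseLoader instances can be merged:

from langchain.document_loaders import MergedDataLoader, PyPDFLoader, TextLoader

# Hypothetical inputs: a plain-text file and a PDF.
loader_all = MergedDataLoader(loaders=[TextLoader("notes.txt"), PyPDFLoader("paper.pdf")])
docs = loader_all.load()  # documents from every wrapped loader, concatenated in order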
class langchain.document_loaders.NotebookLoader(path, include_outputs=False, max_output_length=10, remove_newline=False, traceback=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads .ipynb notebook files. Parameters path (str) – include_outputs (bool) – max_output_length (int) – remove_newline (bool) – traceback (bool) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.NotionDBLoader(integration_token, database_id, request_timeout_sec=10)[source] Bases: langchain.document_loaders.base.BaseLoader Notion DB Loader. Reads content from pages within a Notion Database. Parameters integration_token (str) – Notion integration token. database_id (str) – Notion database id. request_timeout_sec (Optional[int]) – Timeout for Notion requests in seconds. Return type None load()[source] Load documents from the Notion database. Returns List of documents. Return type List[langchain.schema.Document] load_page(page_summary)[source] Read a page. Parameters page_summary (Dict[str, Any]) – Return type langchain.schema.Document class langchain.document_loaders.NotionDirectoryLoader(path)[source]
Bases: langchain.document_loaders.base.BaseLoader Loader that loads Notion directory dump. Parameters path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.ObsidianLoader(path, encoding='UTF-8', collect_metadata=True)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Obsidian files from disk. Parameters path (str) – encoding (str) – collect_metadata (bool) – FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL) load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.OneDriveFileLoader(*, file)[source] Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel Parameters file (File) – Return type None attribute file: File [Required] load()[source] Load Documents Return type List[langchain.schema.Document] class langchain.document_loaders.OneDriveLoader(*, settings=None, drive_id, folder_path=None, object_ids=None, auth_with_token=False)[source] Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel Parameters settings (langchain.document_loaders.onedrive._OneDriveSettings) – drive_id (str) – folder_path (Optional[str]) – object_ids (Optional[List[str]]) – auth_with_token (bool) – Return type None attribute auth_with_token: bool = False attribute drive_id: str [Required]
attribute folder_path: Optional[str] = None attribute object_ids: Optional[List[str]] = None attribute settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional] load()[source] Loads all supported document files from the specified OneDrive drive and returns a list of Document objects. Returns A list of Document objects representing the loaded documents. Return type List[Document] Raises ValueError – If the specified drive ID does not correspond to a drive in the OneDrive storage. class langchain.document_loaders.OnlinePDFLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that loads online PDFs. Parameters file_path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.OutlookMessageLoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Outlook Message files using extract_msg. https://github.com/TeamMsgExtractor/msg-extractor Parameters file_path (str) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.OpenCityDataLoader(city_id, dataset_id, limit)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Open city data. Parameters city_id (str) – dataset_id (str) – limit (int) – lazy_load()[source] Lazy load records. Return type Iterator[langchain.schema.Document] load()[source] Load records. Return type
List[langchain.schema.Document] class langchain.document_loaders.PDFMinerLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that uses PDFMiner to load PDF files. Parameters file_path (str) – Return type None load()[source] Eagerly load the content. Return type List[langchain.schema.Document] lazy_load()[source] Lazily load documents. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.PDFMinerPDFasHTMLLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that uses PDFMiner to load PDF files as HTML content. Parameters file_path (str) – load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.PDFPlumberLoader(file_path, text_kwargs=None)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that uses pdfplumber to load PDF files. Parameters file_path (str) – text_kwargs (Optional[Mapping[str, Any]]) – Return type None load()[source] Load file. Return type List[langchain.schema.Document] langchain.document_loaders.PagedPDFSplitter alias of langchain.document_loaders.pdf.PyPDFLoader class langchain.document_loaders.PlaywrightURLLoader(urls, continue_on_failure=True, headless=True, remove_selectors=None)[source] Bases: langchain.document_loaders.base.BaseLoader
Loader that uses Playwright to load a page and unstructured to parse the resulting html. This is useful for loading pages that require javascript to render. Parameters urls (List[str]) – continue_on_failure (bool) – headless (bool) – remove_selectors (Optional[List[str]]) – urls List of URLs to load. Type List[str] continue_on_failure If True, continue loading other URLs on failure. Type bool headless If True, the browser will run in headless mode. Type bool load()[source] Load the specified URLs using Playwright and create Document instances. Returns A list of Document instances with loaded content. Return type List[Document] class langchain.document_loaders.PsychicLoader(api_key, connector_id, connection_id)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads documents from Psychic.dev. Parameters api_key (str) – connector_id (str) – connection_id (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.PyMuPDFLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that uses PyMuPDF to load PDF files. Parameters file_path (str) – Return type None load(**kwargs)[source] Load file. Parameters kwargs (Optional[Any]) – Return type List[langchain.schema.Document]
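For example, a minimal PyMuPDFLoader sketch; the PDF path is a placeholder, and any extra keyword arguments would be passed through load(**kwargs) as shown in the signature above:

from langchain.document_loaders import PyMuPDFLoader

loader = PyMuPDFLoader("example.pdf")  # hypothetical local PDF
docs = loader.load()                   # returns a list of Document objects
print(docs[0].metadata)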
class langchain.document_loaders.PyPDFDirectoryLoader(path, glob='**/[!.]*.pdf', silent_errors=False, load_hidden=False, recursive=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a directory of PDF files with pypdf and chunks them at character level. The loader also stores page numbers in metadata. Parameters path (str) – glob (str) – silent_errors (bool) – load_hidden (bool) – recursive (bool) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.PyPDFLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loads a PDF with pypdf and chunks it at character level. The loader also stores page numbers in metadata. Parameters file_path (str) – Return type None load()[source] Load given path as pages. Return type List[langchain.schema.Document] lazy_load()[source] Lazy load given path as pages. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.PyPDFium2Loader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loads a PDF with pypdfium2 and chunks it at character level. Parameters file_path (str) – load()[source] Load given path as pages. Return type List[langchain.schema.Document] lazy_load()[source] Lazy load given path as pages. Return type Iterator[langchain.schema.Document]
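A sketch of page-wise lazy loading with PyPDFLoader; the file name is illustrative, and the "page" metadata key follows the "stores page numbers in metadata" note above:

from langchain.document_loaders import PyPDFLoader

loader = PyPDFLoader("example.pdf")  # hypothetical local PDF
for page in loader.lazy_load():      # pages are yielded one at a time
    print(page.metadata.get("page"), len(page.page_content))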
class langchain.document_loaders.PySparkDataFrameLoader(spark_session=None, df=None, page_content_column='text', fraction_of_memory=0.1)[source] Bases: langchain.document_loaders.base.BaseLoader Load PySpark DataFrames. Parameters spark_session (Optional[SparkSession]) – df (Optional[Any]) – page_content_column (str) – fraction_of_memory (float) – get_num_rows()[source] Gets the number of "feasible" rows for the DataFrame. Return type Tuple[int, int] lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load from the dataframe. Return type List[langchain.schema.Document] class langchain.document_loaders.PythonLoader(file_path)[source] Bases: langchain.document_loaders.text.TextLoader Load Python files, respecting any non-default encoding if specified. Parameters file_path (str) – class langchain.document_loaders.ReadTheDocsLoader(path, encoding=None, errors=None, custom_html_tag=None, **kwargs)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads a ReadTheDocs documentation directory dump. Parameters path (Union[str, pathlib.Path]) – encoding (Optional[str]) – errors (Optional[str]) – custom_html_tag (Optional[Tuple[str, dict]]) – kwargs (Optional[Any]) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.RecursiveUrlLoader(url, exclude_dirs=None)[source]
Bases: langchain.document_loaders.base.BaseLoader Loader that loads all child links from a given url. Parameters url (str) – exclude_dirs (Optional[str]) – Return type None get_child_links_recursive(url, visited=None)[source] Recursively get all child links starting with the path of the input URL. Parameters url (str) – visited (Optional[Set[str]]) – Return type Set[str] lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load web pages. Return type List[langchain.schema.Document] class langchain.document_loaders.RedditPostsLoader(client_id, client_secret, user_agent, search_queries, mode, categories=['new'], number_posts=10)[source] Bases: langchain.document_loaders.base.BaseLoader Reddit posts loader. Read posts on a subreddit. First you need to go to https://www.reddit.com/prefs/apps/ and create your application Parameters client_id (str) – client_secret (str) – user_agent (str) – search_queries (Sequence[str]) – mode (str) – categories (Sequence[str]) – number_posts (Optional[int]) – load()[source] Load reddits. Return type List[langchain.schema.Document] class langchain.document_loaders.RoamLoader(path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Roam files from disk. Parameters path (str) – load()[source] Load documents. Return type List[langchain.schema.Document]
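A hedged sketch of crawling a documentation site with RecursiveUrlLoader (documented above); the URL is a placeholder:

from langchain.document_loaders import RecursiveUrlLoader

loader = RecursiveUrlLoader(url="https://docs.example.com/api/")  # hypothetical site
docs = loader.load()  # one Document per child page discovered under the URL's path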
class langchain.document_loaders.S3DirectoryLoader(bucket, prefix='')[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for documents from s3. Parameters bucket (str) – prefix (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.S3FileLoader(bucket, key)[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for documents from s3. Parameters bucket (str) – key (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.SRTLoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader for .srt (subtitle) files. Parameters file_path (str) – load()[source] Load using the pysrt library. Return type List[langchain.schema.Document] class langchain.document_loaders.SeleniumURLLoader(urls, continue_on_failure=True, browser='chrome', binary_location=None, executable_path=None, headless=True, arguments=[])[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses Selenium to load a page and unstructured to parse the resulting html. This is useful for loading pages that require javascript to render. Parameters urls (List[str]) – continue_on_failure (bool) – browser (Literal['chrome', 'firefox']) – binary_location (Optional[str]) – executable_path (Optional[str]) – headless (bool) – arguments (List[str]) –
urls List of URLs to load. Type List[str] continue_on_failure If True, continue loading other URLs on failure. Type bool browser The browser to use, either 'chrome' or 'firefox'. Type str binary_location The location of the browser binary. Type Optional[str] executable_path The path to the browser executable. Type Optional[str] headless If True, the browser will run in headless mode. Type bool arguments [List[str]] List of arguments to pass to the browser. load()[source] Load the specified URLs using Selenium and create Document instances. Returns A list of Document instances with loaded content. Return type List[Document] class langchain.document_loaders.SitemapLoader(web_path, filter_urls=None, parsing_function=None, blocksize=None, blocknum=0, meta_function=None, is_local=False)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Loader that fetches a sitemap and loads those URLs. Parameters web_path (str) – filter_urls (Optional[List[str]]) – parsing_function (Optional[Callable]) – blocksize (Optional[int]) – blocknum (int) – meta_function (Optional[Callable]) – is_local (bool) – parse_sitemap(soup)[source] Parse sitemap xml and load into a list of dicts. Parameters soup (Any) – Return type List[dict] load()[source] Load sitemap. Return type List[langchain.schema.Document]
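For instance, a minimal SitemapLoader sketch; the sitemap URL and the filter pattern are placeholders:

from langchain.document_loaders import SitemapLoader

loader = SitemapLoader(
    web_path="https://example.com/sitemap.xml",  # hypothetical sitemap
    filter_urls=["https://example.com/blog/"],   # restrict which entries are loaded
)
docs = loader.load()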
class langchain.document_loaders.SlackDirectoryLoader(zip_path, workspace_url=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader for loading documents from a Slack directory dump. Parameters zip_path (str) – workspace_url (Optional[str]) – load()[source] Load and return documents from the Slack directory dump. Return type List[langchain.schema.Document] class langchain.document_loaders.SnowflakeLoader(query, user, password, account, warehouse, role, database, schema, parameters=None, page_content_columns=None, metadata_columns=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from Snowflake into a list of documents. Each document represents one row of the result. The page_content_columns are written into the page_content of the document. The metadata_columns are written into the metadata of the document. By default, all columns are written into the page_content and none into the metadata. Parameters query (str) – user (str) – password (str) – account (str) – warehouse (str) – role (str) – database (str) – schema (str) – parameters (Optional[Dict[str, Any]]) – page_content_columns (Optional[List[str]]) – metadata_columns (Optional[List[str]]) – lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load data into document objects. Return type List[langchain.schema.Document]
class langchain.document_loaders.SpreedlyLoader(access_token, resource)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches data from Spreedly API. Parameters access_token (str) – resource (str) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.StripeLoader(resource, access_token=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches data from Stripe. Parameters resource (str) – access_token (Optional[str]) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.TelegramChatApiLoader(chat_entity=None, api_id=None, api_hash=None, username=None, file_path='telegram_data.json')[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Telegram chat json directory dump. Parameters chat_entity (Optional[EntityLike]) – api_id (Optional[int]) – api_hash (Optional[str]) – username (Optional[str]) – file_path (str) – async fetch_data_from_telegram()[source] Fetch data from Telegram API and save it as a JSON file. Return type None load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.TelegramChatFileLoader(path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Telegram chat json directory dump. Parameters path (str) – load()[source] Load documents.
Return type List[langchain.schema.Document] langchain.document_loaders.TelegramChatLoader alias of langchain.document_loaders.telegram.TelegramChatFileLoader class langchain.document_loaders.TextLoader(file_path, encoding=None, autodetect_encoding=False)[source] Bases: langchain.document_loaders.base.BaseLoader Load text files. Parameters file_path (str) – Path to the file to load. encoding (Optional[str]) – File encoding to use. If None, the file will be loaded with the default system encoding. autodetect_encoding (bool) – Whether to try to autodetect the file encoding if the specified encoding fails. load()[source] Load from file path. Return type List[langchain.schema.Document] class langchain.document_loaders.ToMarkdownLoader(url, api_key)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads HTML to markdown using 2markdown. Parameters url (str) – api_key (str) – lazy_load()[source] Lazily load the file. Return type Iterator[langchain.schema.Document] load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.TomlLoader(source)[source] Bases: langchain.document_loaders.base.BaseLoader A TOML document loader that inherits from the BaseLoader class. This class can be initialized with either a single source file or a source directory containing TOML files. Parameters source (Union[str, pathlib.Path]) –
load()[source] Load and return all documents. Return type List[langchain.schema.Document] lazy_load()[source] Lazily load the TOML documents from the source file or directory. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.TrelloLoader(client, board_name, *, include_card_name=True, include_comments=True, include_checklist=True, card_filter='all', extra_metadata=('due_date', 'labels', 'list', 'closed'))[source] Bases: langchain.document_loaders.base.BaseLoader Trello loader. Reads all cards from a Trello board. Parameters client (TrelloClient) – board_name (str) – include_card_name (bool) – include_comments (bool) – include_checklist (bool) – card_filter (Literal['closed', 'open', 'all']) – extra_metadata (Tuple[str, ...]) – classmethod from_credentials(board_name, *, api_key=None, token=None, **kwargs)[source] Convenience constructor that builds TrelloClient init param for you. Parameters board_name (str) – The name of the Trello board. api_key (Optional[str]) – Trello API key. Can also be specified as environment variable TRELLO_API_KEY. token (Optional[str]) – Trello token. Can also be specified as environment variable TRELLO_TOKEN. include_card_name – Whether to include the name of the card in the document. include_comments – Whether to include the comments on the card in the document. include_checklist – Whether to include the checklist on the card in the document. card_filter – Filter on card status. Valid values are β€œclosed”, β€œopen”, β€œall”.
"all". extra_metadata – List of additional metadata fields to include as document metadata. Valid values are "due_date", "labels", "list", "closed". kwargs (Any) – Return type langchain.document_loaders.trello.TrelloLoader load()[source] Loads all cards from the specified Trello board. You can filter the cards, metadata and text included by using the optional parameters. Returns: A list of documents, one for each card in the board. Return type List[langchain.schema.Document] class langchain.document_loaders.TwitterTweetLoader(auth_handler, twitter_users, number_tweets=100)[source] Bases: langchain.document_loaders.base.BaseLoader Twitter tweets loader. Read tweets from a user's Twitter handle. First you need to go to https://developer.twitter.com/en/docs/twitter-api/getting-started/getting-access-to-the-twitter-api to get your token and create a v2 version of the app. Parameters auth_handler (Union[OAuthHandler, OAuth2BearerHandler]) – twitter_users (Sequence[str]) – number_tweets (Optional[int]) – load()[source] Load tweets. Return type List[langchain.schema.Document] classmethod from_bearer_token(oauth2_bearer_token, twitter_users, number_tweets=100)[source] Create a TwitterTweetLoader from an OAuth2 bearer token. Parameters oauth2_bearer_token (str) – twitter_users (Sequence[str]) – number_tweets (Optional[int]) – Return type langchain.document_loaders.twitter.TwitterTweetLoader classmethod from_secrets(access_token, access_token_secret, consumer_key, consumer_secret, twitter_users, number_tweets=100)[source]
Create a TwitterTweetLoader from access tokens and secrets. Parameters access_token (str) – access_token_secret (str) – consumer_key (str) – consumer_secret (str) – twitter_users (Sequence[str]) – number_tweets (Optional[int]) – Return type langchain.document_loaders.twitter.TwitterTweetLoader class langchain.document_loaders.UnstructuredAPIFileIOLoader(file, mode='single', url='https://api.unstructured.io/general/v0/general', api_key='', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileIOLoader Loader that uses the unstructured web API to load file IO objects. Parameters file (Union[IO, Sequence[IO]]) – mode (str) – url (str) – api_key (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredAPIFileLoader(file_path='', mode='single', url='https://api.unstructured.io/general/v0/general', api_key='', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses the unstructured web API to load files. Parameters file_path (Union[str, List[str]]) – mode (str) – url (str) – api_key (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredCSVLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load CSV files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) –
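A hedged sketch of the hosted Unstructured API loader documented above; the file name and API key are placeholders:

from langchain.document_loaders import UnstructuredAPIFileLoader

loader = UnstructuredAPIFileLoader(
    file_path="contract.docx",            # hypothetical input file
    mode="elements",                      # one Document per detected element
    api_key="YOUR_UNSTRUCTURED_API_KEY",  # placeholder credential
)
docs = loader.load()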
class langchain.document_loaders.UnstructuredEPubLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load epub files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredEmailLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load email files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredExcelLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load Microsoft Excel files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredFileIOLoader(file, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredBaseLoader Loader that uses unstructured to load file IO objects. Parameters file (Union[IO, Sequence[IO]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredFileLoader(file_path, mode='single', **unstructured_kwargs)[source]
Bases: langchain.document_loaders.unstructured.UnstructuredBaseLoader Loader that uses unstructured to load files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredHTMLLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load HTML files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredImageLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load image files, such as PNGs and JPGs. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredMarkdownLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load markdown files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredODTLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load open office ODT files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) –
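A minimal local UnstructuredFileLoader sketch contrasting the two common modes; the file path is illustrative:

from langchain.document_loaders import UnstructuredFileLoader

# mode="single" (the default) merges everything into one Document;
# mode="elements" keeps each detected element (title, narrative text, ...) separate.
loader = UnstructuredFileLoader("report.pdf", mode="elements")  # hypothetical file
docs = loader.load()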
class langchain.document_loaders.UnstructuredPDFLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load PDF files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredPowerPointLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load PowerPoint files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredRSTLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load RST files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredRTFLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load RTF files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredURLLoader(urls, continue_on_failure=True, mode='single', show_progress_bar=False, **unstructured_kwargs)[source] Bases: langchain.document_loaders.base.BaseLoader
Loader that uses unstructured to load HTML files. Parameters urls (List[str]) – continue_on_failure (bool) – mode (str) – show_progress_bar (bool) – unstructured_kwargs (Any) – load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.UnstructuredWordDocumentLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load word documents. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredXMLLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load XML files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.WeatherDataLoader(client, places)[source] Bases: langchain.document_loaders.base.BaseLoader Weather Reader. Reads the forecast & current weather of any location using OpenWeatherMap's free API. Check out https://openweathermap.org/appid for more on how to generate a free OpenWeatherMap API key. Parameters client (OpenWeatherMapAPIWrapper) – places (Sequence[str]) – Return type None classmethod from_params(places, *, openweathermap_api_key=None)[source] Parameters places (Sequence[str]) – openweathermap_api_key (Optional[str]) – Return type
langchain.document_loaders.weather.WeatherDataLoader lazy_load()[source] Lazily load weather data for the given locations. Return type Iterator[langchain.schema.Document] load()[source] Load weather data for the given locations. Return type List[langchain.schema.Document] class langchain.document_loaders.WebBaseLoader(web_path, header_template=None, verify=True)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses urllib and beautiful soup to load webpages. Parameters web_path (Union[str, List[str]]) – header_template (Optional[dict]) – verify (Optional[bool]) – requests_per_second: int = 2 Max number of concurrent requests to make. default_parser: str = 'html.parser' Default parser to use for BeautifulSoup. requests_kwargs: Dict[str, Any] = {} kwargs for requests bs_get_text_kwargs: Dict[str, Any] = {} kwargs for beautifulsoup4 get_text web_paths: List[str] property web_path: str async fetch_all(urls)[source] Fetch all urls concurrently with rate limiting. Parameters urls (List[str]) – Return type Any scrape_all(urls, parser=None)[source] Fetch all urls, then return soups for all results. Parameters urls (List[str]) – parser (Optional[str]) – Return type List[Any] scrape(parser=None)[source] Scrape data from webpage and return it in BeautifulSoup format. Parameters parser (Optional[str]) – Return type Any lazy_load()[source]
Lazy load text from the url(s) in web_path. Return type Iterator[langchain.schema.Document] load()[source] Load text from the url(s) in web_path. Return type List[langchain.schema.Document] aload()[source] Load text from the urls in web_path asynchronously into Documents. Return type List[langchain.schema.Document] class langchain.document_loaders.WhatsAppChatLoader(path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads a WhatsApp messages text file. Parameters path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.WikipediaLoader(query, lang='en', load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from www.wikipedia.org into a list of Documents. The hard limit on the number of downloaded Documents is 300 for now. Each wiki page represents one Document. Parameters query (str) – lang (str) – load_max_docs (Optional[int]) – load_all_available_meta (Optional[bool]) – doc_content_chars_max (Optional[int]) – load()[source] Loads the query result from Wikipedia into a list of Documents. Returns A list of Document objects representing the loaded Wikipedia pages. Return type List[Document] class langchain.document_loaders.YoutubeAudioLoader(urls, save_dir)[source] Bases: langchain.document_loaders.blob_loaders.schema.BlobLoader
Load YouTube urls as audio file(s). Parameters urls (List[str]) – save_dir (str) – yield_blobs()[source] Yield audio blobs for each url. Return type Iterable[langchain.document_loaders.blob_loaders.schema.Blob] class langchain.document_loaders.YoutubeLoader(video_id, add_video_info=False, language='en', translation='en', continue_on_failure=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Youtube transcripts. Parameters video_id (str) – add_video_info (bool) – language (Union[str, Sequence[str]]) – translation (str) – continue_on_failure (bool) – static extract_video_id(youtube_url)[source] Extract video id from common YT urls. Parameters youtube_url (str) – Return type str classmethod from_youtube_url(youtube_url, **kwargs)[source] Given youtube URL, load video. Parameters youtube_url (str) – kwargs (Any) – Return type langchain.document_loaders.youtube.YoutubeLoader load()[source] Load documents. Return type List[langchain.schema.Document]
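For example, a hedged YoutubeLoader sketch using the from_youtube_url convenience constructor; the video URL is a placeholder, and add_video_info may require extra packages:

from langchain.document_loaders import YoutubeLoader

loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=abc123xyz00",  # hypothetical video URL
    add_video_info=True,  # also attach title/author metadata when available
)
docs = loader.load()  # the transcript as Document(s)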
Experimental This module contains experimental modules and reproductions of existing work using LangChain primitives. Autonomous agents Here, we document the BabyAGI and AutoGPT classes from the langchain.experimental module. class langchain.experimental.BabyAGI(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, task_list=None, task_creation_chain, task_prioritization_chain, execution_chain, task_id_counter=1, vectorstore, max_iterations=None)[source] Bases: langchain.chains.base.Chain, pydantic.main.BaseModel Controller model for the BabyAGI agent. Parameters memory (Optional[langchain.schema.BaseMemory]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – verbose (bool) – tags (Optional[List[str]]) – task_list (collections.deque) – task_creation_chain (langchain.chains.base.Chain) – task_prioritization_chain (langchain.chains.base.Chain) – execution_chain (langchain.chains.base.Chain) – task_id_counter (int) – vectorstore (langchain.vectorstores.base.VectorStore) – max_iterations (Optional[int]) – Return type None model Config[source] Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True property input_keys: List[str] Input keys this chain expects. property output_keys: List[str] Output keys this chain expects. get_next_task(result, task_description, objective)[source] Get the next task. Parameters result (str) – task_description (str) –
objective (str) – Return type List[Dict] prioritize_tasks(this_task_id, objective)[source] Prioritize tasks. Parameters this_task_id (int) – objective (str) – Return type List[Dict] execute_task(objective, task, k=5)[source] Execute a task. Parameters objective (str) – task (str) – k (int) – Return type str classmethod from_llm(llm, vectorstore, verbose=False, task_execution_chain=None, **kwargs)[source] Initialize the BabyAGI Controller. Parameters llm (langchain.base_language.BaseLanguageModel) – vectorstore (langchain.vectorstores.base.VectorStore) – verbose (bool) – task_execution_chain (Optional[langchain.chains.base.Chain]) – kwargs (Dict[str, Any]) – Return type langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI class langchain.experimental.AutoGPT(ai_name, memory, chain, output_parser, tools, feedback_tool=None, chat_history_memory=None)[source] Bases: object Agent class for interacting with Auto-GPT. Parameters ai_name (str) – memory (VectorStoreRetriever) – chain (LLMChain) – output_parser (BaseAutoGPTOutputParser) – tools (List[BaseTool]) – feedback_tool (Optional[HumanInputRun]) – chat_history_memory (Optional[BaseChatMessageHistory]) – Generative agents Here, we document the GenerativeAgent and GenerativeAgentMemory classes from the langchain.experimental module.
class langchain.experimental.GenerativeAgent(*, name, age=None, traits='N/A', status, memory, llm, verbose=False, summary='', summary_refresh_seconds=3600, last_refreshed=None, daily_summaries=None)[source] Bases: pydantic.main.BaseModel A character with memory and innate characteristics. Parameters name (str) – age (Optional[int]) – traits (str) – status (str) – memory (langchain.experimental.generative_agents.memory.GenerativeAgentMemory) – llm (langchain.base_language.BaseLanguageModel) – verbose (bool) – summary (str) – summary_refresh_seconds (int) – last_refreshed (datetime.datetime) – daily_summaries (List[str]) – Return type None attribute name: str [Required] The character’s name. attribute age: Optional[int] = None The optional age of the character. attribute traits: str = 'N/A' Permanent traits to ascribe to the character. attribute status: str [Required] The traits of the character you wish not to change. attribute memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory [Required] The memory object that combines relevance, recency, and β€˜importance’. attribute llm: langchain.base_language.BaseLanguageModel [Required] The underlying language model. attribute summary: str = '' Stateful self-summary generated via reflection on the character’s memory. attribute summary_refresh_seconds: int = 3600 How frequently to re-generate the summary. attribute last_refreshed: datetime.datetime [Optional] The last time the character’s summary was regenerated.
attribute daily_summaries: List[str] [Optional] Summary of the events in the plan that the agent took. model Config[source] Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True summarize_related_memories(observation)[source] Summarize memories that are most relevant to an observation. Parameters observation (str) – Return type str generate_reaction(observation, now=None)[source] React to a given observation. Parameters observation (str) – now (Optional[datetime.datetime]) – Return type Tuple[bool, str] generate_dialogue_response(observation, now=None)[source] Generate a dialogue response to a given observation. Parameters observation (str) – now (Optional[datetime.datetime]) – Return type Tuple[bool, str] get_summary(force_refresh=False, now=None)[source] Return a descriptive summary of the agent. Parameters force_refresh (bool) – now (Optional[datetime.datetime]) – Return type str get_full_header(force_refresh=False, now=None)[source] Return a full header of the agent's status, summary, and current time. Parameters force_refresh (bool) –
now (Optional[datetime.datetime]) – Return type str class langchain.experimental.GenerativeAgentMemory(*, llm, memory_retriever, verbose=False, reflection_threshold=None, current_plan=[], importance_weight=0.15, aggregate_importance=0.0, max_tokens_limit=1200, queries_key='queries', most_recent_memories_token_key='recent_memories_token', add_memory_key='add_memory', relevant_memories_key='relevant_memories', relevant_memories_simple_key='relevant_memories_simple', most_recent_memories_key='most_recent_memories', now_key='now', reflecting=False)[source] Bases: langchain.schema.BaseMemory Parameters llm (langchain.base_language.BaseLanguageModel) – memory_retriever (langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever) – verbose (bool) – reflection_threshold (Optional[float]) – current_plan (List[str]) – importance_weight (float) – aggregate_importance (float) – max_tokens_limit (int) – queries_key (str) – most_recent_memories_token_key (str) – add_memory_key (str) – relevant_memories_key (str) – relevant_memories_simple_key (str) – most_recent_memories_key (str) – now_key (str) – reflecting (bool) – Return type None attribute llm: langchain.base_language.BaseLanguageModel [Required] The core language model. attribute memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required] The retriever to fetch related memories. attribute reflection_threshold: Optional[float] = None
When aggregate_importance exceeds reflection_threshold, stop to reflect. attribute current_plan: List[str] = [] The current plan of the agent. attribute importance_weight: float = 0.15 How much weight to assign the memory importance. attribute aggregate_importance: float = 0.0 Track the sum of the 'importance' of recent memories. Triggers reflection when it reaches reflection_threshold. pause_to_reflect(now=None)[source] Reflect on recent observations and generate 'insights'. Parameters now (Optional[datetime.datetime]) – Return type List[str] add_memories(memory_content, now=None)[source] Add observations or memories to the agent's memory. Parameters memory_content (str) – now (Optional[datetime.datetime]) – Return type List[str] add_memory(memory_content, now=None)[source] Add an observation or memory to the agent's memory. Parameters memory_content (str) – now (Optional[datetime.datetime]) – Return type List[str] fetch_memories(observation, now=None)[source] Fetch related memories. Parameters observation (str) – now (Optional[datetime.datetime]) – Return type List[langchain.schema.Document] property memory_variables: List[str] Input keys this memory class will load dynamically. load_memory_variables(inputs)[source] Return key-value pairs given the text input to the chain. Parameters inputs (Dict[str, Any]) – Return type Dict[str, str] save_context(inputs, outputs)[source] Save the context of this model run to memory.
Parameters inputs (Dict[str, Any]) – outputs (Dict[str, Any]) – Return type None clear()[source] Clear memory contents. Return type None
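To tie the generative-agent classes together, a rough construction sketch; the LLM choice, FAISS vector store, and seed memory are assumptions that will vary with your stack (an OpenAI key and the faiss package are assumed to be available):

from langchain.embeddings import OpenAIEmbeddings
from langchain.experimental import GenerativeAgent, GenerativeAgentMemory
from langchain.llms import OpenAI
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.vectorstores import FAISS

llm = OpenAI()  # assumes OPENAI_API_KEY is set; any BaseLanguageModel should work
vectorstore = FAISS.from_texts(["placeholder"], OpenAIEmbeddings())  # seed text is arbitrary
retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore)

memory = GenerativeAgentMemory(llm=llm, memory_retriever=retriever, reflection_threshold=8.0)
agent = GenerativeAgent(
    name="Ada", age=30, traits="curious, patient",
    status="reading a paper", memory=memory, llm=llm,
)
memory.add_memory("Ada finished reading a paper on PDF parsing.")
print(agent.get_summary(force_refresh=True))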
Utilities General utilities. class langchain.utilities.ApifyWrapper(*, apify_client=None, apify_client_async=None)[source] Bases: pydantic.main.BaseModel Wrapper around Apify. To use, you should have the apify-client python package installed, and the environment variable APIFY_API_TOKEN set with your API key, or pass apify_api_token as a named parameter to the constructor. Parameters apify_client (Any) – apify_client_async (Any) – Return type None attribute apify_client: Any = None attribute apify_client_async: Any = None async acall_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source] Run an Actor on the Apify platform and wait for results to be ready. Parameters actor_id (str) – The ID or name of the Actor on the Apify platform. run_input (Dict) – The input object of the Actor that you're trying to run. dataset_mapping_function (Callable) – A function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class. build (str, optional) – Optionally specifies the actor build to run. It can be either a build tag or build number. memory_mbytes (int, optional) – Optional memory limit for the run, in megabytes. timeout_secs (int, optional) – Optional timeout for the run, in seconds. Returns A loader that will fetch the records from the Actor run's default dataset. Return type ApifyDatasetLoader call_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]
Run an Actor on the Apify platform and wait for results to be ready. Parameters actor_id (str) – The ID or name of the Actor on the Apify platform. run_input (Dict) – The input object of the Actor that you're trying to run. dataset_mapping_function (Callable) – A function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class. build (str, optional) – Optionally specifies the actor build to run. It can be either a build tag or build number. memory_mbytes (int, optional) – Optional memory limit for the run, in megabytes. timeout_secs (int, optional) – Optional timeout for the run, in seconds. Returns A loader that will fetch the records from the Actor run's default dataset. Return type ApifyDatasetLoader class langchain.utilities.ArxivAPIWrapper(*, arxiv_search=None, arxiv_exceptions=None, top_k_results=3, load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000, ARXIV_MAX_QUERY_LENGTH=300)[source] Bases: pydantic.main.BaseModel Wrapper around ArxivAPI. To use, you should have the arxiv python package installed. https://lukasschwab.me/arxiv.py/index.html This wrapper will use the Arxiv API to conduct searches and fetch document summaries. By default, it will return the document summaries of the top-k results. It limits the Document content by doc_content_chars_max. Set doc_content_chars_max=None if you don't want to limit the content size. Parameters top_k_results (int) – number of the top-scored documents used for the arxiv tool
ARXIV_MAX_QUERY_LENGTH (int) – the cut-off limit on the query used for the arxiv tool. load_max_docs (int) – a limit to the number of loaded documents load_all_available_meta (bool) – if True: the metadata of the loaded Documents gets all available meta info (see https://lukasschwab.me/arxiv.py/index.html#Result), if False: the metadata gets only the most informative fields. arxiv_search (Any) – arxiv_exceptions (Any) – doc_content_chars_max (Optional[int]) – Return type None attribute arxiv_exceptions: Any = None attribute doc_content_chars_max: Optional[int] = 4000 attribute load_all_available_meta: bool = False attribute load_max_docs: int = 100 attribute top_k_results: int = 3 load(query)[source] Run Arxiv search and get the article texts plus the article meta information. See https://lukasschwab.me/arxiv.py/index.html#Search Returns: a list of documents with the document.page_content in text format Parameters query (str) – Return type List[langchain.schema.Document] run(query)[source] Run Arxiv search and get the article meta information. See https://lukasschwab.me/arxiv.py/index.html#Search See https://lukasschwab.me/arxiv.py/index.html#Result It uses only the most informative fields of article meta information. Parameters query (str) – Return type str
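For instance, a brief ArxivAPIWrapper sketch; the query string is arbitrary and the arxiv package is assumed to be installed:

from langchain.utilities import ArxivAPIWrapper

arxiv = ArxivAPIWrapper(top_k_results=2)         # keep only the two best matches
summary = arxiv.run("quantum error correction")  # article metadata as a string
docs = arxiv.load("quantum error correction")    # full article texts as Documents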
https://api.python.langchain.com/en/latest/modules/utilities.html
1b035ff3a02e-3
Bases: object Executes bash commands and returns the output. Parameters strip_newlines (bool) – return_err_output (bool) – persistent (bool) – run(commands)[source] Run commands and return final output. Parameters commands (Union[str, List[str]]) – Return type str process_output(output, command)[source] Parameters output (str) – command (str) – Return type str class langchain.utilities.BibtexparserWrapper[source] Bases: pydantic.main.BaseModel Wrapper around bibtexparser. To use, you should have the bibtexparser python package installed. https://bibtexparser.readthedocs.io/en/master/ This wrapper will use bibtexparser to load a collection of references from a bibtex file and fetch document summaries. Return type None get_metadata(entry, load_extra=False)[source] Get metadata for the given entry. Parameters entry (Mapping[str, Any]) – load_extra (bool) – Return type Dict[str, Any] load_bibtex_entries(path)[source] Load bibtex entries from the bibtex file at the given path. Parameters path (str) – Return type List[Dict[str, Any]] class langchain.utilities.BingSearchAPIWrapper(*, bing_subscription_key, bing_search_url, k=10)[source] Bases: pydantic.main.BaseModel Wrapper for Bing Search API. In order to set this up, follow instructions at: https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e Parameters
class langchain.utilities.BibtexparserWrapper[source]
Bases: pydantic.main.BaseModel
Wrapper around bibtexparser.
To use, you should have the bibtexparser python package installed. https://bibtexparser.readthedocs.io/en/master/
This wrapper will use bibtexparser to load a collection of references from a bibtex file and fetch document summaries.
Return type
None
get_metadata(entry, load_extra=False)[source]
Get metadata for the given entry.
Parameters
entry (Mapping[str, Any]) –
load_extra (bool) –
Return type
Dict[str, Any]
load_bibtex_entries(path)[source]
Load bibtex entries from the bibtex file at the given path.
Parameters
path (str) –
Return type
List[Dict[str, Any]]
class langchain.utilities.BingSearchAPIWrapper(*, bing_subscription_key, bing_search_url, k=10)[source]
Bases: pydantic.main.BaseModel
Wrapper for Bing Search API.
In order to set this up, follow the instructions at: https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e
Parameters
bing_subscription_key (str) –
bing_search_url (str) –
k (int) –
Return type
None
attribute bing_search_url: str [Required]
attribute bing_subscription_key: str [Required]
attribute k: int = 10
results(query, num_results)[source]
Run query through BingSearch and return metadata.
Parameters
query (str) – The query to search for.
num_results (int) – The number of results to return.
Returns
A list of dictionaries with the following keys:
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
run(query)[source]
Run query through BingSearch and parse the result.
Parameters
query (str) –
Return type
str
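Example (an illustrative sketch; the subscription key is a placeholder and the endpoint shown is assumed to be the standard Bing v7 search endpoint):
from langchain.utilities import BingSearchAPIWrapper
bing = BingSearchAPIWrapper(
    bing_subscription_key="<your-subscription-key>",
    bing_search_url="https://api.bing.microsoft.com/v7.0/search",
)
# run() returns concatenated snippets; results() returns structured metadata.
print(bing.run("python asyncio tutorial"))
for item in bing.results("python asyncio tutorial", num_results=3):
    print(item["title"], item["link"])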
class langchain.utilities.BraveSearchWrapper(*, api_key, search_kwargs=None)[source]
Bases: pydantic.main.BaseModel
Parameters
api_key (str) –
search_kwargs (dict) –
Return type
None
attribute api_key: str [Required]
attribute search_kwargs: dict [Optional]
run(query)[source]
Parameters
query (str) –
Return type
str
class langchain.utilities.DuckDuckGoSearchAPIWrapper(*, k=10, region='wt-wt', safesearch='moderate', time='y', max_results=5)[source]
Bases: pydantic.main.BaseModel
Wrapper for DuckDuckGo Search API.
Free and does not require any setup.
Parameters
k (int) –
region (Optional[str]) –
safesearch (str) –
time (Optional[str]) –
max_results (int) –
Return type
None
attribute k: int = 10
attribute max_results: int = 5
attribute region: Optional[str] = 'wt-wt'
attribute safesearch: str = 'moderate'
attribute time: Optional[str] = 'y'
get_snippets(query)[source]
Run query through DuckDuckGo and return the concatenated results.
Parameters
query (str) –
Return type
List[str]
results(query, num_results)[source]
Run query through DuckDuckGo and return metadata.
Parameters
query (str) – The query to search for.
num_results (int) – The number of results to return.
Returns
A list of dictionaries with the following keys:
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
run(query)[source]
Parameters
query (str) –
Return type
str
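Example (an illustrative sketch; assumes the duckduckgo-search package is installed):
from langchain.utilities import DuckDuckGoSearchAPIWrapper
ddg = DuckDuckGoSearchAPIWrapper(region="us-en", max_results=5)
print(ddg.run("langchain callbacks"))
for item in ddg.results("langchain callbacks", num_results=3):
    print(item["title"], item["link"])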
class langchain.utilities.GooglePlacesAPIWrapper(*, gplaces_api_key=None, google_map_client=None, top_k_results=None)[source]
Bases: pydantic.main.BaseModel
Wrapper around Google Places API.
To use, you should have the googlemaps python package installed, an API key for the Google Maps platform, and the environment variable GPLACES_API_KEY set with your API key, or pass gplaces_api_key as a named parameter to the constructor.
By default, this will return all the results on the input query. You can use the top_k_results argument to limit the number of results.
Example
from langchain import GooglePlacesAPIWrapper
gplaceapi = GooglePlacesAPIWrapper()
Parameters
gplaces_api_key (Optional[str]) –
google_map_client (Any) –
top_k_results (Optional[int]) –
Return type
None
attribute gplaces_api_key: Optional[str] = None
attribute top_k_results: Optional[int] = None
fetch_place_details(place_id)[source]
Parameters
place_id (str) –
Return type
Optional[str]
format_place_details(place_details)[source]
Parameters
place_details (Dict[str, Any]) –
Return type
Optional[str]
run(query)[source]
Run a Places search and get the k matching places.
Parameters
query (str) –
Return type
str
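Example with top_k_results (an illustrative sketch; assumes GPLACES_API_KEY is set, and the query is a placeholder):
from langchain.utilities import GooglePlacesAPIWrapper
places = GooglePlacesAPIWrapper(top_k_results=3)
print(places.run("coffee near Union Square, San Francisco"))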
class langchain.utilities.GoogleSearchAPIWrapper(*, search_engine=None, google_api_key=None, google_cse_id=None, k=10, siterestrict=False)[source]
Bases: pydantic.main.BaseModel
Wrapper for Google Search API.
Instructions adapted from https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search
1. Install google-api-python-client
- If you don't already have a Google account, sign up.
- If you have never created a Google APIs Console project, read the Managing Projects page and create a project in the Google API Console.
- Install the library using pip install google-api-python-client. The current version of the library is 2.70.0 at this time.
2. To create an API key:
- Navigate to the APIs & Services→Credentials panel in Cloud Console.
- Select Create credentials, then select API key from the drop-down menu.
- The API key created dialog box displays your newly created key.
- You now have an API_KEY.
3. Set up a Custom Search Engine so you can search the entire web
- Create a custom search engine via this link.
- In Sites to search, add any valid URL (e.g. www.stackoverflow.com).
- That's all you have to fill in; the rest doesn't matter. In the left-side menu, click Edit search engine → {your search engine name} → Setup. Set Search the entire web to ON. Remove the URL you added from the list of Sites to search.
- Under Search engine ID you'll find the search-engine-ID.
4. Enable the Custom Search API
- Navigate to the APIs & Services→Dashboard panel in Cloud Console.
- Click Enable APIs and Services.
- Search for Custom Search API and click on it.
- Click Enable.
URL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis.com
Parameters
search_engine (Any) –
google_api_key (Optional[str]) –
google_cse_id (Optional[str]) –
k (int) –
siterestrict (bool) –
Return type
None
attribute google_api_key: Optional[str] = None
attribute google_cse_id: Optional[str] = None
attribute k: int = 10
attribute siterestrict: bool = False
results(query, num_results)[source]
Run query through GoogleSearch and return metadata.
Parameters
query (str) – The query to search for.
num_results (int) – The number of results to return.
Returns
A list of dictionaries with the following keys:
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
run(query)[source]
Run query through GoogleSearch and parse the result.
Parameters
query (str) –
Return type
str
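Example (an illustrative sketch; assumes google-api-python-client is installed and that the key and engine ID are supplied via the GOOGLE_API_KEY and GOOGLE_CSE_ID environment variables, which is how this wrapper is commonly configured):
from langchain.utilities import GoogleSearchAPIWrapper
search = GoogleSearchAPIWrapper(k=10)
print(search.run("langchain documentation"))
for item in search.results("langchain documentation", num_results=3):
    print(item["title"], item["link"])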
class langchain.utilities.GoogleSerperAPIWrapper(*, k=10, gl='us', hl='en', type='search', tbs=None, serper_api_key=None, aiosession=None, result_key_for_type={'images': 'images', 'news': 'news', 'places': 'places', 'search': 'organic'})[source]
Bases: pydantic.main.BaseModel
Wrapper around the Serper.dev Google Search API.
You can create a free API key at https://serper.dev.
To use, you should have the environment variable SERPER_API_KEY set with your API key, or pass serper_api_key as a named parameter to the constructor.
Example
from langchain import GoogleSerperAPIWrapper
google_serper = GoogleSerperAPIWrapper()
Parameters
k (int) –
gl (str) –
hl (str) –
type (Literal['news', 'search', 'places', 'images']) –
tbs (Optional[str]) –
serper_api_key (Optional[str]) –
aiosession (Optional[aiohttp.client.ClientSession]) –
result_key_for_type (dict) –
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = None
attribute gl: str = 'us'
attribute hl: str = 'en'
attribute k: int = 10
attribute serper_api_key: Optional[str] = None
attribute tbs: Optional[str] = None
attribute type: Literal['news', 'search', 'places', 'images'] = 'search'
async aresults(query, **kwargs)[source]
Run query through GoogleSearch.
Parameters
query (str) –
kwargs (Any) –
Return type
Dict
async arun(query, **kwargs)[source]
Run query through GoogleSearch and parse the result asynchronously.
Parameters
query (str) –
kwargs (Any) –
Return type
str
results(query, **kwargs)[source]
Run query through GoogleSearch.
Parameters
query (str) –
kwargs (Any) –
Return type
Dict
run(query, **kwargs)[source]
Run query through GoogleSearch and parse the result.
Parameters
query (str) –
kwargs (Any) –
Return type
str
class langchain.utilities.GraphQLAPIWrapper(*, custom_headers=None, graphql_endpoint, gql_client=None, gql_function)[source]
Bases: pydantic.main.BaseModel
Wrapper around a GraphQL API.
To use, you should have the gql python package installed. This wrapper will use the GraphQL API to conduct queries.
Parameters
custom_headers (Optional[Dict[str, str]]) –
graphql_endpoint (str) –
gql_client (Any) –
gql_function (Callable[[str], Any]) –
Return type
None
attribute custom_headers: Optional[Dict[str, str]] = None
attribute graphql_endpoint: str [Required]
run(query)[source]
Run a GraphQL query and get the results.
Parameters
query (str) –
Return type
str
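Example (an illustrative sketch; assumes the gql package is installed and that the wrapper's validator builds the client from graphql_endpoint alone; the endpoint is a public demo API used as a placeholder):
from langchain.utilities import GraphQLAPIWrapper
graphql = GraphQLAPIWrapper(
    graphql_endpoint="https://swapi-graphql.netlify.app/.netlify/functions/index"
)
# run() sends the query string and returns the result as a string.
result = graphql.run("{ allFilms { films { title } } }")
print(result)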
class langchain.utilities.JiraAPIWrapper(*, jira=None, confluence=None, jira_username=None, jira_api_token=None, jira_instance_url=None, operations=<default operations list; see the operations attribute below>)[source]
Bases: pydantic.main.BaseModel
Wrapper for the Jira API.
Parameters
jira (Any) –
confluence (Any) –
jira_username (Optional[str]) –
jira_api_token (Optional[str]) –
jira_instance_url (Optional[str]) –
operations (List[Dict]) –
Return type
None
attribute confluence: Any = None
attribute jira_api_token: Optional[str] = None
attribute jira_instance_url: Optional[str] = None
attribute jira_username: Optional[str] = None
attribute operations: List[Dict] =
[{'mode': 'jql', 'name': 'JQL Query', 'description': "This tool is a wrapper around atlassian-python-api's Jira jql API, useful when you need to search for Jira issues. The input to this tool is a JQL query string, and it will be passed into atlassian-python-api's Jira `jql` function. For example, to find all the issues in project 'Test' assigned to me, you would pass in the following string: project = Test AND assignee = currentUser() or, to find issues with summaries that contain the word 'test', you would pass in the following string: summary ~ 'test'"},
{'mode': 'get_projects', 'name': 'Get Projects', 'description': "This tool is a wrapper around atlassian-python-api's Jira project API, useful when you need to fetch all the projects the user has access to, find out how many projects there are, or as an intermediary step that involves searching by projects. There is no input to this tool."},
{'mode': 'create_issue', 'name': 'Create Issue', 'description': "This tool is a wrapper around atlassian-python-api's Jira issue_create API, useful when you need to create a Jira issue. The input to this tool is a dictionary specifying the fields of the Jira issue, and it will be passed into atlassian-python-api's Jira `issue_create` function. For example, to create a low priority task called 'test issue' with description 'test description', you would pass in the following dictionary: {{'summary': 'test issue', 'description': 'test description', 'issuetype': {{'name': 'Task'}}, 'priority': {{'name': 'Low'}}}}"},
{'mode': 'other', 'name': 'Catch all Jira API call', 'description': "This tool is a wrapper around atlassian-python-api's Jira API. There are other dedicated tools for fetching all projects, and for creating and searching for issues; use this tool if you need to perform any other actions allowed by the atlassian-python-api Jira API. The input to this tool is a line of Python code that calls a function from atlassian-python-api's Jira API. For example, to update the summary field of an issue, you would pass in the following string: self.jira.update_issue_field(key, {{'summary': 'New summary'}}) or, to find out how many projects are in the Jira instance, you would pass in the following string: self.jira.projects() For more information on the Jira API, refer to https://atlassian-python-api.readthedocs.io/jira.html"},
{'mode': 'create_page', 'name': 'Create confluence page', 'description': "This tool is a wrapper around atlassian-python-api's Confluence API, useful when you need to create a Confluence page. The input to this tool is a dictionary specifying the fields of the Confluence page, and it will be passed into atlassian-python-api's Confluence `create_page` function. For example, to create a page in the DEMO space titled 'This is the title' with body 'This is the body. You can use <strong>HTML tags</strong>!', you would pass in the following dictionary: {{'space': 'DEMO', 'title': 'This is the title', 'body': 'This is the body. You can use <strong>HTML tags</strong>!'}}"}]
issue_create(query)[source]
Parameters
query (str) –
Return type
str
list()[source]
Return type
List[Dict]
other(query)[source]
Parameters
query (str) –
Return type
str
page_create(query)[source]
Parameters
query (str) –
Return type
str
parse_issues(issues)[source]
Parameters
issues (Dict) –
Return type
List[dict]
parse_projects(projects)[source]
Parameters
projects (List[dict]) –
Return type
List[dict]
project()[source]
Return type
str
run(mode, query)[source]
Parameters
mode (str) –
query (str) –
Return type
str
search(query)[source]
Parameters
query (str) –
Return type
str
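Example (an illustrative sketch; assumes atlassian-python-api is installed and that JIRA_USERNAME, JIRA_API_TOKEN, and JIRA_INSTANCE_URL are set in the environment, which is the usual configuration; the JQL string is a placeholder):
from langchain.utilities import JiraAPIWrapper
jira = JiraAPIWrapper()
# run() dispatches to one of the operations above by mode.
issues = jira.run("jql", "project = TEST AND assignee = currentUser()")
print(issues)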
run(query)[source] Invoke Lambda function and parse result. Parameters query (str) – Return type str class langchain.utilities.MaxComputeAPIWrapper(client)[source] Bases: object Interface for querying Alibaba Cloud MaxCompute tables. Parameters client (ODPS) – classmethod from_params(endpoint, project, *, access_id=None, secret_access_key=None)[source] Convenience constructor that builds the odsp.ODPS MaxCompute client fromgiven parameters. Parameters endpoint (str) – MaxCompute endpoint. project (str) – A project is a basic organizational unit of MaxCompute, which is similar to a database. access_id (Optional[str]) – MaxCompute access ID. Should be passed in directly or set as the environment variable MAX_COMPUTE_ACCESS_ID. secret_access_key (Optional[str]) – MaxCompute secret access key. Should be passed in directly or set as the environment variable MAX_COMPUTE_SECRET_ACCESS_KEY. Return type langchain.utilities.max_compute.MaxComputeAPIWrapper lazy_query(query)[source] Parameters query (str) – Return type Iterator[dict] query(query)[source] Parameters query (str) – Return type List[dict] class langchain.utilities.MetaphorSearchAPIWrapper(*, metaphor_api_key, k=10)[source] Bases: pydantic.main.BaseModel Wrapper for Metaphor Search API. Parameters metaphor_api_key (str) – k (int) – Return type None attribute k: int = 10 attribute metaphor_api_key: str [Required]
class langchain.utilities.MetaphorSearchAPIWrapper(*, metaphor_api_key, k=10)[source]
Bases: pydantic.main.BaseModel
Wrapper for Metaphor Search API.
Parameters
metaphor_api_key (str) –
k (int) –
Return type
None
attribute k: int = 10
attribute metaphor_api_key: str [Required]
results(query, num_results, include_domains=None, exclude_domains=None, start_crawl_date=None, end_crawl_date=None, start_published_date=None, end_published_date=None)[source]
Run query through Metaphor Search and return metadata.
Parameters
query (str) – The query to search for.
num_results (int) – The number of results to return.
include_domains (Optional[List[str]]) –
exclude_domains (Optional[List[str]]) –
start_crawl_date (Optional[str]) –
end_crawl_date (Optional[str]) –
start_published_date (Optional[str]) –
end_published_date (Optional[str]) –
Returns
A list of dictionaries with the following keys:
title - The title of the result.
url - The url.
author - Author of the content, if applicable. Otherwise, None.
published_date - Estimated date published in YYYY-MM-DD format. Otherwise, None.
async results_async(query, num_results, include_domains=None, exclude_domains=None, start_crawl_date=None, end_crawl_date=None, start_published_date=None, end_published_date=None)[source]
Get results from the Metaphor Search API asynchronously.
Parameters
query (str) –
num_results (int) –
include_domains (Optional[List[str]]) –
exclude_domains (Optional[List[str]]) –
start_crawl_date (Optional[str]) –
end_crawl_date (Optional[str]) –
start_published_date (Optional[str]) –
end_published_date (Optional[str]) –
Return type
List[Dict]
class langchain.utilities.OpenWeatherMapAPIWrapper(*, owm=None, openweathermap_api_key=None)[source]
Bases: pydantic.main.BaseModel
Wrapper for OpenWeatherMap API using PyOWM.
Docs for using:
Go to OpenWeatherMap and sign up for an API key
Save your API key into the OPENWEATHERMAP_API_KEY env variable
pip install pyowm
Parameters
owm (Any) –
openweathermap_api_key (Optional[str]) –
Return type
None
attribute openweathermap_api_key: Optional[str] = None
attribute owm: Any = None
run(location)[source]
Get the current weather information for a specified location.
Parameters
location (str) –
Return type
str
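Example (an illustrative sketch; assumes pyowm is installed and OPENWEATHERMAP_API_KEY is set):
from langchain.utilities import OpenWeatherMapAPIWrapper
weather = OpenWeatherMapAPIWrapper()
print(weather.run("London,GB"))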
class langchain.utilities.PowerBIDataset(*, dataset_id, table_names, group_id=None, credential=None, token=None, impersonated_user_name=None, sample_rows_in_table_info=1, schemas=None, aiosession=None)[source]
Bases: pydantic.main.BaseModel
Create a PowerBI engine from a dataset ID and a credential or token.
Use either the credential or a supplied token to authenticate. If both are supplied, the credential is used to generate a token. The impersonated_user_name is the UPN of a user to be impersonated. If the model is not RLS enabled, this will be ignored.
Parameters
dataset_id (str) –
table_names (List[str]) –
group_id (Optional[str]) –
credential (Optional[TokenCredential]) –
token (Optional[str]) –
impersonated_user_name (Optional[str]) –
sample_rows_in_table_info (langchain.utilities.powerbi.ConstrainedIntValue) –
schemas (Dict[str, str]) –
aiosession (Optional[aiohttp.client.ClientSession]) –
Return type
None
attribute aiosession: Optional[aiohttp.ClientSession] = None
attribute credential: Optional[TokenCredential] = None
attribute dataset_id: str [Required]
attribute group_id: Optional[str] = None
attribute impersonated_user_name: Optional[str] = None
attribute sample_rows_in_table_info: int = 1
Constraints
exclusiveMinimum = 0
maximum = 10
attribute schemas: Dict[str, str] [Optional]
attribute table_names: List[str] [Required]
attribute token: Optional[str] = None
async aget_table_info(table_names=None)[source]
Get information about the specified tables.
Parameters
table_names (Optional[Union[List[str], str]]) –
Return type
str
async arun(command)[source]
Execute a DAX command and return the result asynchronously.
Parameters
command (str) –
Return type
Any
get_schemas()[source]
Get the available schemas.
Return type
str
get_table_info(table_names=None)[source]
Get information about the specified tables.
Parameters
table_names (Optional[Union[List[str], str]]) –
Return type
str
get_table_names()[source]
Get the names of the available tables.
Return type
Iterable[str]
run(command)[source]
Execute a DAX command and return JSON representing the results.
Parameters
command (str) –
Return type
Any
property headers: Dict[str, str]
Get the token.
property request_url: str
Get the request url.
property table_info: str
Information about all tables in the database.
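Example (an illustrative sketch; the dataset ID and table name are placeholders, and DefaultAzureCredential is one possible TokenCredential implementation):
from azure.identity import DefaultAzureCredential
from langchain.utilities import PowerBIDataset
powerbi = PowerBIDataset(
    dataset_id="<dataset-id>",
    table_names=["Sales"],
    credential=DefaultAzureCredential(),
)
# run() executes a DAX command against the dataset.
result = powerbi.run("EVALUATE TOPN(5, Sales)")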
class langchain.utilities.PubMedAPIWrapper(*, top_k_results=3, load_max_docs=25, doc_content_chars_max=2000, load_all_available_meta=False, email='your_email@example.com', base_url_esearch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?', base_url_efetch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?', max_retry=5, sleep_time=0.2, ARXIV_MAX_QUERY_LENGTH=300)[source]
Bases: pydantic.main.BaseModel
Wrapper around the PubMed API.
This wrapper will use the PubMed API to conduct searches and fetch document summaries. By default, it will return the document summaries of the top-k results of an input search.
Parameters
top_k_results (int) – number of the top-scored documents used for the PubMed tool.
load_max_docs (int) – a limit on the number of loaded documents.
load_all_available_meta (bool) – if True, the metadata of the loaded Documents gets all available meta info (see https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch); if False, the metadata gets only the most informative fields.
doc_content_chars_max (int) –
email (str) –
base_url_esearch (str) –
base_url_efetch (str) –
max_retry (int) –
sleep_time (float) –
ARXIV_MAX_QUERY_LENGTH (int) –
Return type
None
attribute doc_content_chars_max: int = 2000
attribute email: str = 'your_email@example.com'
attribute load_all_available_meta: bool = False
attribute load_max_docs: int = 25
attribute top_k_results: int = 3
load(query)[source]
Search PubMed for documents matching the query. Return a list of dictionaries containing the document metadata.
Parameters
query (str) –
Return type
List[dict]
load_docs(query)[source]
Parameters
query (str) –
Return type
List[langchain.schema.Document]
retrieve_article(uid, webenv)[source]
Parameters
uid (str) –
webenv (str) –
Return type
dict
run(query)[source]
Run a PubMed search and get the article meta information. See https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch
It uses only the most informative fields of the article meta information.
Parameters
query (str) –
Return type
str
class langchain.utilities.PythonREPL(*, _globals=None, _locals=None)[source]
Bases: pydantic.main.BaseModel
Simulates a standalone Python REPL.
Parameters
_globals (Optional[Dict]) –
_locals (Optional[Dict]) –
Return type
None
attribute globals: Optional[Dict] [Optional] (alias '_globals')
attribute locals: Optional[Dict] [Optional] (alias '_locals')
run(command)[source]
Run the command with its own globals/locals and return anything printed.
Parameters
command (str) –
Return type
str
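Example (an illustrative sketch):
from langchain.utilities import PythonREPL
repl = PythonREPL()
# Only what the command prints is returned.
print(repl.run("x = 2 ** 10\nprint(x)"))  # "1024"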
pydantic settings langchain.utilities.SceneXplainAPIWrapper[source]
Bases: pydantic.env_settings.BaseSettings, pydantic.main.BaseModel
Wrapper for SceneXplain API.
In order to set this up, you need an API key for the SceneXplain API. You can obtain a key by following the steps below.
- Sign up for a free account at https://scenex.jina.ai/.
- Navigate to the API Access page (https://scenex.jina.ai/api) and create a new API key.
Show JSON schema
{
  "title": "SceneXplainAPIWrapper",
  "description": "Wrapper for SceneXplain API.\n\nIn order to set this up, you need API key for the SceneXplain API.\nYou can obtain a key by following the steps below.\n- Sign up for a free account at https://scenex.jina.ai/.\n- Navigate to the API Access page (https://scenex.jina.ai/api)\n  and create a new API key.",
  "type": "object",
  "properties": {
    "scenex_api_key": {
      "title": "Scenex Api Key",
      "env": "SCENEX_API_KEY",
      "env_names": "{'scenex_api_key'}",
      "type": "string"
    },
    "scenex_api_url": {
      "title": "Scenex Api Url",
      "default": "https://us-central1-causal-diffusion.cloudfunctions.net/describe",
      "env_names": "{'scenex_api_url'}",
      "type": "string"
    }
  },
  "required": [
    "scenex_api_key"
  ],
  "additionalProperties": false
}
Fields
scenex_api_key (str)
scenex_api_url (str)
attribute scenex_api_key: str [Required]
attribute scenex_api_url: str = 'https://us-central1-causal-diffusion.cloudfunctions.net/describe'
run(image)[source]
Run the SceneXplain image explainer.
Parameters
image (str) –
Return type
str
validator validate_environment » all fields[source]
Validate that the API key exists in the environment.
Parameters
values (Dict) –
Return type
Dict
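Example (an illustrative sketch; assumes SCENEX_API_KEY is set, and the image URL is a placeholder):
from langchain.utilities import SceneXplainAPIWrapper
scenex = SceneXplainAPIWrapper()
description = scenex.run("https://example.com/some-image.jpg")
print(description)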
class langchain.utilities.SearxSearchWrapper(*, searx_host='', unsecure=False, params=None, headers=None, engines=[], categories=[], query_suffix='', k=10, aiosession=None)[source]
Bases: pydantic.main.BaseModel
Wrapper for Searx API.
To use, you need to provide the searx host by passing the named parameter searx_host or exporting the environment variable SEARX_HOST.
In some situations you might want to disable SSL verification, for example if you are running searx locally. You can do this by passing the named parameter unsecure. You can also pass the host url scheme as http to disable SSL.
Example
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")
Example with SSL disabled:
from langchain.utilities import SearxSearchWrapper
# note: the unsecure parameter is not needed if you pass the url scheme as http
searx = SearxSearchWrapper(searx_host="http://localhost:8888", unsecure=True)
Parameters
searx_host (str) –
unsecure (bool) –
params (dict) –
headers (Optional[dict]) –
engines (Optional[List[str]]) –
categories (Optional[List[str]]) –
query_suffix (Optional[str]) –
k (int) –
aiosession (Optional[Any]) –
Return type
None
attribute aiosession: Optional[Any] = None
attribute categories: Optional[List[str]] = []
attribute engines: Optional[List[str]] = []
attribute headers: Optional[dict] = None
attribute k: int = 10
attribute params: dict [Optional]
attribute query_suffix: Optional[str] = ''
attribute searx_host: str = ''
attribute unsecure: bool = False
async aresults(query, num_results, engines=None, query_suffix='', **kwargs)[source]
Asynchronously query with JSON results. Uses aiohttp. See results for more info.
Parameters
query (str) –
num_results (int) –
engines (Optional[List[str]]) –
query_suffix (Optional[str]) –
kwargs (Any) –
Return type
List[Dict]
async arun(query, engines=None, query_suffix='', **kwargs)[source]
Asynchronous version of run.
Parameters
query (str) –
engines (Optional[List[str]]) –
query_suffix (Optional[str]) –
kwargs (Any) –
Return type
str
results(query, num_results, engines=None, categories=None, query_suffix='', **kwargs)[source]
Run query through the Searx API and return the results with metadata.
Parameters
query (str) – The query to search for.
query_suffix (Optional[str]) – Extra suffix appended to the query.
num_results (int) – Limit the number of results to return.
engines (Optional[List[str]]) – List of engines to use for the query.
categories (Optional[List[str]]) – List of categories to use for the query.
**kwargs – extra parameters to pass to the searx API.
kwargs (Any) –
Returns
A dict with the following keys:
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
engines - The engines used for the result.
category - Searx category of the result.
run(query, engines=None, categories=None, query_suffix='', **kwargs)[source]
Run query through the Searx API and parse the results.
You can pass any other params to the searx query API.
Parameters
query (str) – The query to search for.
query_suffix (Optional[str]) – Extra suffix appended to the query.
engines (Optional[List[str]]) – List of engines to use for the query.
categories (Optional[List[str]]) – List of categories to use for the query.
**kwargs – extra parameters to pass to the searx API.
kwargs (Any) –
Returns
The result of the query.
Return type
str
Raises
ValueError – If an error occurred with the query.
Example
This will make a query to the qwant engine:
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France ?", engine="qwant") # the same result can be achieved using the `!` syntax of searx # to select the engine using `query_suffix` searx.run("what is the weather in France ?", query_suffix="!qwant") class langchain.utilities.SerpAPIWrapper(*, search_engine=None, params={'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}, serpapi_api_key=None, aiosession=None)[source] Bases: pydantic.main.BaseModel Wrapper around SerpAPI. To use, you should have the google-search-results python package installed, and the environment variable SERPAPI_API_KEY set with your API key, or pass serpapi_api_key as a named parameter to the constructor. Example from langchain import SerpAPIWrapper serpapi = SerpAPIWrapper() Parameters search_engine (Any) – params (dict) – serpapi_api_key (Optional[str]) – aiosession (Optional[aiohttp.client.ClientSession]) – Return type None attribute aiosession: Optional[aiohttp.client.ClientSession] = None attribute params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'} attribute serpapi_api_key: Optional[str] = None async aresults(query)[source] Use aiohttp to run query through SerpAPI and return the results async. Parameters query (str) – Return type dict async arun(query, **kwargs)[source] Run query through SerpAPI and parse result async. Parameters