Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a TensorflowHub embedding model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.SagemakerEndpointEmbeddings(*, client=None, endpoint_name='', region_name='', credentials_profile_name=None, content_handler, model_kwargs=None, endpoint_kwargs=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around custom Sagemaker Inference Endpoints. To use, you must supply the endpoint name from your deployed Sagemaker model & the region where it is deployed. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Sagemaker endpoint. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html Parameters client (Any) – endpoint_name (str) – region_name (str) – credentials_profile_name (Optional[str]) – content_handler (langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler) – model_kwargs (Optional[Dict]) – endpoint_kwargs (Optional[Dict]) – Return type None attribute content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]
The content handler class that provides input and output transform functions to handle formats between the LLM and the endpoint. attribute credentials_profile_name: Optional[str] = None The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html attribute endpoint_kwargs: Optional[Dict] = None Optional attributes passed to the invoke_endpoint function. See the boto3 docs for more info: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html attribute endpoint_name: str = '' The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region. attribute model_kwargs: Optional[Dict] = None Keyword arguments to pass to the model. attribute region_name: str = '' The AWS region where the Sagemaker model is deployed, e.g. us-west-2. embed_documents(texts, chunk_size=64)[source] Compute doc embeddings using a SageMaker Inference Endpoint. Parameters texts (List[str]) – The list of texts to embed. chunk_size (int) – The chunk size defines how many input texts will be grouped together as one request. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a SageMaker inference endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float]
class langchain.embeddings.HuggingFaceInstructEmbeddings(*, client=None, model_name='hkunlp/instructor-large', cache_folder=None, model_kwargs=None, encode_kwargs=None, embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around sentence_transformers embedding models. To use, you should have the sentence_transformers and InstructorEmbedding python packages installed. Example from langchain.embeddings import HuggingFaceInstructEmbeddings model_name = "hkunlp/instructor-large" model_kwargs = {'device': 'cpu'} encode_kwargs = {'normalize_embeddings': True} hf = HuggingFaceInstructEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs ) Parameters client (Any) – model_name (str) – cache_folder (Optional[str]) – model_kwargs (Dict[str, Any]) – encode_kwargs (Dict[str, Any]) – embed_instruction (str) – query_instruction (str) – Return type None attribute cache_folder: Optional[str] = None Path to store models. Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable. attribute embed_instruction: str = 'Represent the document for retrieval: ' Instruction to use for embedding documents. attribute encode_kwargs: Dict[str, Any] [Optional] Keyword arguments to pass when calling the encode method of the model.
attribute model_kwargs: Dict[str, Any] [Optional] Keyword arguments to pass to the model. attribute model_name: str = 'hkunlp/instructor-large' Model name to use. attribute query_instruction: str = 'Represent the question for retrieving supporting documents: ' Instruction to use for embedding queries. embed_documents(texts)[source] Compute doc embeddings using a HuggingFace instruct model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a HuggingFace instruct model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float]
class langchain.embeddings.MosaicMLInstructorEmbeddings(*, endpoint_url='https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ', retry_sleep=1.0, mosaicml_api_token=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around MosaicML's embedding inference service. To use, you should have the environment variable MOSAICML_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Example from langchain.embeddings import MosaicMLInstructorEmbeddings endpoint_url = ( "https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict" ) mosaic_llm = MosaicMLInstructorEmbeddings( endpoint_url=endpoint_url, mosaicml_api_token="my-api-key" ) Parameters endpoint_url (str) – embed_instruction (str) – query_instruction (str) – retry_sleep (float) – mosaicml_api_token (Optional[str]) – Return type None attribute embed_instruction: str = 'Represent the document for retrieval: ' Instruction used to embed documents. attribute endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict' Endpoint URL to use. attribute query_instruction: str = 'Represent the question for retrieving supporting documents: ' Instruction used to embed the query. attribute retry_sleep: float = 1.0 How long to sleep if a rate limit is encountered. embed_documents(texts)[source] Embed documents using a MosaicML deployed instructor embedding model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Embed a query using a MosaicML deployed instructor embedding model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.SelfHostedEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn, load_fn_kwargs=None, model_reqs=['./', 'torch'], inference_kwargs=None)[source] Bases: langchain.llms.self_hosted.SelfHostedPipeline, langchain.embeddings.base.Embeddings
Runs custom embedding models on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example using a model load function: from langchain.embeddings import SelfHostedEmbeddings from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import runhouse as rh gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") def get_pipeline(): model_id = "facebook/bart-large" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) return pipeline("feature-extraction", model=model, tokenizer=tokenizer) embeddings = SelfHostedEmbeddings( model_load_fn=get_pipeline, hardware=gpu, model_reqs=["./", "torch", "transformers"], ) Example passing in a pipeline path: from langchain.embeddings import SelfHostedHFEmbeddings import runhouse as rh import pickle from transformers import pipeline gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") pipeline = pipeline(model="bert-base-uncased", task="feature-extraction") rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models") embeddings = SelfHostedHFEmbeddings.from_pipeline( pipeline="models/pipeline.pkl", hardware=gpu, model_reqs=["./", "torch", "transformers"], ) Parameters cache (Optional[bool]) – verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – pipeline_ref (Any) – client (Any) – inference_fn (Callable) – hardware (Any) – model_load_fn (Callable) – load_fn_kwargs (Optional[dict]) – model_reqs (List[str]) – inference_kwargs (Any) – Return type None attribute inference_fn: Callable = <function _embed_documents> Inference function to extract the embeddings on the remote hardware. attribute inference_kwargs: Any = None Any kwargs to pass to the model's inference function. embed_documents(texts)[source] Compute doc embeddings using a HuggingFace transformer model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a HuggingFace transformer model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float]
class langchain.embeddings.SelfHostedHuggingFaceEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, model_reqs=['./', 'sentence_transformers', 'torch'], inference_kwargs=None, model_id='sentence-transformers/all-mpnet-base-v2')[source] Bases: langchain.embeddings.self_hosted.SelfHostedEmbeddings Runs sentence_transformers embedding models on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example from langchain.embeddings import SelfHostedHuggingFaceEmbeddings import runhouse as rh model_id = "sentence-transformers/all-mpnet-base-v2" gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") hf = SelfHostedHuggingFaceEmbeddings(model_id=model_id, hardware=gpu) Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – pipeline_ref (Any) – client (Any) – inference_fn (Callable) – hardware (Any) –
model_load_fn (Callable) – load_fn_kwargs (Optional[dict]) – model_reqs (List[str]) – inference_kwargs (Any) – model_id (str) – Return type None attribute hardware: Any = None Remote hardware to send the inference function to. attribute inference_fn: Callable = <function _embed_documents> Inference function to extract the embeddings. attribute load_fn_kwargs: Optional[dict] = None Keyword arguments to pass to the model load function. attribute model_id: str = 'sentence-transformers/all-mpnet-base-v2' Model name to use. attribute model_load_fn: Callable = <function load_embedding_model> Function to load the model remotely on the server. attribute model_reqs: List[str] = ['./', 'sentence_transformers', 'torch'] Requirements to install on the hardware to run inference with the model. class langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, model_reqs=['./', 'InstructorEmbedding', 'torch'], inference_kwargs=None, model_id='hkunlp/instructor-large', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source] Bases: langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings import runhouse as rh model_id = "hkunlp/instructor-large" gpu = rh.cluster(name='rh-a10x', instance_type='A100:1') hf = SelfHostedHuggingFaceInstructEmbeddings( model_id=model_id, hardware=gpu) Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – pipeline_ref (Any) – client (Any) – inference_fn (Callable) – hardware (Any) – model_load_fn (Callable) – load_fn_kwargs (Optional[dict]) – model_reqs (List[str]) – inference_kwargs (Any) – model_id (str) – embed_instruction (str) – query_instruction (str) – Return type None attribute embed_instruction: str = 'Represent the document for retrieval: ' Instruction to use for embedding documents. attribute model_id: str = 'hkunlp/instructor-large' Model name to use. attribute model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch'] Requirements to install on the hardware to run inference with the model.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: ' Instruction to use for embedding queries. embed_documents(texts)[source] Compute doc embeddings using a HuggingFace instruct model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a HuggingFace instruct model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.FakeEmbeddings(*, size)[source] Bases: langchain.embeddings.base.Embeddings, pydantic.main.BaseModel Parameters size (int) – Return type None embed_documents(texts)[source] Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] embed_query(text)[source] Embed query text. Parameters text (str) – Return type List[float]
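FakeEmbeddings is useful in tests: it returns vectors of the requested size without loading a model or calling a network service. A minimal sketch, with an arbitrary dimensionality:

from langchain.embeddings import FakeEmbeddings

fake = FakeEmbeddings(size=1536)  # any dimensionality you want to simulate
doc_vectors = fake.embed_documents(["doc one", "doc two"])
query_vector = fake.embed_query("a test query")
assert len(doc_vectors) == 2 and len(doc_vectors[0]) == 1536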
class langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper for Aleph Alpha's Asymmetric Embeddings. AA provides you with an endpoint to embed a document and a query. The models were optimized to make the embeddings of documents and the query for a document as similar as possible. To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/ Example from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding embeddings = AlephAlphaAsymmetricSemanticEmbedding() document = "This is a content of the document" query = "What is the content of the document?" doc_result = embeddings.embed_documents([document]) query_result = embeddings.embed_query(query) Parameters client (Any) – model (Optional[str]) – hosting (Optional[str]) – normalize (Optional[bool]) – compress_to_size (Optional[int]) – contextual_control_threshold (Optional[int]) – control_log_additive (Optional[bool]) – aleph_alpha_api_key (Optional[str]) – Return type None attribute aleph_alpha_api_key: Optional[str] = None API key for Aleph Alpha API. attribute compress_to_size: Optional[int] = 128 Should the returned embeddings come back as the original 5120-dim vector, or should they be compressed to 128-dim. attribute contextual_control_threshold: Optional[int] = None Attention control parameters only apply to those tokens that have explicitly been set in the request. attribute control_log_additive: Optional[bool] = True Apply controls on prompt items by adding the log(control_factor) to attention scores. attribute hosting: Optional[str] = 'https://api.aleph-alpha.com' Optional parameter that specifies which datacenters may process the request. attribute model: Optional[str] = 'luminous-base' Model name to use.
attribute normalize: Optional[bool] = True Should returned embeddings be normalized. embed_documents(texts)[source] Call out to Aleph Alpha's asymmetric Document endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Call out to Aleph Alpha's asymmetric query embedding endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source] Bases: langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding The symmetric version of Aleph Alpha's semantic embeddings. The main difference is that here, both the documents and queries are embedded with SemanticRepresentation.Symmetric. Example from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding embeddings = AlephAlphaSymmetricSemanticEmbedding() text = "This is a test text" doc_result = embeddings.embed_documents([text]) query_result = embeddings.embed_query(text) Parameters client (Any) – model (Optional[str]) – hosting (Optional[str]) – normalize (Optional[bool]) – compress_to_size (Optional[int]) – contextual_control_threshold (Optional[int]) – control_log_additive (Optional[bool]) – aleph_alpha_api_key (Optional[str]) –
Return type None embed_documents(texts)[source] Call out to Aleph Alpha's Document endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Call out to Aleph Alpha's symmetric query embedding endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] langchain.embeddings.SentenceTransformerEmbeddings alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings class langchain.embeddings.MiniMaxEmbeddings(*, endpoint_url='https://api.minimax.chat/v1/embeddings', model='embo-01', embed_type_db='db', embed_type_query='query', minimax_group_id=None, minimax_api_key=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around MiniMax's embedding inference service. To use, you should have the environment variables MINIMAX_GROUP_ID and MINIMAX_API_KEY set with your credentials, or pass them as named parameters to the constructor. Example from langchain.embeddings import MiniMaxEmbeddings embeddings = MiniMaxEmbeddings() query_text = "This is a test query." query_result = embeddings.embed_query(query_text) document_text = "This is a test document." document_result = embeddings.embed_documents([document_text]) Parameters endpoint_url (str) – model (str) – embed_type_db (str) – embed_type_query (str) – minimax_group_id (Optional[str]) –
minimax_api_key (Optional[str]) – Return type None attribute embed_type_db: str = 'db' For embed_documents attribute embed_type_query: str = 'query' For embed_query attribute endpoint_url: str = 'https://api.minimax.chat/v1/embeddings' Endpoint URL to use. attribute minimax_api_key: Optional[str] = None API Key for MiniMax API. attribute minimax_group_id: Optional[str] = None Group ID for MiniMax API. attribute model: str = 'embo-01' Embeddings model name to use. embed_documents(texts)[source] Embed documents using a MiniMax embedding endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Embed a query using a MiniMax embedding endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.BedrockEmbeddings(*, client=None, region_name=None, credentials_profile_name=None, model_id='amazon.titan-e1t-medium', model_kwargs=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Embeddings provider to invoke Bedrock embedding models. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Bedrock service. Parameters client (Any) – region_name (Optional[str]) – credentials_profile_name (Optional[str]) – model_id (str) – model_kwargs (Optional[Dict]) – Return type None attribute credentials_profile_name: Optional[str] = None The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html attribute model_id: str = 'amazon.titan-e1t-medium' Id of the model to call, e.g., amazon.titan-e1t-medium; this is equivalent to the modelId property in the list-foundation-models API. attribute model_kwargs: Optional[Dict] = None Keyword arguments to pass to the model. attribute region_name: Optional[str] = None The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable or region specified in ~/.aws/config in case it is not provided here. embed_documents(texts, chunk_size=1)[source] Compute doc embeddings using a Bedrock model. Parameters texts (List[str]) – The list of texts to embed. chunk_size (int) – Bedrock currently only allows single string inputs, so chunk size is always 1. This input is here only for compatibility with the embeddings interface. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a Bedrock model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float]
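A minimal usage sketch, assuming your AWS credentials are configured as described above; the profile name is a placeholder:

from langchain.embeddings import BedrockEmbeddings

embeddings = BedrockEmbeddings(
    credentials_profile_name="my-profile",  # placeholder ~/.aws/credentials profile
    region_name="us-west-2",
    model_id="amazon.titan-e1t-medium",
)
query_vector = embeddings.embed_query("What is Amazon Bedrock?")
# Bedrock accepts a single string per request, so documents are embedded one by one.
doc_vectors = embeddings.embed_documents(["first document", "second document"])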
class langchain.embeddings.DeepInfraEmbeddings(*, model_id='sentence-transformers/clip-ViT-B-32', normalize=False, embed_instruction='passage: ', query_instruction='query: ', model_kwargs=None, deepinfra_api_token=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around Deep Infra's embedding inference service. To use, you should have the environment variable DEEPINFRA_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. There are multiple embeddings models available, see https://deepinfra.com/models?type=embeddings. Example from langchain.embeddings import DeepInfraEmbeddings deepinfra_emb = DeepInfraEmbeddings( model_id="sentence-transformers/clip-ViT-B-32", deepinfra_api_token="my-api-key" ) r1 = deepinfra_emb.embed_documents( [ "Alpha is the first letter of Greek alphabet", "Beta is the second letter of Greek alphabet", ] ) r2 = deepinfra_emb.embed_query( "What is the second letter of Greek alphabet" ) Parameters model_id (str) – normalize (bool) – embed_instruction (str) – query_instruction (str) – model_kwargs (Optional[dict]) – deepinfra_api_token (Optional[str]) – Return type None
attribute embed_instruction: str = 'passage: ' Instruction used to embed documents. attribute model_id: str = 'sentence-transformers/clip-ViT-B-32' Embeddings model to use. attribute model_kwargs: Optional[dict] = None Other model keyword arguments. attribute normalize: bool = False Whether to normalize the computed embeddings. attribute query_instruction: str = 'query: ' Instruction used to embed the query. embed_documents(texts)[source] Embed documents using a Deep Infra deployed embedding model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Embed a query using a Deep Infra deployed embedding model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.DashScopeEmbeddings(*, client=None, model='text-embedding-v1', dashscope_api_key=None, max_retries=5)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around DashScope embedding models. To use, you should have the dashscope python package installed, and the environment variable DASHSCOPE_API_KEY set with your API key or pass it as a named parameter to the constructor. Example from langchain.embeddings import DashScopeEmbeddings embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key") Example import os
os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY" from langchain.embeddings.dashscope import DashScopeEmbeddings embeddings = DashScopeEmbeddings( model="text-embedding-v1", ) text = "This is a test query." query_result = embeddings.embed_query(text) Parameters client (Any) – model (str) – dashscope_api_key (Optional[str]) – max_retries (int) – Return type None attribute dashscope_api_key: Optional[str] = None API key for the DashScope API. attribute max_retries: int = 5 Maximum number of retries to make when generating. embed_documents(texts)[source] Call out to DashScope's embedding endpoint for embedding search docs. Parameters texts (List[str]) – The list of texts to embed. chunk_size – The chunk size of embeddings. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Call out to DashScope's embedding endpoint for embedding query text. Parameters text (str) – The text to embed. Returns Embedding for the text. Return type List[float] class langchain.embeddings.EmbaasEmbeddings(*, model='e5-large-v2', instruction=None, api_url='https://api.embaas.io/v1/embeddings/', embaas_api_key=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around embaas's embedding service. To use, you should have the environment variable EMBAAS_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example # Initialise with default model and instruction
from langchain.embeddings import EmbaasEmbeddings emb = EmbaasEmbeddings() # Initialise with custom model and instruction from langchain.embeddings import EmbaasEmbeddings emb_model = "instructor-large" emb_inst = "Represent the Wikipedia document for retrieval" emb = EmbaasEmbeddings( model=emb_model, instruction=emb_inst ) Parameters model (str) – instruction (Optional[str]) – api_url (str) – embaas_api_key (Optional[str]) – Return type None attribute api_url: str = 'https://api.embaas.io/v1/embeddings/' The URL for the embaas embeddings API. attribute instruction: Optional[str] = None Instruction used for domain-specific embeddings. attribute model: str = 'e5-large-v2' The model used for embeddings. embed_documents(texts)[source] Get embeddings for a list of texts. Parameters texts (List[str]) – The list of texts to get embeddings for. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Get embeddings for a single text. Parameters text (str) – The text to get embeddings for. Returns List of embeddings. Return type List[float]
Utilities General utilities. class langchain.utilities.ApifyWrapper(*, apify_client=None, apify_client_async=None)[source] Bases: pydantic.main.BaseModel Wrapper around Apify. To use, you should have the apify-client python package installed, and the environment variable APIFY_API_TOKEN set with your API key, or pass apify_api_token as a named parameter to the constructor. Parameters apify_client (Any) – apify_client_async (Any) – Return type None attribute apify_client: Any = None attribute apify_client_async: Any = None async acall_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source] Run an Actor on the Apify platform and wait for results to be ready. Parameters actor_id (str) – The ID or name of the Actor on the Apify platform. run_input (Dict) – The input object of the Actor that you’re trying to run. dataset_mapping_function (Callable) – A function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class. build (str, optional) – Optionally specifies the actor build to run. It can be either a build tag or build number. memory_mbytes (int, optional) – Optional memory limit for the run, in megabytes. timeout_secs (int, optional) – Optional timeout for the run, in seconds. Returns A loader that will fetch the records from theActor run’s default dataset. Return type ApifyDatasetLoader async acall_actor_task(task_id, task_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]
https://api.python.langchain.com/en/latest/modules/utilities.html
897ef9cfbf02-1
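A sketch of call_actor, assuming the apify-client package is installed, APIFY_API_TOKEN is set, and using a public web-crawling Actor as an illustrative actor_id; the mapping function is the part you adapt to your dataset's item shape:

from langchain.docstore.document import Document
from langchain.utilities import ApifyWrapper

apify = ApifyWrapper()
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",  # illustrative Actor
    run_input={"startUrls": [{"url": "https://python.langchain.com"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text", ""), metadata={"source": item.get("url")}
    ),
)
documents = loader.load()  # fetches records from the Actor run's default dataset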
async acall_actor_task(task_id, task_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source] Run a saved Actor task on Apify and wait for results to be ready. Parameters task_id (str) – The ID or name of the task on the Apify platform. task_input (Dict) – The input object of the task that you're trying to run. Overrides the task's saved input. dataset_mapping_function (Callable) – A function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class. build (str, optional) – Optionally specifies the actor build to run. It can be either a build tag or build number. memory_mbytes (int, optional) – Optional memory limit for the run, in megabytes. timeout_secs (int, optional) – Optional timeout for the run, in seconds. Returns A loader that will fetch the records from the task run's default dataset. Return type ApifyDatasetLoader call_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source] Run an Actor on the Apify platform and wait for results to be ready. Parameters actor_id (str) – The ID or name of the Actor on the Apify platform. run_input (Dict) – The input object of the Actor that you're trying to run. dataset_mapping_function (Callable) – A function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class. build (str, optional) – Optionally specifies the actor build to run. It can be either a build tag or build number. memory_mbytes (int, optional) – Optional memory limit for the run, in megabytes.
timeout_secs (int, optional) – Optional timeout for the run, in seconds. Returns A loader that will fetch the records from the Actor run's default dataset. Return type ApifyDatasetLoader call_actor_task(task_id, task_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source] Run a saved Actor task on Apify and wait for results to be ready. Parameters task_id (str) – The ID or name of the task on the Apify platform. task_input (Dict) – The input object of the task that you're trying to run. Overrides the task's saved input. dataset_mapping_function (Callable) – A function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class. build (str, optional) – Optionally specifies the actor build to run. It can be either a build tag or build number. memory_mbytes (int, optional) – Optional memory limit for the run, in megabytes. timeout_secs (int, optional) – Optional timeout for the run, in seconds. Returns A loader that will fetch the records from the task run's default dataset. Return type ApifyDatasetLoader class langchain.utilities.ArxivAPIWrapper(*, arxiv_search=None, arxiv_exceptions=None, top_k_results=3, load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000, ARXIV_MAX_QUERY_LENGTH=300)[source] Bases: pydantic.main.BaseModel Wrapper around ArxivAPI. To use, you should have the arxiv python package installed. https://lukasschwab.me/arxiv.py/index.html
This wrapper will use the Arxiv API to conduct searches and fetch document summaries. By default, it will return the document summaries of the top-k results. It limits the Document content by doc_content_chars_max. Set doc_content_chars_max=None if you don't want to limit the content size. Parameters top_k_results (int) – number of top-scored documents used for the arxiv tool ARXIV_MAX_QUERY_LENGTH (int) – the cut limit on the query used for the arxiv tool. load_max_docs (int) – a limit to the number of loaded documents load_all_available_meta (bool) – if True: the metadata of the loaded Documents gets all available meta info (see https://lukasschwab.me/arxiv.py/index.html#Result), if False: the metadata gets only the most informative fields. arxiv_search (Any) – arxiv_exceptions (Any) – doc_content_chars_max (Optional[int]) – Return type None attribute arxiv_exceptions: Any = None attribute doc_content_chars_max: Optional[int] = 4000 attribute load_all_available_meta: bool = False attribute load_max_docs: int = 100 attribute top_k_results: int = 3 load(query)[source] Run Arxiv search and get the article texts plus the article meta information. See https://lukasschwab.me/arxiv.py/index.html#Search Returns: a list of documents with the document.page_content in text format Parameters query (str) – Return type List[langchain.schema.Document]
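A short sketch of load, assuming the arxiv package is installed; the query is illustrative:

from langchain.utilities import ArxivAPIWrapper

arxiv = ArxivAPIWrapper(top_k_results=3, load_max_docs=2)
docs = arxiv.load("quantum computing")  # full article texts plus metadata
for doc in docs:
    print(len(doc.page_content), doc.metadata)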
run(query)[source] Run Arxiv search and get the article meta information. See https://lukasschwab.me/arxiv.py/index.html#Search See https://lukasschwab.me/arxiv.py/index.html#Result It uses only the most informative fields of article meta information. Parameters query (str) – Return type str class langchain.utilities.BashProcess(strip_newlines=False, return_err_output=False, persistent=False)[source] Bases: object Executes bash commands and returns the output. Parameters strip_newlines (bool) – return_err_output (bool) – persistent (bool) – run(commands)[source] Run commands and return final output. Parameters commands (Union[str, List[str]]) – Return type str process_output(output, command)[source] Parameters output (str) – command (str) – Return type str class langchain.utilities.BibtexparserWrapper[source] Bases: pydantic.main.BaseModel Wrapper around bibtexparser. To use, you should have the bibtexparser python package installed. https://bibtexparser.readthedocs.io/en/master/ This wrapper will use bibtexparser to load a collection of references from a bibtex file and fetch document summaries. Return type None get_metadata(entry, load_extra=False)[source] Get metadata for the given entry. Parameters entry (Mapping[str, Any]) – load_extra (bool) – Return type Dict[str, Any] load_bibtex_entries(path)[source] Load bibtex entries from the bibtex file at the given path. Parameters path (str) – Return type List[Dict[str, Any]]
class langchain.utilities.BingSearchAPIWrapper(*, bing_subscription_key, bing_search_url, k=10)[source] Bases: pydantic.main.BaseModel Wrapper for Bing Search API. In order to set this up, follow instructions at: https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e Parameters bing_subscription_key (str) – bing_search_url (str) – k (int) – Return type None attribute bing_search_url: str [Required] attribute bing_subscription_key: str [Required] attribute k: int = 10 results(query, num_results)[source] Run query through BingSearch and return metadata. Parameters query (str) – The query to search for. num_results (int) – The number of results to return. Returns A list of dictionaries with the following keys: snippet - The description of the result. title - The title of the result. link - The link to the result. run(query)[source] Run query through BingSearch and parse result. Parameters query (str) – Return type str
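A sketch, assuming you have provisioned a Bing Search resource; the key is a placeholder and the search URL should be verified against your resource:

from langchain.utilities import BingSearchAPIWrapper

bing = BingSearchAPIWrapper(
    bing_subscription_key="<your-subscription-key>",  # placeholder
    bing_search_url="https://api.bing.microsoft.com/v7.0/search",  # typical endpoint; verify for your resource
    k=5,
)
text = bing.run("langchain embeddings")         # parsed results as one string
meta = bing.results("langchain embeddings", 5)  # list of {snippet, title, link} dicts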
class langchain.utilities.BraveSearchWrapper(*, api_key, search_kwargs=None)[source] Bases: pydantic.main.BaseModel Parameters api_key (str) – search_kwargs (dict) – Return type None attribute api_key: str [Required] attribute search_kwargs: dict [Optional] run(query)[source] Parameters query (str) – Return type str class langchain.utilities.DuckDuckGoSearchAPIWrapper(*, k=10, region='wt-wt', safesearch='moderate', time='y', max_results=5)[source] Bases: pydantic.main.BaseModel Wrapper for DuckDuckGo Search API. Free and does not require any setup. Parameters k (int) – region (Optional[str]) – safesearch (str) – time (Optional[str]) – max_results (int) – Return type None attribute k: int = 10 attribute max_results: int = 5 attribute region: Optional[str] = 'wt-wt' attribute safesearch: str = 'moderate' attribute time: Optional[str] = 'y' get_snippets(query)[source] Run query through DuckDuckGo and return concatenated results. Parameters query (str) – Return type List[str] results(query, num_results)[source] Run query through DuckDuckGo and return metadata. Parameters query (str) – The query to search for. num_results (int) – The number of results to return. Returns A list of dictionaries with the following keys: snippet - The description of the result. title - The title of the result. link - The link to the result. run(query)[source] Parameters query (str) – Return type str
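Since no key or account is required, this is the quickest wrapper to try; a sketch assuming the duckduckgo-search package is installed:

from langchain.utilities import DuckDuckGoSearchAPIWrapper

ddg = DuckDuckGoSearchAPIWrapper(region="wt-wt", safesearch="moderate", max_results=5)
text = ddg.run("langchain documentation")
meta = ddg.results("langchain documentation", num_results=5)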
class langchain.utilities.GooglePlacesAPIWrapper(*, gplaces_api_key=None, google_map_client=None, top_k_results=None)[source] Bases: pydantic.main.BaseModel Wrapper around Google Places API. To use, you should have the googlemaps python package installed, an API key for the Google Maps Platform, and the environment variable 'GPLACES_API_KEY' set with your API key, or pass 'gplaces_api_key' as a named parameter to the constructor. By default, this will return all the results of the input query. You can use the top_k_results argument to limit the number of results. Example from langchain import GooglePlacesAPIWrapper gplaceapi = GooglePlacesAPIWrapper() Parameters gplaces_api_key (Optional[str]) – google_map_client (Any) – top_k_results (Optional[int]) – Return type None attribute gplaces_api_key: Optional[str] = None attribute top_k_results: Optional[int] = None fetch_place_details(place_id)[source] Parameters place_id (str) – Return type Optional[str] format_place_details(place_details)[source] Parameters place_details (Dict[str, Any]) – Return type Optional[str] run(query)[source] Run Places search and get up to k places that match the query. Parameters query (str) – Return type str class langchain.utilities.GoogleSearchAPIWrapper(*, search_engine=None, google_api_key=None, google_cse_id=None, k=10, siterestrict=False)[source] Bases: pydantic.main.BaseModel Wrapper for Google Search API. Instructions adapted from https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search
TODO: DOCS for using it 1. Install google-api-python-client - If you don't already have a Google account, sign up. - If you have never created a Google APIs Console project, read the Managing Projects page and create a project in the Google API Console. - Install the library using pip install google-api-python-client The current version of the library is 2.70.0 at this time 2. To create an API key: - Navigate to the APIs & Services→Credentials panel in Cloud Console. - Select Create credentials, then select API key from the drop-down menu. - The API key created dialog box displays your newly created key. - You now have an API_KEY 3. Setup Custom Search Engine so you can search the entire web - Create a custom search engine in this link. - In Sites to search, add any valid URL (i.e. www.stackoverflow.com). - That's all you have to fill up, the rest doesn't matter. In the left-side menu, click Edit search engine → {your search engine name} → Setup Set Search the entire web to ON. Remove the URL you added from the list of Sites to search. - Under Search engine ID you'll find the search-engine-ID. 4. Enable the Custom Search API - Navigate to the APIs & Services→Dashboard panel in Cloud Console. - Click Enable APIs and Services. - Search for Custom Search API and click on it. - Click Enable. URL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis.com Parameters search_engine (Any) – google_api_key (Optional[str]) – google_cse_id (Optional[str]) – k (int) –
siterestrict (bool) – Return type None attribute google_api_key: Optional[str] = None attribute google_cse_id: Optional[str] = None attribute k: int = 10 attribute siterestrict: bool = False results(query, num_results)[source] Run query through GoogleSearch and return metadata. Parameters query (str) – The query to search for. num_results (int) – The number of results to return. Returns A list of dictionaries with the following keys: snippet - The description of the result. title - The title of the result. link - The link to the result. run(query)[source] Run query through GoogleSearch and parse result. Parameters query (str) – Return type str class langchain.utilities.GoogleSerperAPIWrapper(*, k=10, gl='us', hl='en', type='search', tbs=None, serper_api_key=None, aiosession=None, result_key_for_type={'images': 'images', 'news': 'news', 'places': 'places', 'search': 'organic'})[source] Bases: pydantic.main.BaseModel Wrapper around the Serper.dev Google Search API. You can create a free API key at https://serper.dev. To use, you should have the environment variable SERPER_API_KEY set with your API key, or pass serper_api_key as a named parameter to the constructor. Example from langchain import GoogleSerperAPIWrapper google_serper = GoogleSerperAPIWrapper() Parameters k (int) – gl (str) – hl (str) –
type (Literal['news', 'search', 'places', 'images']) – tbs (Optional[str]) – serper_api_key (Optional[str]) – aiosession (Optional[aiohttp.client.ClientSession]) – result_key_for_type (dict) – Return type None attribute aiosession: Optional[aiohttp.client.ClientSession] = None attribute gl: str = 'us' attribute hl: str = 'en' attribute k: int = 10 attribute serper_api_key: Optional[str] = None attribute tbs: Optional[str] = None attribute type: Literal['news', 'search', 'places', 'images'] = 'search' async aresults(query, **kwargs)[source] Run query through GoogleSearch. Parameters query (str) – kwargs (Any) – Return type Dict async arun(query, **kwargs)[source] Run query through GoogleSearch and parse result async. Parameters query (str) – kwargs (Any) – Return type str results(query, **kwargs)[source] Run query through GoogleSearch. Parameters query (str) – kwargs (Any) – Return type Dict run(query, **kwargs)[source] Run query through GoogleSearch and parse result. Parameters query (str) – kwargs (Any) – Return type str class langchain.utilities.GraphQLAPIWrapper(*, custom_headers=None, graphql_endpoint, gql_client=None, gql_function)[source] Bases: pydantic.main.BaseModel Wrapper around GraphQL API. To use, you should have the gql python package installed.
This wrapper will use the GraphQL API to conduct queries. Parameters custom_headers (Optional[Dict[str, str]]) – graphql_endpoint (str) – gql_client (Any) – gql_function (Callable[[str], Any]) – Return type None attribute custom_headers: Optional[Dict[str, str]] = None attribute graphql_endpoint: str [Required] run(query)[source] Run a GraphQL query and get the results. Parameters query (str) – Return type str
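A sketch of run, assuming the gql package is installed; the endpoint is a public example API and the query is illustrative:

from langchain.utilities import GraphQLAPIWrapper

graphql = GraphQLAPIWrapper(
    graphql_endpoint="https://countries.trevorblades.com/",  # example public endpoint
)
result = graphql.run('{ country(code: "FR") { name capital } }')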
class langchain.utilities.JiraAPIWrapper(*, jira=None, confluence=None, jira_username=None, jira_api_token=None, jira_instance_url=None, operations=[{'mode': 'jql', 'name': 'JQL Query', 'description': '\n    This tool is a wrapper around atlassian-python-api\'s Jira jql API, useful when you need to search for Jira issues.\n    The input to this tool is a JQL query string, and will be passed into atlassian-python-api\'s Jira `jql` function,\n    For example, to find all the issues in project "Test" assigned to the me, you would pass in the following string:\n    project = Test AND assignee = currentUser()\n    or to find issues with summaries that contain the word "test", you would pass in the following string:\n    summary ~ \'test\'\n    '}, {'mode': 'get_projects', 'name': 'Get Projects', 'description': "\n    This tool is a wrapper around atlassian-python-api's Jira project API, \n    useful when you need to fetch all the projects the user has access to, find out how many projects there are, or as an intermediary step that involv searching by projects. \n    there is no input to this tool.\n    "}, {'mode': 'create_issue', 'name': 'Create Issue', 'description': '\n    This tool is a wrapper around atlassian-python-api\'s Jira issue_create API, useful when you need to create a Jira issue. \n    The input to this tool is a dictionary specifying the fields of the Jira issue, and will be passed into atlassian-python-api\'s Jira `issue_create` function.\n    For example, to create a
low priority task called "test issue" with description "test description", you would pass in the following dictionary: \n    {{"summary": "test issue", "description": "test description", "issuetype": {{"name": "Task"}}, "priority": {{"name": "Low"}}}}\n    '}, {'mode': 'other', 'name': 'Catch all Jira API call', 'description': '\n    This tool is a wrapper around atlassian-python-api\'s Jira API.\n    There are other dedicated tools for fetching all projects, and creating and searching for issues, \n    use this tool if you need to perform any other actions allowed by the atlassian-python-api Jira API.\n    The input to this tool is line of python code that calls a function from atlassian-python-api\'s Jira API\n    For example, to update the summary field of an issue, you would pass in the following string:\n    self.jira.update_issue_field(key, {{"summary": "New summary"}})\n    or to find out how many projects are in the Jira instance, you would pass in the following string:\n    self.jira.projects()\n    For more information on the Jira API, refer to https://atlassian-python-api.readthedocs.io/jira.html\n    '}, {'mode': 'create_page', 'name': 'Create confluence page', 'description': 'This tool is a wrapper around atlassian-python-api\'s Confluence \natlassian-python-api API, useful when you need to create a Confluence page. The input to this tool is a dictionary \nspecifying the fields of the Confluence page, and will be passed into atlassian-python-api\'s Confluence `create_page` \nfunction. For example, to create a page in
the DEMO space titled "This is the title" with body "This is the body. You can use \n<strong>HTML tags</strong>!", you would pass in the following dictionary: {{"space": "DEMO", "title":"This is the \ntitle","body":"This is the body. You can use <strong>HTML tags</strong>!"}} '}])[source]
Bases: pydantic.main.BaseModel Wrapper for Jira API. Parameters jira (Any) – confluence (Any) – jira_username (Optional[str]) – jira_api_token (Optional[str]) – jira_instance_url (Optional[str]) – operations (List[Dict]) – Return type None attribute confluence: Any = None attribute jira_api_token: Optional[str] = None attribute jira_instance_url: Optional[str] = None attribute jira_username: Optional[str] = None
attribute operations: List[Dict] = [{'mode': 'jql', 'name': 'JQL Query', 'description': '\n    This tool is a wrapper around atlassian-python-api\'s Jira jql API, useful when you need to search for Jira issues.\n    The input to this tool is a JQL query string, and will be passed into atlassian-python-api\'s Jira `jql` function,\n    For example, to find all the issues in project "Test" assigned to the me, you would pass in the following string:\n    project = Test AND assignee = currentUser()\n    or to find issues with summaries that contain the word "test", you would pass in the following string:\n    summary ~ \'test\'\n    '}, {'mode': 'get_projects', 'name': 'Get Projects', 'description': "\n    This tool is a wrapper around atlassian-python-api's Jira project API, \n    useful when you need to fetch all the projects the user has access to, find out how many projects there are, or as an intermediary step that involv searching by projects. \n    there is no input to this tool.\n    "}, {'mode': 'create_issue', 'name': 'Create Issue', 'description': '\n    This tool is a wrapper around atlassian-python-api\'s Jira issue_create API, useful when you need to create a Jira issue. \n    The input to this tool is a dictionary specifying the fields of the Jira issue, and will be passed into atlassian-python-api\'s Jira `issue_create` function.\n    For example, to create a low priority task called "test issue" with description "test description", you would pass in the following dictionary: \n    {{"summary":
"test issue", "description": "test description", "issuetype": {{"name": "Task"}}, "priority": {{"name": "Low"}}}}\nΒ Β Β  '}, {'mode': 'other', 'name': 'Catch all Jira API call', 'description': '\nΒ Β Β  This tool is a wrapper around atlassian-python-api\'s Jira API.\nΒ Β Β  There are other dedicated tools for fetching all projects, and creating and searching for issues, \nΒ Β Β  use this tool if you need to perform any other actions allowed by the atlassian-python-api Jira API.\nΒ Β Β  The input to this tool is line of python code that calls a function from atlassian-python-api\'s Jira API\nΒ Β Β  For example, to update the summary field of an issue, you would pass in the following string:\nΒ Β Β  self.jira.update_issue_field(key, {{"summary": "New summary"}})\nΒ Β Β  or to find out how many projects are in the Jira instance, you would pass in the following string:\nΒ Β Β  self.jira.projects()\nΒ Β Β  For more information on the Jira API, refer to https://atlassian-python-api.readthedocs.io/jira.html\nΒ Β Β  '}, {'mode': 'create_page', 'name': 'Create confluence page', 'description': 'This tool is a wrapper around atlassian-python-api\'s Confluence \natlassian-python-api API, useful when you need to create a Confluence page. The input to this tool is a dictionary \nspecifying the fields of the Confluence page, and will be passed into atlassian-python-api\'s Confluence `create_page` \nfunction. For example, to create a page in the DEMO space titled "This is the title" with body "This is the body. You can use \n<strong>HTML tags</strong>!", you would pass in the following dictionary: {{"space": "DEMO",
https://api.python.langchain.com/en/latest/modules/utilities.html
897ef9cfbf02-18
you would pass in the following dictionary: {{"space": "DEMO", "title":"This is the \ntitle","body":"This is the body. You can use <strong>HTML tags</strong>!"}} '}]
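Example (an illustrative sketch; the credentials and instance URL are placeholders, the atlassian-python-api package must be installed, and the values may instead be supplied via the JIRA_USERNAME, JIRA_API_TOKEN, and JIRA_INSTANCE_URL environment variables):
from langchain.utilities import JiraAPIWrapper
jira = JiraAPIWrapper(
    jira_username="me@example.com",                     # placeholder account
    jira_api_token="api-token",                         # placeholder token
    jira_instance_url="https://example.atlassian.net",  # placeholder instance
)
# mode selects one of the operations listed above, e.g. a JQL search
issues = jira.run(mode="jql", query="project = Test AND assignee = currentUser()")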
issue_create(query)[source] Parameters query (str) – Return type str list()[source] Return type List[Dict] other(query)[source] Parameters query (str) – Return type str page_create(query)[source] Parameters query (str) – Return type str parse_issues(issues)[source] Parameters issues (Dict) – Return type List[dict] parse_projects(projects)[source] Parameters projects (List[dict]) – Return type List[dict] project()[source] Return type str run(mode, query)[source] Parameters mode (str) – query (str) – Return type str search(query)[source] Parameters query (str) – Return type str class langchain.utilities.LambdaWrapper(*, lambda_client=None, function_name=None, awslambda_tool_name=None, awslambda_tool_description=None)[source] Bases: pydantic.main.BaseModel Wrapper for AWS Lambda SDK. Docs for using: pip install boto3 Create a lambda function using the AWS Console or CLI Run aws configure and enter your AWS credentials Parameters lambda_client (Any) – function_name (Optional[str]) – awslambda_tool_name (Optional[str]) – awslambda_tool_description (Optional[str]) – Return type None attribute awslambda_tool_description: Optional[str] = None attribute awslambda_tool_name: Optional[str] = None attribute function_name: Optional[str] = None run(query)[source] Invoke Lambda function and parse result. Parameters
query (str) – Return type str class langchain.utilities.MaxComputeAPIWrapper(client)[source] Bases: object Interface for querying Alibaba Cloud MaxCompute tables. Parameters client (ODPS) – classmethod from_params(endpoint, project, *, access_id=None, secret_access_key=None)[source] Convenience constructor that builds the odps.ODPS MaxCompute client from the given parameters. Parameters endpoint (str) – MaxCompute endpoint. project (str) – A project is a basic organizational unit of MaxCompute, which is similar to a database. access_id (Optional[str]) – MaxCompute access ID. Should be passed in directly or set as the environment variable MAX_COMPUTE_ACCESS_ID. secret_access_key (Optional[str]) – MaxCompute secret access key. Should be passed in directly or set as the environment variable MAX_COMPUTE_SECRET_ACCESS_KEY. Return type langchain.utilities.max_compute.MaxComputeAPIWrapper lazy_query(query)[source] Parameters query (str) – Return type Iterator[dict] query(query)[source] Parameters query (str) – Return type List[dict] class langchain.utilities.MetaphorSearchAPIWrapper(*, metaphor_api_key, k=10)[source] Bases: pydantic.main.BaseModel Wrapper for Metaphor Search API. Parameters metaphor_api_key (str) – k (int) – Return type None attribute k: int = 10 attribute metaphor_api_key: str [Required]
results(query, num_results, include_domains=None, exclude_domains=None, start_crawl_date=None, end_crawl_date=None, start_published_date=None, end_published_date=None)[source] Run query through Metaphor Search and return metadata. Parameters query (str) – The query to search for. num_results (int) – The number of results to return. include_domains (Optional[List[str]]) – exclude_domains (Optional[List[str]]) – start_crawl_date (Optional[str]) – end_crawl_date (Optional[str]) – start_published_date (Optional[str]) – end_published_date (Optional[str]) – Returns A list of dictionaries with the following keys: title - The title of the result. url - The url of the result. author - Author of the content, if applicable. Otherwise, None. published_date - Estimated date published in YYYY-MM-DD format. Otherwise, None. Return type List[Dict] async results_async(query, num_results, include_domains=None, exclude_domains=None, start_crawl_date=None, end_crawl_date=None, start_published_date=None, end_published_date=None)[source] Get results from the Metaphor Search API asynchronously. Parameters query (str) – num_results (int) – include_domains (Optional[List[str]]) – exclude_domains (Optional[List[str]]) – start_crawl_date (Optional[str]) – end_crawl_date (Optional[str]) – start_published_date (Optional[str]) – end_published_date (Optional[str]) – Return type List[Dict] class langchain.utilities.OpenWeatherMapAPIWrapper(*, owm=None, openweathermap_api_key=None)[source] Bases: pydantic.main.BaseModel
Wrapper for OpenWeatherMap API using PyOWM. Docs for using: Go to OpenWeatherMap and sign up for an API key Save your API key into the OPENWEATHERMAP_API_KEY env variable pip install pyowm Parameters owm (Any) – openweathermap_api_key (Optional[str]) – Return type None attribute openweathermap_api_key: Optional[str] = None attribute owm: Any = None run(location)[source] Get the current weather information for a specified location. Parameters location (str) – Return type str class langchain.utilities.PowerBIDataset(*, dataset_id, table_names, group_id=None, credential=None, token=None, impersonated_user_name=None, sample_rows_in_table_info=1, schemas=None, aiosession=None)[source] Bases: pydantic.main.BaseModel Create PowerBI engine from dataset ID and credential or token. Use either the credential or a supplied token to authenticate. If both are supplied, the credential is used to generate a token. The impersonated_user_name is the UPN of a user to be impersonated. If the model is not RLS enabled, this will be ignored. Parameters dataset_id (str) – table_names (List[str]) – group_id (Optional[str]) – credential (Optional[TokenCredential]) – token (Optional[str]) – impersonated_user_name (Optional[str]) – sample_rows_in_table_info (langchain.utilities.powerbi.ConstrainedIntValue) – schemas (Dict[str, str]) – aiosession (Optional[aiohttp.client.ClientSession]) – Return type None
attribute aiosession: Optional[aiohttp.ClientSession] = None attribute credential: Optional[TokenCredential] = None attribute dataset_id: str [Required] attribute group_id: Optional[str] = None attribute impersonated_user_name: Optional[str] = None attribute sample_rows_in_table_info: int = 1 Constraints exclusiveMinimum = 0 maximum = 10 attribute schemas: Dict[str, str] [Optional] attribute table_names: List[str] [Required] attribute token: Optional[str] = None async aget_table_info(table_names=None)[source] Get information about specified tables. Parameters table_names (Optional[Union[List[str], str]]) – Return type str async arun(command)[source] Execute a DAX command and return the result asynchronously. Parameters command (str) – Return type Any get_schemas()[source] Get the available schemas. Return type str get_table_info(table_names=None)[source] Get information about specified tables. Parameters table_names (Optional[Union[List[str], str]]) – Return type str get_table_names()[source] Get names of tables available. Return type Iterable[str] run(command)[source] Execute a DAX command and return JSON representing the results. Parameters command (str) – Return type Any property headers: Dict[str, str] Get the token. property request_url: str Get the request url. property table_info: str Information about all tables in the database.
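Example (an illustrative sketch; the dataset ID and table name are placeholders, and authentication goes through an azure-identity credential or a pre-fetched bearer token):
from azure.identity import DefaultAzureCredential
from langchain.utilities import PowerBIDataset
dataset = PowerBIDataset(
    dataset_id="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",  # placeholder dataset ID
    table_names=["Sales"],                              # placeholder table
    credential=DefaultAzureCredential(),
)
print(dataset.get_table_info("Sales"))
# run() executes a DAX command against the dataset and returns the JSON result
rows = dataset.run("EVALUATE TOPN(3, Sales)")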
class langchain.utilities.PubMedAPIWrapper(*, top_k_results=3, load_max_docs=25, doc_content_chars_max=2000, load_all_available_meta=False, email='your_email@example.com', base_url_esearch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?', base_url_efetch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?', max_retry=5, sleep_time=0.2, ARXIV_MAX_QUERY_LENGTH=300)[source] Bases: pydantic.main.BaseModel Wrapper around PubMed API. This wrapper will use the PubMed API to conduct searches and fetch document summaries. By default, it will return the document summaries of the top-k results of an input search. Parameters top_k_results (int) – number of top-scored documents used for the PubMed tool load_max_docs (int) – a limit to the number of loaded documents load_all_available_meta (bool) – if True: the metadata of the loaded Documents gets all available meta info (see https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch) if False: the metadata gets only the most informative fields. doc_content_chars_max (int) – email (str) – base_url_esearch (str) – base_url_efetch (str) – max_retry (int) – sleep_time (float) – ARXIV_MAX_QUERY_LENGTH (int) – Return type None attribute doc_content_chars_max: int = 2000 attribute email: str = 'your_email@example.com' attribute load_all_available_meta: bool = False
attribute load_max_docs: int = 25 attribute top_k_results: int = 3 load(query)[source] Search PubMed for documents matching the query. Return a list of dictionaries containing the document metadata. Parameters query (str) – Return type List[dict] load_docs(query)[source] Parameters query (str) – Return type List[langchain.schema.Document] retrieve_article(uid, webenv)[source] Parameters uid (str) – webenv (str) – Return type dict run(query)[source] Run PubMed search and get the article meta information. See https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch It uses only the most informative fields of article meta information. Parameters query (str) – Return type str class langchain.utilities.PythonREPL(*, _globals=None, _locals=None)[source] Bases: pydantic.main.BaseModel Simulates a standalone Python REPL. Parameters _globals (Optional[Dict]) – _locals (Optional[Dict]) – Return type None attribute globals: Optional[Dict] [Optional] (alias '_globals') attribute locals: Optional[Dict] [Optional] (alias '_locals') run(command)[source] Run a command with its own globals/locals and return anything printed. Parameters command (str) – Return type str pydantic settings langchain.utilities.SceneXplainAPIWrapper[source] Bases: pydantic.env_settings.BaseSettings, pydantic.main.BaseModel Wrapper for SceneXplain API.
In order to set this up, you need an API key for the SceneXplain API. You can obtain a key by following the steps below. - Sign up for a free account at https://scenex.jina.ai/. - Navigate to the API Access page (https://scenex.jina.ai/api) and create a new API key. JSON schema: { "title": "SceneXplainAPIWrapper", "description": "Wrapper for SceneXplain API.\n\nIn order to set this up, you need an API key for the SceneXplain API.\nYou can obtain a key by following the steps below.\n- Sign up for a free account at https://scenex.jina.ai/.\n- Navigate to the API Access page (https://scenex.jina.ai/api)\n and create a new API key.", "type": "object", "properties": { "scenex_api_key": { "title": "Scenex Api Key", "env": "SCENEX_API_KEY", "env_names": "{'scenex_api_key'}", "type": "string" }, "scenex_api_url": { "title": "Scenex Api Url", "default": "https://us-central1-causal-diffusion.cloudfunctions.net/describe", "env_names": "{'scenex_api_url'}", "type": "string" } }, "required": [ "scenex_api_key" ], "additionalProperties": false } Fields scenex_api_key (str) scenex_api_url (str)
attribute scenex_api_key: str [Required] attribute scenex_api_url: str = 'https://us-central1-causal-diffusion.cloudfunctions.net/describe' run(image)[source] Run SceneXplain image explainer. Parameters image (str) – Return type str validator validate_environment » all fields[source] Validate that api key exists in environment. Parameters values (Dict) – Return type Dict class langchain.utilities.SearxSearchWrapper(*, searx_host='', unsecure=False, params=None, headers=None, engines=[], categories=[], query_suffix='', k=10, aiosession=None)[source] Bases: pydantic.main.BaseModel Wrapper for Searx API. To use, you need to provide the searx host by passing the named parameter searx_host or exporting the environment variable SEARX_HOST. In some situations you might want to disable SSL verification, for example if you are running searx locally. You can do this by passing the named parameter unsecure. You can also pass the host url scheme as http to disable SSL. Example from langchain.utilities import SearxSearchWrapper searx = SearxSearchWrapper(searx_host="http://localhost:8888") Example with SSL disabled: from langchain.utilities import SearxSearchWrapper # note the unsecure parameter is not needed if you pass the url scheme as # http searx = SearxSearchWrapper(searx_host="http://localhost:8888", unsecure=True) Parameters searx_host (str) – unsecure (bool) –
params (dict) – headers (Optional[dict]) – engines (Optional[List[str]]) – categories (Optional[List[str]]) – query_suffix (Optional[str]) – k (int) – aiosession (Optional[Any]) – Return type None attribute aiosession: Optional[Any] = None attribute categories: Optional[List[str]] = [] attribute engines: Optional[List[str]] = [] attribute headers: Optional[dict] = None attribute k: int = 10 attribute params: dict [Optional] attribute query_suffix: Optional[str] = '' attribute searx_host: str = '' attribute unsecure: bool = False async aresults(query, num_results, engines=None, query_suffix='', **kwargs)[source] Asynchronously query with json results. Uses aiohttp. See results for more info. Parameters query (str) – num_results (int) – engines (Optional[List[str]]) – query_suffix (Optional[str]) – kwargs (Any) – Return type List[Dict] async arun(query, engines=None, query_suffix='', **kwargs)[source] Asynchronous version of run. Parameters query (str) – engines (Optional[List[str]]) – query_suffix (Optional[str]) – kwargs (Any) – Return type str results(query, num_results, engines=None, categories=None, query_suffix='', **kwargs)[source] Run query through Searx API and return the results with metadata. Parameters query (str) – The query to search for.
query_suffix (Optional[str]) – Extra suffix appended to the query. num_results (int) – Limit the number of results to return. engines (Optional[List[str]]) – List of engines to use for the query. categories (Optional[List[str]]) – List of categories to use for the query. **kwargs – extra parameters to pass to the searx API. kwargs (Any) – Returns A dict with the following keys: {snippet: The description of the result. title: The title of the result. link: The link to the result. engines: The engines used for the result. category: Searx category of the result. } Return type Dict run(query, engines=None, categories=None, query_suffix='', **kwargs)[source] Run query through Searx API and parse results. You can pass any other params to the searx query API. Parameters query (str) – The query to search for. query_suffix (Optional[str]) – Extra suffix appended to the query. engines (Optional[List[str]]) – List of engines to use for the query. categories (Optional[List[str]]) – List of categories to use for the query. **kwargs – extra parameters to pass to the searx API. kwargs (Any) – Returns The result of the query. Return type str Raises ValueError – If an error occurred with the query. Example This will make a query to the qwant engine: from langchain.utilities import SearxSearchWrapper searx = SearxSearchWrapper(searx_host="http://my.searx.host") searx.run("what is the weather in France ?", engine="qwant")
# the same result can be achieved using the `!` syntax of searx # to select the engine using `query_suffix` searx.run("what is the weather in France ?", query_suffix="!qwant") class langchain.utilities.SerpAPIWrapper(*, search_engine=None, params={'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}, serpapi_api_key=None, aiosession=None)[source] Bases: pydantic.main.BaseModel Wrapper around SerpAPI. To use, you should have the google-search-results python package installed, and the environment variable SERPAPI_API_KEY set with your API key, or pass serpapi_api_key as a named parameter to the constructor. Example from langchain.utilities import SerpAPIWrapper serpapi = SerpAPIWrapper() Parameters search_engine (Any) – params (dict) – serpapi_api_key (Optional[str]) – aiosession (Optional[aiohttp.client.ClientSession]) – Return type None attribute aiosession: Optional[aiohttp.client.ClientSession] = None attribute params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'} attribute serpapi_api_key: Optional[str] = None async aresults(query)[source] Use aiohttp to run query through SerpAPI and return the results async. Parameters query (str) – Return type dict async arun(query, **kwargs)[source] Run query through SerpAPI and parse result async. Parameters
query (str) – kwargs (Any) – Return type str get_params(query)[source] Get parameters for SerpAPI. Parameters query (str) – Return type Dict[str, str] results(query)[source] Run query through SerpAPI and return the raw result. Parameters query (str) – Return type dict run(query, **kwargs)[source] Run query through SerpAPI and parse result. Parameters query (str) – kwargs (Any) – Return type str class langchain.utilities.SparkSQL(spark_session=None, catalog=None, schema=None, ignore_tables=None, include_tables=None, sample_rows_in_table_info=3)[source] Bases: object Parameters spark_session (Optional[SparkSession]) – catalog (Optional[str]) – schema (Optional[str]) – ignore_tables (Optional[List[str]]) – include_tables (Optional[List[str]]) – sample_rows_in_table_info (int) – classmethod from_uri(database_uri, engine_args=None, **kwargs)[source] Create a remote Spark Session via Spark connect. For example: SparkSQL.from_uri("sc://localhost:15002") Parameters database_uri (str) – engine_args (Optional[dict]) – kwargs (Any) – Return type langchain.utilities.spark_sql.SparkSQL get_usable_table_names()[source] Get names of tables available. Return type Iterable[str] get_table_info(table_names=None)[source] Parameters table_names (Optional[List[str]]) – Return type str run(command, fetch='all')[source]
Parameters command (str) – fetch (str) – Return type str get_table_info_no_throw(table_names=None)[source] Get information about specified tables. Follows best practices as specified in: Rajkumar et al, 2022 (https://arxiv.org/abs/2204.00498) If sample_rows_in_table_info, the specified number of sample rows will be appended to each table description. This can increase performance as demonstrated in the paper. Parameters table_names (Optional[List[str]]) – Return type str run_no_throw(command, fetch='all')[source] Execute a SQL command and return a string representing the results. If the statement returns rows, a string of the results is returned. If the statement returns no rows, an empty string is returned. If the statement throws an error, the error message is returned. Parameters command (str) – fetch (str) – Return type str class langchain.utilities.TextRequestsWrapper(*, headers=None, aiosession=None)[source] Bases: pydantic.main.BaseModel Lightweight wrapper around the requests library. The main purpose of this wrapper is to always return a text output. Parameters headers (Optional[Dict[str, str]]) – aiosession (Optional[aiohttp.client.ClientSession]) – Return type None attribute aiosession: Optional[aiohttp.client.ClientSession] = None attribute headers: Optional[Dict[str, str]] = None async adelete(url, **kwargs)[source] DELETE the URL and return the text asynchronously. Parameters url (str) – kwargs (Any) –
Return type str async aget(url, **kwargs)[source] GET the URL and return the text asynchronously. Parameters url (str) – kwargs (Any) – Return type str async apatch(url, data, **kwargs)[source] PATCH the URL and return the text asynchronously. Parameters url (str) – data (Dict[str, Any]) – kwargs (Any) – Return type str async apost(url, data, **kwargs)[source] POST to the URL and return the text asynchronously. Parameters url (str) – data (Dict[str, Any]) – kwargs (Any) – Return type str async aput(url, data, **kwargs)[source] PUT the URL and return the text asynchronously. Parameters url (str) – data (Dict[str, Any]) – kwargs (Any) – Return type str delete(url, **kwargs)[source] DELETE the URL and return the text. Parameters url (str) – kwargs (Any) – Return type str get(url, **kwargs)[source] GET the URL and return the text. Parameters url (str) – kwargs (Any) – Return type str patch(url, data, **kwargs)[source] PATCH the URL and return the text. Parameters url (str) – data (Dict[str, Any]) – kwargs (Any) – Return type str post(url, data, **kwargs)[source] POST to the URL and return the text. Parameters url (str) –
data (Dict[str, Any]) – kwargs (Any) – Return type str put(url, data, **kwargs)[source] PUT the URL and return the text. Parameters url (str) – data (Dict[str, Any]) – kwargs (Any) – Return type str property requests: langchain.requests.Requests class langchain.utilities.TwilioAPIWrapper(*, client=None, account_sid=None, auth_token=None, from_number=None)[source] Bases: pydantic.main.BaseModel Messaging Client using Twilio. To use, you should have the twilio python package installed, and the environment variables TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, and TWILIO_FROM_NUMBER set, or pass account_sid, auth_token, and from_number as named parameters to the constructor. Example from langchain.utilities.twilio import TwilioAPIWrapper twilio = TwilioAPIWrapper( account_sid="ACxxx", auth_token="xxx", from_number="+10123456789" ) twilio.run('test', '+12484345508') Parameters client (Any) – account_sid (Optional[str]) – auth_token (Optional[str]) – from_number (Optional[str]) – Return type None attribute account_sid: Optional[str] = None Twilio account string identifier. attribute auth_token: Optional[str] = None Twilio auth token. attribute from_number: Optional[str] = None A Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164) format, an
[alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id), or a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses) that is enabled for the type of message you want to send. Phone numbers or [short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from Twilio also work here. You cannot, for example, spoof messages from a private cell phone number. If you are using messaging_service_sid, this parameter must be empty. run(body, to)[source] Run body through Twilio and respond with message sid. Parameters body (str) – The text of the message you want to send. Can be up to 1,600 characters in length. to (str) – The destination phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164) format for SMS/MMS or [Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses) for other 3rd-party channels. Return type str class langchain.utilities.WikipediaAPIWrapper(*, wiki_client=None, top_k_results=3, lang='en', load_all_available_meta=False, doc_content_chars_max=4000)[source] Bases: pydantic.main.BaseModel Wrapper around WikipediaAPI. To use, you should have the wikipedia python package installed. This wrapper will use the Wikipedia API to conduct searches and fetch page summaries. By default, it will return the page summaries of the top-k results. It limits the Document content by doc_content_chars_max. Parameters wiki_client (Any) – top_k_results (int) –
lang (str) – load_all_available_meta (bool) – doc_content_chars_max (int) – Return type None attribute doc_content_chars_max: int = 4000 attribute lang: str = 'en' attribute load_all_available_meta: bool = False attribute top_k_results: int = 3 load(query)[source] Run Wikipedia search and get the article text plus the meta information. Returns a list of documents. Parameters query (str) – Return type List[langchain.schema.Document] run(query)[source] Run Wikipedia search and get page summaries. Parameters query (str) – Return type str class langchain.utilities.WolframAlphaAPIWrapper(*, wolfram_client=None, wolfram_alpha_appid=None)[source] Bases: pydantic.main.BaseModel Wrapper for Wolfram Alpha. Docs for using: Go to Wolfram Alpha and sign up for a developer account Create an app and get your APP ID Save your APP ID into WOLFRAM_ALPHA_APPID env variable pip install wolframalpha Parameters wolfram_client (Any) – wolfram_alpha_appid (Optional[str]) – Return type None attribute wolfram_alpha_appid: Optional[str] = None run(query)[source] Run query through WolframAlpha and parse result. Parameters query (str) – Return type str
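Example (illustrative; assumes the wolframalpha package is installed and a developer APP ID is exported as WOLFRAM_ALPHA_APPID):
from langchain.utilities import WolframAlphaAPIWrapper
wolfram = WolframAlphaAPIWrapper()  # picks up WOLFRAM_ALPHA_APPID from the environment
print(wolfram.run("What is 2x + 5 = -3x + 7?"))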
class langchain.utilities.ZapierNLAWrapper(*, zapier_nla_api_key, zapier_nla_oauth_access_token, zapier_nla_api_base='https://nla.zapier.com/api/v1/')[source] Bases: pydantic.main.BaseModel Wrapper for Zapier NLA. Full docs here: https://nla.zapier.com/start/ This wrapper supports both API Key and OAuth Credential auth methods. API Key is the fastest way to get started using this wrapper. Call this wrapper with either zapier_nla_api_key or zapier_nla_oauth_access_token arguments, or set the ZAPIER_NLA_API_KEY environment variable. If both arguments are set, the Access Token will take precedence. For use-cases where LangChain + Zapier NLA is powering a user-facing application, and LangChain needs access to the end-user’s connected accounts on Zapier.com, you’ll need to use OAuth. Review the full docs above to learn how to create your own provider and generate credentials. Parameters zapier_nla_api_key (str) – zapier_nla_oauth_access_token (str) – zapier_nla_api_base (str) – Return type None attribute zapier_nla_api_base: str = 'https://nla.zapier.com/api/v1/' attribute zapier_nla_api_key: str [Required] attribute zapier_nla_oauth_access_token: str [Required] async alist()[source] Returns a list of all exposed (enabled) actions associated with current user (associated with the set api_key). Change your exposed actions here: https://nla.zapier.com/demo/start/
The return list can be empty if no actions exposed. Else will contain a list of action objects: [{"id": str, "description": str, "params": Dict[str, str] }] params will always contain an instructions key, the only required param. All others are optional and if provided will override any AI guesses (see "understanding the AI guessing flow" here: https://nla.zapier.com/api/v1/docs) Return type List[Dict] async alist_as_str()[source] Same as list, but returns a stringified version of the JSON for inserting back into an LLM. Return type str async apreview(action_id, instructions, params=None)[source] Same as run, but instead of actually executing the action, will instead return a preview of params that have been guessed by the AI in case you need to explicitly review before executing. Parameters action_id (str) – instructions (str) – params (Optional[Dict]) – Return type Dict async apreview_as_str(*args, **kwargs)[source] Same as preview, but returns a stringified version of the JSON for inserting back into an LLM. Return type str async arun(action_id, instructions, params=None)[source] Executes an action that is identified by action_id, must be exposed (enabled) by the current user (associated with the set api_key). Change your exposed actions here: https://nla.zapier.com/demo/start/ The return JSON is guaranteed to be less than ~500 words (350 tokens) making it safe to inject into the prompt of another LLM
call. Parameters action_id (str) – instructions (str) – params (Optional[Dict]) – Return type Dict async arun_as_str(*args, **kwargs)[source] Same as run, but returns a stringified version of the JSON for inserting back into an LLM. Return type str list()[source] Returns a list of all exposed (enabled) actions associated with current user (associated with the set api_key). Change your exposed actions here: https://nla.zapier.com/demo/start/ The return list can be empty if no actions exposed. Else will contain a list of action objects: [{"id": str, "description": str, "params": Dict[str, str] }] params will always contain an instructions key, the only required param. All others are optional and if provided will override any AI guesses (see "understanding the AI guessing flow" here: https://nla.zapier.com/docs/using-the-api#ai-guessing) Return type List[Dict] list_as_str()[source] Same as list, but returns a stringified version of the JSON for inserting back into an LLM. Return type str preview(action_id, instructions, params=None)[source] Same as run, but instead of actually executing the action, will instead return a preview of params that have been guessed by the AI in case you need to explicitly review before executing. Parameters action_id (str) – instructions (str) – params (Optional[Dict]) – Return type Dict preview_as_str(*args, **kwargs)[source] Same as preview, but returns a stringified version of the JSON for
inserting back into an LLM. Return type str run(action_id, instructions, params=None)[source] Executes an action that is identified by action_id, must be exposed (enabled) by the current user (associated with the set api_key). Change your exposed actions here: https://nla.zapier.com/demo/start/ The return JSON is guaranteed to be less than ~500 words (350 tokens) making it safe to inject into the prompt of another LLM call. Parameters action_id (str) – instructions (str) – params (Optional[Dict]) – Return type Dict run_as_str(*args, **kwargs)[source] Same as run, but returns a stringified version of the JSON for inserting back into an LLM. Return type str
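Example (an illustrative sketch; assumes ZAPIER_NLA_API_KEY is set and at least one action has been exposed for the account):
from langchain.utilities import ZapierNLAWrapper
zapier = ZapierNLAWrapper()  # reads ZAPIER_NLA_API_KEY from the environment
actions = zapier.list()      # [{"id": ..., "description": ..., "params": ...}, ...]
if actions:
    # preview the AI-guessed params first, then execute with the same instructions
    zapier.preview(actions[0]["id"], instructions="Send a test message to myself")
    zapier.run(actions[0]["id"], instructions="Send a test message to myself")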
Prompt Templates Prompt template classes. class langchain.prompts.AIMessagePromptTemplate(*, prompt, additional_kwargs=None)[source] Bases: langchain.prompts.chat.BaseStringMessagePromptTemplate Parameters prompt (langchain.prompts.base.StringPromptTemplate) – additional_kwargs (dict) – Return type None format(**kwargs)[source] To a BaseMessage. Parameters kwargs (Any) – Return type langchain.schema.BaseMessage class langchain.prompts.BaseChatPromptTemplate(*, input_variables, output_parser=None, partial_variables=None)[source] Bases: langchain.prompts.base.BasePromptTemplate, abc.ABC Parameters input_variables (List[str]) – output_parser (Optional[langchain.schema.BaseOutputParser]) – partial_variables (Mapping[str, Union[str, Callable[[], str]]]) – Return type None format(**kwargs)[source] Format the prompt with the inputs. Parameters kwargs (Any) – Any arguments to be passed to the prompt template. Returns A formatted string. Return type str Example: prompt.format(variable1="foo") abstract format_messages(**kwargs)[source] Format kwargs into a list of messages. Parameters kwargs (Any) – Return type List[langchain.schema.BaseMessage] format_prompt(**kwargs)[source] Create Chat Messages. Parameters kwargs (Any) – Return type langchain.schema.PromptValue class langchain.prompts.BasePromptTemplate(*, input_variables, output_parser=None, partial_variables=None)[source] Bases: langchain.load.serializable.Serializable, abc.ABC Base class for all prompt templates, returning a prompt. Parameters input_variables (List[str]) –
output_parser (Optional[langchain.schema.BaseOutputParser]) – partial_variables (Mapping[str, Union[str, Callable[[], str]]]) – Return type None attribute input_variables: List[str] [Required] A list of the names of the variables the prompt template expects. attribute output_parser: Optional[langchain.schema.BaseOutputParser] = None How to parse the output of calling an LLM on this formatted prompt. attribute partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional] dict(**kwargs)[source] Return dictionary representation of prompt. Parameters kwargs (Any) – Return type Dict abstract format(**kwargs)[source] Format the prompt with the inputs. Parameters kwargs (Any) – Any arguments to be passed to the prompt template. Returns A formatted string. Return type str Example: prompt.format(variable1="foo") abstract format_prompt(**kwargs)[source] Create Chat Messages. Parameters kwargs (Any) – Return type langchain.schema.PromptValue partial(**kwargs)[source] Return a partial of the prompt template. Parameters kwargs (Union[str, Callable[[], str]]) – Return type langchain.prompts.base.BasePromptTemplate save(file_path)[source] Save the prompt. Parameters file_path (Union[pathlib.Path, str]) – Path to directory to save prompt to. Return type None Example: prompt.save(file_path="path/prompt.yaml") property lc_serializable: bool Return whether or not the class is serializable.
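For example, partial() can pre-fill a subset of the input variables ahead of time (an illustrative sketch using the concrete PromptTemplate subclass documented below):
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(
    input_variables=["adjective", "content"],
    template="Tell me a {adjective} joke about {content}.",
)
partial_prompt = prompt.partial(adjective="funny")  # only "content" remains unbound
print(partial_prompt.format(content="chickens"))    # -> Tell me a funny joke about chickens.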
class langchain.prompts.ChatMessagePromptTemplate(*, prompt, additional_kwargs=None, role)[source] Bases: langchain.prompts.chat.BaseStringMessagePromptTemplate Parameters prompt (langchain.prompts.base.StringPromptTemplate) – additional_kwargs (dict) – role (str) – Return type None attribute role: str [Required] format(**kwargs)[source] To a BaseMessage. Parameters kwargs (Any) – Return type langchain.schema.BaseMessage class langchain.prompts.ChatPromptTemplate(*, input_variables, output_parser=None, partial_variables=None, messages)[source] Bases: langchain.prompts.chat.BaseChatPromptTemplate, abc.ABC Parameters input_variables (List[str]) – output_parser (Optional[langchain.schema.BaseOutputParser]) – partial_variables (Mapping[str, Union[str, Callable[[], str]]]) – messages (List[Union[langchain.prompts.chat.BaseMessagePromptTemplate, langchain.schema.BaseMessage]]) – Return type None attribute input_variables: List[str] [Required] A list of the names of the variables the prompt template expects. attribute messages: List[Union[BaseMessagePromptTemplate, BaseMessage]] [Required] format(**kwargs)[source] Format the prompt with the inputs. Parameters kwargs (Any) – Any arguments to be passed to the prompt template. Returns A formatted string. Return type str Example: prompt.format(variable1="foo") format_messages(**kwargs)[source] Format kwargs into a list of messages. Parameters kwargs (Any) – Return type
List[langchain.schema.BaseMessage] classmethod from_messages(messages)[source] Parameters messages (Sequence[Union[langchain.prompts.chat.BaseMessagePromptTemplate, langchain.schema.BaseMessage]]) – Return type langchain.prompts.chat.ChatPromptTemplate classmethod from_role_strings(string_messages)[source] Parameters string_messages (List[Tuple[str, str]]) – Return type langchain.prompts.chat.ChatPromptTemplate classmethod from_strings(string_messages)[source] Parameters string_messages (List[Tuple[Type[langchain.prompts.chat.BaseMessagePromptTemplate], str]]) – Return type langchain.prompts.chat.ChatPromptTemplate classmethod from_template(template, **kwargs)[source] Parameters template (str) – kwargs (Any) – Return type langchain.prompts.chat.ChatPromptTemplate partial(**kwargs)[source] Return a partial of the prompt template. Parameters kwargs (Union[str, Callable[[], str]]) – Return type langchain.prompts.base.BasePromptTemplate save(file_path)[source] Save the prompt. Parameters file_path (Union[pathlib.Path, str]) – Path to directory to save prompt to. Return type None Example: prompt.save(file_path="path/prompt.yaml") class langchain.prompts.FewShotPromptTemplate(*, input_variables, output_parser=None, partial_variables=None, examples=None, example_selector=None, example_prompt, suffix, example_separator='\n\n', prefix='', template_format='f-string', validate_template=True)[source] Bases: langchain.prompts.base.StringPromptTemplate Prompt template that contains few shot examples. Parameters input_variables (List[str]) –
output_parser (Optional[langchain.schema.BaseOutputParser]) – partial_variables (Mapping[str, Union[str, Callable[[], str]]]) – examples (Optional[List[dict]]) – example_selector (Optional[langchain.prompts.example_selector.base.BaseExampleSelector]) – example_prompt (langchain.prompts.prompt.PromptTemplate) – suffix (str) – example_separator (str) – prefix (str) – template_format (str) – validate_template (bool) – Return type None attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required] PromptTemplate used to format an individual example. attribute example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None ExampleSelector to choose the examples to format into the prompt. Either this or examples should be provided. attribute example_separator: str = '\n\n' String separator used to join the prefix, the examples, and suffix. attribute examples: Optional[List[dict]] = None Examples to format into the prompt. Either this or example_selector should be provided. attribute input_variables: List[str] [Required] A list of the names of the variables the prompt template expects. attribute prefix: str = '' A prompt template string to put before the examples. attribute suffix: str [Required] A prompt template string to put after the examples. attribute template_format: str = 'f-string' The format of the prompt template. Options are: 'f-string', 'jinja2'. attribute validate_template: bool = True Whether or not to try validating the template.
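Example (an illustrative sketch; the antonym data is made up for demonstration):
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)
few_shot = FewShotPromptTemplate(
    examples=[{"word": "happy", "antonym": "sad"},
              {"word": "tall", "antonym": "short"}],
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
)
print(few_shot.format(input="big"))  # prefix, examples joined by example_separator, then suffix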
dict(**kwargs)[source] Return a dictionary of the prompt. Parameters kwargs (Any) – Return type Dict format(**kwargs)[source] Format the prompt with the inputs. Parameters kwargs (Any) – Any arguments to be passed to the prompt template. Returns A formatted string. Return type str Example: prompt.format(variable1="foo") property lc_serializable: bool Return whether or not the class is serializable. class langchain.prompts.FewShotPromptWithTemplates(*, input_variables, output_parser=None, partial_variables=None, examples=None, example_selector=None, example_prompt, suffix, example_separator='\n\n', prefix=None, template_format='f-string', validate_template=True)[source] Bases: langchain.prompts.base.StringPromptTemplate Prompt template that contains few shot examples. Parameters input_variables (List[str]) – output_parser (Optional[langchain.schema.BaseOutputParser]) – partial_variables (Mapping[str, Union[str, Callable[[], str]]]) – examples (Optional[List[dict]]) – example_selector (Optional[langchain.prompts.example_selector.base.BaseExampleSelector]) – example_prompt (langchain.prompts.prompt.PromptTemplate) – suffix (langchain.prompts.base.StringPromptTemplate) – example_separator (str) – prefix (Optional[langchain.prompts.base.StringPromptTemplate]) – template_format (str) – validate_template (bool) – Return type None attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required] PromptTemplate used to format an individual example.
attribute example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None ExampleSelector to choose the examples to format into the prompt. Either this or examples should be provided. attribute example_separator: str = '\n\n' String separator used to join the prefix, the examples, and suffix. attribute examples: Optional[List[dict]] = None Examples to format into the prompt. Either this or example_selector should be provided. attribute input_variables: List[str] [Required] A list of the names of the variables the prompt template expects. attribute prefix: Optional[langchain.prompts.base.StringPromptTemplate] = None A PromptTemplate to put before the examples. attribute suffix: langchain.prompts.base.StringPromptTemplate [Required] A PromptTemplate to put after the examples. attribute template_format: str = 'f-string' The format of the prompt template. Options are: 'f-string', 'jinja2'. attribute validate_template: bool = True Whether or not to try validating the template. dict(**kwargs)[source] Return a dictionary of the prompt. Parameters kwargs (Any) – Return type Dict format(**kwargs)[source] Format the prompt with the inputs. Parameters kwargs (Any) – Any arguments to be passed to the prompt template. Returns A formatted string. Return type str Example: prompt.format(variable1="foo") class langchain.prompts.HumanMessagePromptTemplate(*, prompt, additional_kwargs=None)[source] Bases: langchain.prompts.chat.BaseStringMessagePromptTemplate Parameters prompt (langchain.prompts.base.StringPromptTemplate) – additional_kwargs (dict) –
Return type None format(**kwargs)[source] To a BaseMessage. Parameters kwargs (Any) – Return type langchain.schema.BaseMessage class langchain.prompts.LengthBasedExampleSelector(*, examples, example_prompt, get_text_length=<function _get_length_based>, max_length=2048, example_text_lengths=[])[source] Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel Select examples based on length. Parameters examples (List[dict]) – example_prompt (langchain.prompts.prompt.PromptTemplate) – get_text_length (Callable[[str], int]) – max_length (int) – example_text_lengths (List[int]) – Return type None attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required] Prompt template used to format the examples. attribute examples: List[dict] [Required] A list of the examples that the prompt template expects. attribute get_text_length: Callable[[str], int] = <function _get_length_based> Function to measure prompt length. Defaults to word count. attribute max_length: int = 2048 Max length for the prompt, beyond which examples are cut. add_example(example)[source] Add new example to list. Parameters example (Dict[str, str]) – Return type None select_examples(input_variables)[source] Select which examples to use based on the input lengths. Parameters
input_variables (Dict[str, str]) – Return type List[dict] class langchain.prompts.MaxMarginalRelevanceExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None, fetch_k=20)[source] Bases: langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector ExampleSelector that selects examples based on Max Marginal Relevance. This was shown to improve performance in this paper: https://arxiv.org/pdf/2211.13892.pdf Parameters vectorstore (langchain.vectorstores.base.VectorStore) – k (int) – example_keys (Optional[List[str]]) – input_keys (Optional[List[str]]) – fetch_k (int) – Return type None attribute example_keys: Optional[List[str]] = None Optional keys to filter examples to. attribute fetch_k: int = 20 Number of examples to fetch to rerank. attribute input_keys: Optional[List[str]] = None Optional keys to filter input to. If provided, the search is based on the input variables instead of all variables. attribute k: int = 4 Number of examples to select. attribute vectorstore: langchain.vectorstores.base.VectorStore [Required] VectorStore that contains information about examples. classmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, fetch_k=20, **vectorstore_cls_kwargs)[source] Create k-shot example selector using example list and embeddings. Reshuffles examples dynamically based on query similarity. Parameters examples (List[dict]) – List of examples to use in the prompt. embeddings (langchain.embeddings.base.Embeddings) – An initialized embedding API interface, e.g. OpenAIEmbeddings(). vectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) – A vector store DB interface class, e.g. FAISS. k (int) – Number of examples to select input_keys (Optional[List[str]]) – If provided, the search is based on the input variables instead of all variables. vectorstore_cls_kwargs (Any) – optional kwargs containing url for vector store fetch_k (int) – Returns The ExampleSelector instantiated, backed by a vector store. Return type langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector select_examples(input_variables)[source] Select which examples to use based on semantic similarity. Parameters input_variables (Dict[str, str]) – Return type List[dict] class langchain.prompts.MessagesPlaceholder(*, variable_name)[source] Bases: langchain.prompts.chat.BaseMessagePromptTemplate Prompt template that assumes the variable is already a list of messages. Parameters variable_name (str) – Return type None attribute variable_name: str [Required] format_messages(**kwargs)[source] To a BaseMessage. Parameters kwargs (Any) – Return type List[langchain.schema.BaseMessage] property input_variables: List[str] Input variables for this prompt template. class langchain.prompts.NGramOverlapExampleSelector(*, examples, example_prompt, threshold=-1.0)[source] Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel
Select and order examples based on ngram overlap score (sentence_bleu score). https://www.nltk.org/_modules/nltk/translate/bleu_score.html https://aclanthology.org/P02-1040.pdf Parameters examples (List[dict]) – example_prompt (langchain.prompts.prompt.PromptTemplate) – threshold (float) – Return type None attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required] Prompt template used to format the examples. attribute examples: List[dict] [Required] A list of the examples that the prompt template expects. attribute threshold: float = -1.0 Threshold at which algorithm stops. Set to -1.0 by default. For negative threshold: select_examples sorts examples by ngram_overlap_score, but excludes none. For threshold greater than 1.0: select_examples excludes all examples, and returns an empty list. For threshold equal to 0.0: select_examples sorts examples by ngram_overlap_score, and excludes examples with no ngram overlap with input. add_example(example)[source] Add new example to list. Parameters example (Dict[str, str]) – Return type None select_examples(input_variables)[source] Return list of examples sorted by ngram_overlap_score with input. Descending order. Excludes any examples with ngram_overlap_score less than or equal to threshold. Parameters input_variables (Dict[str, str]) – Return type List[dict] class langchain.prompts.PipelinePromptTemplate(*, input_variables, output_parser=None, partial_variables=None, final_prompt, pipeline_prompts)[source] Bases: langchain.prompts.base.BasePromptTemplate A prompt template for composing multiple prompts together.
This can be useful when you want to reuse parts of prompts. A PipelinePrompt consists of two main parts: final_prompt: This is the final prompt that is returned pipeline_prompts: This is a list of tuples, consisting of a string (name) and a Prompt Template. Each PromptTemplate will be formatted and then passed to future prompt templates as a variable with the same name as name Parameters input_variables (List[str]) – output_parser (Optional[langchain.schema.BaseOutputParser]) – partial_variables (Mapping[str, Union[str, Callable[[], str]]]) – final_prompt (langchain.prompts.base.BasePromptTemplate) – pipeline_prompts (List[Tuple[str, langchain.prompts.base.BasePromptTemplate]]) – Return type None attribute final_prompt: langchain.prompts.base.BasePromptTemplate [Required] attribute pipeline_prompts: List[Tuple[str, langchain.prompts.base.BasePromptTemplate]] [Required] format(**kwargs)[source] Format the prompt with the inputs. Parameters kwargs (Any) – Any arguments to be passed to the prompt template. Returns A formatted string. Return type str Example: prompt.format(variable1="foo") format_prompt(**kwargs)[source] Create Chat Messages. Parameters kwargs (Any) – Return type langchain.schema.PromptValue langchain.prompts.Prompt alias of langchain.prompts.prompt.PromptTemplate class langchain.prompts.PromptTemplate(*, input_variables, output_parser=None, partial_variables=None, template, template_format='f-string', validate_template=True)[source] Bases: langchain.prompts.base.StringPromptTemplate Schema to represent a prompt for an LLM. Example
from langchain import PromptTemplate prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}") Parameters input_variables (List[str]) – output_parser (Optional[langchain.schema.BaseOutputParser]) – partial_variables (Mapping[str, Union[str, Callable[[], str]]]) – template (str) – template_format (str) – validate_template (bool) – Return type None attribute input_variables: List[str] [Required] A list of the names of the variables the prompt template expects. attribute template: str [Required] The prompt template. attribute template_format: str = 'f-string' The format of the prompt template. Options are: 'f-string', 'jinja2'. attribute validate_template: bool = True Whether or not to try validating the template. format(**kwargs)[source] Format the prompt with the inputs. Parameters kwargs (Any) – Any arguments to be passed to the prompt template. Returns A formatted string. Return type str Example: prompt.format(variable1="foo") classmethod from_examples(examples, suffix, input_variables, example_separator='\n\n', prefix='', **kwargs)[source] Take examples in list format with prefix and suffix to create a prompt. Intended to be used as a way to dynamically create a prompt from examples. Parameters examples (List[str]) – List of examples to use in the prompt. suffix (str) – String to go after the list of examples. Should generally set up the user's input. input_variables (List[str]) – A list of variable names the final prompt template will expect. example_separator (str) – The separator to use in between examples. Defaults
to two new line characters. prefix (str) – String that should go before any examples. Generally includes instructions. Defaults to an empty string. kwargs (Any) – Returns The final prompt generated. Return type langchain.prompts.prompt.PromptTemplate classmethod from_file(template_file, input_variables, **kwargs)[source] Load a prompt from a file. Parameters template_file (Union[str, pathlib.Path]) – The path to the file containing the prompt template. input_variables (List[str]) – A list of variable names the final prompt template will expect. kwargs (Any) – Returns The prompt loaded from the file. Return type langchain.prompts.prompt.PromptTemplate classmethod from_template(template, **kwargs)[source] Load a prompt template from a template. Parameters template (str) – kwargs (Any) – Return type langchain.prompts.prompt.PromptTemplate property lc_attributes: Dict[str, Any] Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. class langchain.prompts.SemanticSimilarityExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None)[source] Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel Example selector that selects examples based on SemanticSimilarity. Parameters vectorstore (langchain.vectorstores.base.VectorStore) – k (int) – example_keys (Optional[List[str]]) – input_keys (Optional[List[str]]) – Return type None attribute example_keys: Optional[List[str]] = None
Optional keys to filter examples to. attribute input_keys: Optional[List[str]] = None Optional keys to filter input to. If provided, the search is based on the input variables instead of all variables. attribute k: int = 4 Number of examples to select. attribute vectorstore: langchain.vectorstores.base.VectorStore [Required] VectorStore that contains information about examples. add_example(example)[source] Add new example to vectorstore. Parameters example (Dict[str, str]) – Return type str classmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, **vectorstore_cls_kwargs)[source] Create k-shot example selector using example list and embeddings. Reshuffles examples dynamically based on query similarity. Parameters examples (List[dict]) – List of examples to use in the prompt. embeddings (langchain.embeddings.base.Embeddings) – An initialized embedding API interface, e.g. OpenAIEmbeddings(). vectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) – A vector store DB interface class, e.g. FAISS. k (int) – Number of examples to select input_keys (Optional[List[str]]) – If provided, the search is based on the input variables instead of all variables. vectorstore_cls_kwargs (Any) – optional kwargs containing url for vector store Returns The ExampleSelector instantiated, backed by a vector store. Return type langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector select_examples(input_variables)[source] Select which examples to use based on semantic similarity. Parameters input_variables (Dict[str, str]) – Return type List[dict]
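Example (an illustrative sketch; assumes the faiss extra is installed and an OpenAI API key is configured for the embeddings):
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import SemanticSimilarityExampleSelector
from langchain.vectorstores import FAISS
selector = SemanticSimilarityExampleSelector.from_examples(
    examples=[{"input": "happy", "output": "sad"},
              {"input": "tall", "output": "short"}],
    embeddings=OpenAIEmbeddings(),
    vectorstore_cls=FAISS,
    k=1,
)
print(selector.select_examples({"input": "joyful"}))  # most semantically similar example(s)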
class langchain.prompts.StringPromptTemplate(*, input_variables, output_parser=None, partial_variables=None)[source]
Bases: langchain.prompts.base.BasePromptTemplate, abc.ABC
String prompt should expose the format method, returning a prompt.
Parameters
input_variables (List[str]) –
output_parser (Optional[langchain.schema.BaseOutputParser]) –
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) –
Return type
None
format_prompt(**kwargs)[source]
Format the prompt and return a PromptValue.
Parameters
kwargs (Any) –
Return type
langchain.schema.PromptValue
class langchain.prompts.SystemMessagePromptTemplate(*, prompt, additional_kwargs=None)[source]
Bases: langchain.prompts.chat.BaseStringMessagePromptTemplate
Parameters
prompt (langchain.prompts.base.StringPromptTemplate) –
additional_kwargs (dict) –
Return type
None
format(**kwargs)[source]
Format the prompt into a BaseMessage.
Parameters
kwargs (Any) –
Return type
langchain.schema.BaseMessage
langchain.prompts.load_prompt(path)[source]
Unified method for loading a prompt from LangChainHub or the local filesystem.
Parameters
path (Union[str, pathlib.Path]) –
Return type
langchain.prompts.base.BasePromptTemplate
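For instance, a template serialized to disk can be reloaded in one call; a minimal sketch, where the file name is hypothetical and load_prompt infers the format from the .json or .yaml extension:
Example
from langchain.prompts import PromptTemplate, load_prompt

prompt = PromptTemplate.from_template("Tell me a joke about {topic}")
prompt.save("joke_prompt.json")  # serialize to a local file

reloaded = load_prompt("joke_prompt.json")
print(reloaded.format(topic="chickens"))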
Tools Core toolkit implementations. class langchain.tools.AIPluginTool(*, name, description, args_schema=<class 'langchain.tools.plugin.AIPluginToolSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, plugin, api_spec)[source] Bases: langchain.tools.base.BaseTool Parameters name (str) – description (str) – args_schema (Type[langchain.tools.plugin.AIPluginToolSchema]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – plugin (langchain.tools.plugin.AIPlugin) – api_spec (str) – Return type None attribute api_spec: str [Required] attribute args_schema: Type[AIPluginToolSchema] = <class 'langchain.tools.plugin.AIPluginToolSchema'> Pydantic model class to validate and parse the tool’s input arguments. attribute plugin: AIPlugin [Required] classmethod from_plugin_url(url)[source] Parameters url (str) – Return type langchain.tools.plugin.AIPluginTool class langchain.tools.APIOperation(*, operation_id, description=None, base_url, path, method, properties, request_body=None)[source] Bases: pydantic.main.BaseModel A model for a single API operation. Parameters operation_id (str) – description (Optional[str]) – base_url (str) – path (str) –
base_url (str) –
path (str) –
method (langchain.utilities.openapi.HTTPVerb) –
properties (Sequence[langchain.tools.openapi.utils.api_models.APIProperty]) –
request_body (Optional[langchain.tools.openapi.utils.api_models.APIRequestBody]) –
Return type
None
attribute base_url: str [Required]
The base URL of the operation.
attribute description: Optional[str] = None
The description of the operation.
attribute method: langchain.utilities.openapi.HTTPVerb [Required]
The HTTP method of the operation.
attribute operation_id: str [Required]
The unique identifier of the operation.
attribute path: str [Required]
The path of the operation.
attribute properties: Sequence[langchain.tools.openapi.utils.api_models.APIProperty] [Required]
attribute request_body: Optional[langchain.tools.openapi.utils.api_models.APIRequestBody] = None
The request body of the operation.
classmethod from_openapi_spec(spec, path, method)[source]
Create an APIOperation from an OpenAPI spec.
Parameters
spec (langchain.utilities.openapi.OpenAPISpec) –
path (str) –
method (str) –
Return type
langchain.tools.openapi.utils.api_models.APIOperation
classmethod from_openapi_url(spec_url, path, method)[source]
Create an APIOperation from an OpenAPI URL.
Parameters
spec_url (str) –
path (str) –
method (str) –
Return type
langchain.tools.openapi.utils.api_models.APIOperation
to_typescript()[source]
Get a TypeScript string representation of the operation.
Return type
str
static ts_type_from_python(type_)[source]
Parameters
type_ (Union[str, Type, tuple, None, enum.Enum]) –
Return type
str
property body_params: List[str]
property path_params: List[str]
property query_params: List[str]
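As a sketch of the classmethods above, a single operation can be loaded from a hosted spec and rendered as TypeScript for use in a prompt; the URL and path here are hypothetical:
Example
from langchain.tools import APIOperation

# Load one GET operation from a (hypothetical) hosted OpenAPI spec.
operation = APIOperation.from_openapi_url(
    "https://example.com/openapi.yaml", "/pets", "get"
)
# Render the operation's inputs as a TypeScript type definition.
print(operation.to_typescript())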
class langchain.tools.ArxivQueryRun(*, name='arxiv', description='A wrapper around Arxiv.org Useful for when you need to answer questions about Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance, Statistics, Electrical Engineering, and Economics from scientific articles on arxiv.org. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to search using the Arxiv API.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
api_wrapper (langchain.utilities.arxiv.ArxivAPIWrapper) –
Return type
None
attribute api_wrapper: langchain.utilities.arxiv.ArxivAPIWrapper [Optional]
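A minimal usage sketch; the query is illustrative, and the arxiv Python package must be installed:
Example
from langchain.tools import ArxivQueryRun

tool = ArxivQueryRun()  # builds a default ArxivAPIWrapper under the hood
# run() takes the search query as a plain string and returns
# formatted summaries of matching papers.
print(tool.run("attention is all you need"))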
class langchain.tools.AzureCogsFormRecognizerTool(*, name='azure_cognitive_services_form_recognizer', description='A wrapper around Azure Cognitive Services Form Recognizer. Useful for when you need to extract text, tables, and key-value pairs from documents. Input should be a url to a document.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_endpoint='', doc_analysis_client=None)[source]
Bases: langchain.tools.base.BaseTool
Tool that queries the Azure Cognitive Services Form Recognizer API.
In order to set this up, follow instructions at: https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api?view=form-recog-3.0.0&pivots=programming-language-python
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
azure_cogs_key (str) –
azure_cogs_endpoint (str) –
doc_analysis_client (Any) –
Return type
None
class langchain.tools.AzureCogsImageAnalysisTool(*, name='azure_cognitive_services_image_analysis', description='A wrapper around Azure Cognitive Services Image Analysis. Useful for when you need to analyze images. Input should be a url to an image.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_endpoint='', vision_service=None, analysis_options=None)[source]
Bases: langchain.tools.base.BaseTool
Tool that queries the Azure Cognitive Services Image Analysis API.
In order to set this up, follow instructions at: https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
azure_cogs_key (str) –
azure_cogs_endpoint (str) –
vision_service (Any) –
analysis_options (Any) –
Return type
None
class langchain.tools.AzureCogsSpeech2TextTool(*, name='azure_cognitive_services_speech2text', description='A wrapper around Azure Cognitive Services Speech2Text. Useful for when you need to transcribe audio to text. Input should be a url to an audio file.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_region='', speech_language='en-US', speech_config=None)[source]
Bases: langchain.tools.base.BaseTool
Tool that queries the Azure Cognitive Services Speech2Text API.
In order to set this up, follow instructions at: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-speech-to-text?pivots=programming-language-python
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
azure_cogs_key (str) –
azure_cogs_region (str) –
speech_language (str) –
speech_config (Any) –
Return type
None
class langchain.tools.AzureCogsText2SpeechTool(*, name='azure_cognitive_services_text2speech', description='A wrapper around Azure Cognitive Services Text2Speech. Useful for when you need to convert text to speech. ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_region='', speech_language='en-US', speech_config=None)[source]
Bases: langchain.tools.base.BaseTool
Tool that queries the Azure Cognitive Services Text2Speech API.
In order to set this up, follow instructions at: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?pivots=programming-language-python
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
azure_cogs_key (str) –
azure_cogs_region (str) –
speech_language (str) –
speech_config (Any) –
Return type
None
class langchain.tools.BaseGraphQLTool(*, name='query_graphql', description="Input to this tool is a detailed and correct GraphQL query, output is a result from the API.\nIf the query is not correct, an error message will be returned.\nIf an error is returned with 'Bad request' in it, rewrite the query and try again.\nIf an error is returned with 'Unauthorized' in it, do not try again, but tell the user to change their authentication.\n\nExample Input: query {{ allUsers {{ id, name, email }} }}", args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, graphql_wrapper)[source]
Bases: langchain.tools.base.BaseTool
Base tool for querying a GraphQL API.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
graphql_wrapper (langchain.utilities.graphql.GraphQLAPIWrapper) –
Return type
None
attribute graphql_wrapper: langchain.utilities.graphql.GraphQLAPIWrapper [Required]
class langchain.tools.BaseRequestsTool(*, requests_wrapper)[source]
Bases: pydantic.main.BaseModel
Base class for requests tools.
Parameters
requests_wrapper (langchain.requests.TextRequestsWrapper) –
Return type
None
attribute requests_wrapper: langchain.requests.TextRequestsWrapper [Required]
class langchain.tools.BaseSQLDatabaseTool(*, db)[source]
Bases: pydantic.main.BaseModel
Base tool for interacting with a SQL database.
Parameters
db (langchain.sql_database.SQLDatabase) –
Return type
None
attribute db: langchain.sql_database.SQLDatabase [Required]
class langchain.tools.BaseSparkSQLTool(*, db)[source]
Bases: pydantic.main.BaseModel
Base tool for interacting with Spark SQL.
Parameters
db (langchain.utilities.spark_sql.SparkSQL) –
Return type
None
attribute db: langchain.utilities.spark_sql.SparkSQL [Required]
class langchain.tools.BaseTool(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False)[source]
Bases: abc.ABC, pydantic.main.BaseModel
Interface LangChain tools must implement.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
Return type
None
attribute args_schema: Optional[Type[pydantic.main.BaseModel]] = None
Pydantic model class to validate and parse the tool’s input arguments.
attribute callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None
Deprecated. Please use callbacks instead.
attribute callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None
Callbacks to be called during tool execution.
attribute description: str [Required]
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
attribute handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False
Handle the content of the ToolException thrown.
attribute name: str [Required]
The unique name of the tool that clearly communicates its purpose.
attribute return_direct: bool = False
Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping.
attribute verbose: bool = False
Whether to log the tool’s progress.
async arun(tool_input, verbose=None, start_color='green', color='green', callbacks=None, **kwargs)[source]
Run the tool asynchronously.
Parameters
tool_input (Union[str, Dict]) –
verbose (Optional[bool]) –
start_color (Optional[str]) –
color (Optional[str]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
Any
run(tool_input, verbose=None, start_color='green', color='green', callbacks=None, **kwargs)[source]
Run the tool.
Parameters
tool_input (Union[str, Dict]) –
verbose (Optional[bool]) –
start_color (Optional[str]) –
color (Optional[str]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
Any
property args: dict
property is_single_input: bool
Whether the tool only accepts a single input.
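To show how the interface fits together, here is a minimal sketch of a custom tool; the class name and its behavior are made up for illustration:
Example
from langchain.tools import BaseTool

class WordCountTool(BaseTool):
    # name and description are the required BaseTool fields the model
    # uses to decide when and how to call the tool.
    name: str = "word_count"
    description: str = "Counts the words in the input text."

    def _run(self, query: str) -> str:
        return str(len(query.split()))

    async def _arun(self, query: str) -> str:
        # No real async work to do; reuse the sync implementation.
        return self._run(query)

tool = WordCountTool()
print(tool.run("one two three"))  # run() wraps _run with callback handling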
class langchain.tools.BingSearchResults(*, name='Bing Search Results JSON', description='A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, num_results=4, api_wrapper)[source]
Bases: langchain.tools.base.BaseTool
Tool that queries the Bing Search API and returns the results as JSON.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
num_results (int) –
api_wrapper (langchain.utilities.bing_search.BingSearchAPIWrapper) –
Return type
None
attribute api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]
attribute num_results: int = 4
class langchain.tools.BingSearchRun(*, name='bing_search', description='A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query the Bing search API.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
api_wrapper (langchain.utilities.bing_search.BingSearchAPIWrapper) –
Return type
None
attribute api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]
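A minimal usage sketch; the subscription key is a placeholder you would replace with one from your own Bing resource:
Example
from langchain.tools import BingSearchRun
from langchain.utilities import BingSearchAPIWrapper

wrapper = BingSearchAPIWrapper(
    bing_subscription_key="<your-key>",  # placeholder
    bing_search_url="https://api.bing.microsoft.com/v7.0/search",
)
tool = BingSearchRun(api_wrapper=wrapper)
print(tool.run("latest langchain release"))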
class langchain.tools.BraveSearch(*, name='brave_search', description='a search engine. useful for when you need to answer questions about current events. input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, search_wrapper)[source]
Bases: langchain.tools.base.BaseTool
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
search_wrapper (langchain.utilities.brave_search.BraveSearchWrapper) –
Return type
None
attribute search_wrapper: BraveSearchWrapper [Required]
classmethod from_api_key(api_key, search_kwargs=None, **kwargs)[source]
Parameters
api_key (str) –
search_kwargs (Optional[dict]) –
kwargs (Any) –
Return type
langchain.tools.brave_search.tool.BraveSearch
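from_api_key above is the usual entry point; a minimal sketch, where the key is a placeholder and search_kwargs is forwarded to the underlying BraveSearchWrapper:
Example
from langchain.tools import BraveSearch

# "count" limits how many results the Brave API returns.
tool = BraveSearch.from_api_key("<brave-api-key>", search_kwargs={"count": 3})
print(tool.run("open source vector databases"))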
class langchain.tools.ClickTool(*, name='click_element', description='Click on an element with the given CSS selector', args_schema=<class 'langchain.tools.playwright.click.ClickToolInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None, visible_only=True, playwright_strict=False, playwright_timeout=1000)[source]
Bases: langchain.tools.playwright.base.BaseBrowserTool
Parameters
name (str) –
description (str) –
args_schema (Type[pydantic.main.BaseModel]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
sync_browser (Optional['SyncBrowser']) –
async_browser (Optional['AsyncBrowser']) –
visible_only (bool) –
playwright_strict (bool) –
playwright_timeout (float) –
Return type
None
attribute args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.click.ClickToolInput'>
Pydantic model class to validate and parse the tool’s input arguments.
attribute description: str = 'Click on an element with the given CSS selector'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
attribute name: str = 'click_element'
The unique name of the tool that clearly communicates its purpose.
attribute playwright_strict: bool = False
Whether to employ Playwright’s strict mode when clicking on elements.
attribute playwright_timeout: float = 1000
Timeout (in ms) for Playwright to wait for the element to be ready.
attribute visible_only: bool = True
Whether to consider only visible elements.
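A minimal synchronous sketch; the target URL and selector are hypothetical, and Playwright plus its browser binaries must be installed. The tool operates on whatever page the browser is currently showing:
Example
from langchain.tools import ClickTool
from langchain.tools.playwright.utils import (
    create_sync_playwright_browser,
    get_current_page,
)

browser = create_sync_playwright_browser()  # launches a headless browser
page = get_current_page(browser)
page.goto("https://example.com")  # hypothetical target page

tool = ClickTool(sync_browser=browser)
# Clicks the element matching the CSS selector on the current page.
print(tool.run({"selector": "a.more-information"}))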
class langchain.tools.CopyFileTool(*, name='copy_file', description='Create a copy of a file in a specified location', args_schema=<class 'langchain.tools.file_management.copy.FileCopyInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) –
description (str) –
args_schema (Type[pydantic.main.BaseModel]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
root_dir (Optional[str]) –
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.copy.FileCopyInput'>
Pydantic model class to validate and parse the tool’s input arguments.
attribute description: str = 'Create a copy of a file in a specified location'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
attribute name: str = 'copy_file'
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.CurrentWebPageTool(*, name='current_webpage', description='Returns the URL of the current page', args_schema=<class 'pydantic.main.BaseModel'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]
Bases: langchain.tools.playwright.base.BaseBrowserTool
Parameters
name (str) –
description (str) –
args_schema (Type[pydantic.main.BaseModel]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
sync_browser (Optional['SyncBrowser']) –
async_browser (Optional['AsyncBrowser']) –
Return type
None
attribute args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>
Pydantic model class to validate and parse the tool’s input arguments.
attribute description: str = 'Returns the URL of the current page'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
attribute name: str = 'current_webpage'
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.DeleteFileTool(*, name='file_delete', description='Delete a file', args_schema=<class 'langchain.tools.file_management.delete.FileDeleteInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) –
description (str) –
args_schema (Type[pydantic.main.BaseModel]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
root_dir (Optional[str]) –
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.delete.FileDeleteInput'>
Pydantic model class to validate and parse the tool’s input arguments.
attribute description: str = 'Delete a file'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
attribute name: str = 'file_delete'
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.DuckDuckGoSearchResults(*, name='DuckDuckGo Results JSON', description='A wrapper around Duck Duck Go Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, num_results=4, api_wrapper=None)[source]
Bases: langchain.tools.base.BaseTool
Tool that queries the DuckDuckGo Search API and returns the results as JSON.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
num_results (int) –
api_wrapper (langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper) –
Return type
None
attribute api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]
attribute num_results: int = 4
class langchain.tools.DuckDuckGoSearchRun(*, name='duckduckgo_search', description='A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query the DuckDuckGo search API.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
api_wrapper (langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper) –
Return type
None
attribute api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]
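Unlike the Bing tools above, no API key is required; a minimal sketch (the duckduckgo-search package must be installed):
Example
from langchain.tools import DuckDuckGoSearchRun

tool = DuckDuckGoSearchRun()  # builds a default wrapper, no credentials needed
print(tool.run("what is retrieval augmented generation"))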
class langchain.tools.ExtractHyperlinksTool(*, name='extract_hyperlinks', description='Extract all hyperlinks on the current webpage', args_schema=<class 'langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]
Bases: langchain.tools.playwright.base.BaseBrowserTool
Extract all hyperlinks on the page.
Parameters
name (str) –
description (str) –
args_schema (Type[pydantic.main.BaseModel]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
sync_browser (Optional['SyncBrowser']) –
async_browser (Optional['AsyncBrowser']) –
Return type
None
attribute args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput'>
Pydantic model class to validate and parse the tool’s input arguments.
attribute description: str = 'Extract all hyperlinks on the current webpage'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
attribute name: str = 'extract_hyperlinks'
The unique name of the tool that clearly communicates its purpose.
static scrape_page(page, html_content, absolute_urls)[source]
Parameters
page (Any) –
html_content (str) –
absolute_urls (bool) –
Return type
str
class langchain.tools.ExtractTextTool(*, name='extract_text', description='Extract all the text on the current webpage', args_schema=<class 'pydantic.main.BaseModel'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]
Bases: langchain.tools.playwright.base.BaseBrowserTool
Parameters
name (str) –
description (str) –
args_schema (Type[pydantic.main.BaseModel]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
sync_browser (Optional['SyncBrowser']) –
async_browser (Optional['AsyncBrowser']) –
Return type
None
attribute args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>
Pydantic model class to validate and parse the tool’s input arguments.
attribute description: str = 'Extract all the text on the current webpage'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
attribute name: str = 'extract_text'
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.FileSearchTool(*, name='file_search', description='Recursively search for files in a subdirectory that match the regex pattern', args_schema=<class 'langchain.tools.file_management.file_search.FileSearchInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) –
description (str) –
args_schema (Type[pydantic.main.BaseModel]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
root_dir (Optional[str]) –
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.file_search.FileSearchInput'>
Pydantic model class to validate and parse the tool’s input arguments.
attribute description: str = 'Recursively search for files in a subdirectory that match the regex pattern'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
attribute name: str = 'file_search'
The unique name of the tool that clearly communicates its purpose.
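The file-management tools (copy_file, file_delete, file_search) share a root_dir sandbox via BaseFileToolMixin; a minimal sketch of file_search, where the directory and pattern are hypothetical:
Example
from langchain.tools import FileSearchTool

# root_dir sandboxes the tool: paths are resolved relative to this
# directory and cannot escape it.
tool = FileSearchTool(root_dir="/tmp/workspace")
print(tool.run({"dir_path": ".", "pattern": "*.txt"}))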
class langchain.tools.GetElementsTool(*, name='get_elements', description='Retrieve elements in the current web page matching the given CSS selector', args_schema=<class 'langchain.tools.playwright.get_elements.GetElementsToolInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]
Bases: langchain.tools.playwright.base.BaseBrowserTool
Parameters
name (str) –
description (str) –
args_schema (Type[pydantic.main.BaseModel]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
sync_browser (Optional['SyncBrowser']) –
async_browser (Optional['AsyncBrowser']) –
Return type
None
attribute args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.get_elements.GetElementsToolInput'>
Pydantic model class to validate and parse the tool’s input arguments.
attribute description: str = 'Retrieve elements in the current web page matching the given CSS selector'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
attribute name: str = 'get_elements'
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.GmailCreateDraft(*, name='create_gmail_draft', description='Use this tool to create a draft email with the provided message fields.', args_schema=<class 'langchain.tools.gmail.create_draft.CreateDraftSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]
Bases: langchain.tools.gmail.base.GmailBaseTool
Parameters
name (str) –
description (str) –
args_schema (Type[langchain.tools.gmail.create_draft.CreateDraftSchema]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
api_resource (Resource) –
Return type
None
attribute args_schema: Type[langchain.tools.gmail.create_draft.CreateDraftSchema] = <class 'langchain.tools.gmail.create_draft.CreateDraftSchema'>
Pydantic model class to validate and parse the tool’s input arguments.
attribute description: str = 'Use this tool to create a draft email with the provided message fields.'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
attribute name: str = 'create_gmail_draft'
The unique name of the tool that clearly communicates its purpose.
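A minimal sketch; it assumes you have already completed a Google OAuth flow elsewhere, so creds stands in for valid google.oauth2 credentials, and build() is standard google-api-python-client usage rather than a LangChain API:
Example
from googleapiclient.discovery import build
from langchain.tools import GmailCreateDraft

creds = ...  # your google.oauth2.credentials.Credentials (obtained elsewhere)
api_resource = build("gmail", "v1", credentials=creds)

tool = GmailCreateDraft(api_resource=api_resource)
tool.run({
    "message": "Hello from a draft!",
    "to": ["recipient@example.com"],  # hypothetical recipient
    "subject": "Draft created via LangChain",
})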