Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a TensorflowHub embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
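Example (an illustrative usage sketch for the TensorflowHubEmbeddings class these methods belong to; the model URL below is a placeholder, not part of the original docstring):
from langchain.embeddings import TensorflowHubEmbeddings
# The model_url is an assumption; point it at the TensorFlow Hub model you actually use.
embeddings = TensorflowHubEmbeddings(
    model_url="https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
)
doc_vectors = embeddings.embed_documents(["first document", "second document"])
query_vector = embeddings.embed_query("what does the first document say?")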
class langchain.embeddings.SagemakerEndpointEmbeddings(*, client=None, endpoint_name='', region_name='', credentials_profile_name=None, content_handler, model_kwargs=None, endpoint_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Parameters
client (Any) β
endpoint_name (str) β
region_name (str) β
credentials_profile_name (Optional[str]) β
content_handler (langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler) β
model_kwargs (Optional[Dict]) β
endpoint_kwargs (Optional[Dict]) β
Return type
None
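Example (an illustrative sketch; the endpoint name, region, and the JSON request/response format handled by the content handler are assumptions that must match your deployed model):
import json
from typing import Dict, List
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompts: List[str], model_kwargs: Dict) -> bytes:
        # Serialize the request in whatever format your endpoint expects (assumed JSON here).
        return json.dumps({"inputs": prompts, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        # Parse the endpoint response back into a list of embedding vectors.
        # The "vectors" key is an assumption about the response schema.
        return json.loads(output.read().decode("utf-8"))["vectors"]

embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embeddings-endpoint",   # placeholder endpoint name
    region_name="us-west-2",                  # placeholder region
    content_handler=ContentHandler(),
)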
attribute content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]ο
The content handler class that provides input and
output transform functions to handle formats between the LLM
and the endpoint.
attribute credentials_profile_name: Optional[str] = Noneο
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
attribute endpoint_kwargs: Optional[Dict] = Noneο
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more info:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
attribute endpoint_name: str = ''ο
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
attribute model_kwargs: Optional[Dict] = Noneο
Key word arguments to pass to the model.
attribute region_name: str = ''ο
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
embed_documents(texts, chunk_size=64)[source]ο
Compute doc embeddings using a SageMaker Inference Endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (int) β The chunk size defines how many input texts will
be grouped together as a request. If None, will use the
chunk size specified by the class.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.HuggingFaceInstructEmbeddings(*, client=None, model_name='hkunlp/instructor-large', cache_folder=None, model_kwargs=None, encode_kwargs=None, embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers
and InstructorEmbedding python packages installed.
Example
from langchain.embeddings import HuggingFaceInstructEmbeddings
model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
hf = HuggingFaceInstructEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
Parameters
client (Any) β
model_name (str) β
cache_folder (Optional[str]) β
model_kwargs (Dict[str, Any]) β
encode_kwargs (Dict[str, Any]) β
embed_instruction (str) β
query_instruction (str) β
Return type
None
attribute cache_folder: Optional[str] = Noneο
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
attribute embed_instruction: str = 'Represent the document for retrieval: 'ο
Instruction to use for embedding documents.
attribute encode_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass when calling the encode method of the model.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass to the model.
attribute model_name: str = 'hkunlp/instructor-large'ο
Model name to use.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: 'ο
Instruction to use for embedding query.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace instruct model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.MosaicMLInstructorEmbeddings(*, endpoint_url='https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ', retry_sleep=1.0, mosaicml_api_token=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around MosaicML's embedding inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import MosaicMLInstructorEmbeddings
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict"
)
mosaic_llm = MosaicMLInstructorEmbeddings(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
Parameters
endpoint_url (str) β
embed_instruction (str) β
query_instruction (str) β
retry_sleep (float) β
mosaicml_api_token (Optional[str]) β
Return type
None
attribute embed_instruction: str = 'Represent the document for retrieval: 'ο
Instruction used to embed documents.
attribute endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict'ο
Endpoint URL to use.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: 'ο
Instruction used to embed the query.
attribute retry_sleep: float = 1.0ο
How long to sleep if a rate limit is encountered.
embed_documents(texts)[source]ο
Embed documents using a MosaicML deployed instructor embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using a MosaicML deployed instructor embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.SelfHostedEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn, load_fn_kwargs=None, model_reqs=['./', 'torch'], inference_kwargs=None)[source]ο
Bases: langchain.llms.self_hosted.SelfHostedPipeline, langchain.embeddings.base.Embeddings
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example using a model load function:
from langchain.embeddings import SelfHostedEmbeddings
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
def get_pipeline():
model_id = "facebook/bart-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing in a pipeline path:
from langchain.embeddings import SelfHostedHFEmbeddings
import runhouse as rh
from transformers import pipeline
import pickle
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
pipeline = pipeline(model="bert-base-uncased", task="feature-extraction")
rh.blob(pickle.dumps(pipeline),
path="models/pipeline.pkl").save().to(gpu, path="models")
embeddings = SelfHostedHFEmbeddings.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
inference_kwargs (Any) β
Return type
None
attribute inference_fn: Callable = <function _embed_documents>ο
Inference function to extract the embeddings on the remote hardware.
attribute inference_kwargs: Any = Noneο
Any kwargs to pass to the model's inference function.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace transformer model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.SelfHostedHuggingFaceEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, model_reqs=['./', 'sentence_transformers', 'torch'], inference_kwargs=None, model_id='sentence-transformers/all-mpnet-base-v2')[source]ο
Bases: langchain.embeddings.self_hosted.SelfHostedEmbeddings
Runs sentence_transformers embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
inference_kwargs (Any) β
model_id (str) β
Return type
None
attribute hardware: Any = Noneο
Remote hardware to send the inference function to.
attribute inference_fn: Callable = <function _embed_documents>ο
Inference function to extract the embeddings.
attribute load_fn_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model load function.
attribute model_id: str = 'sentence-transformers/all-mpnet-base-v2'ο
Model name to use.
attribute model_load_fn: Callable = <function load_embedding_model>ο
Function to load the model remotely on the server.
attribute model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']ο
Requirements to install on hardware to inference the model.
class langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, model_reqs=['./', 'InstructorEmbedding', 'torch'], inference_kwargs=None, model_id='hkunlp/instructor-large', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source]ο
Bases: langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings
import runhouse as rh
model_name = "hkunlp/instructor-large"
gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')
hf = SelfHostedHuggingFaceInstructEmbeddings(
model_name=model_name, hardware=gpu)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
inference_kwargs (Any) β
model_id (str) β
embed_instruction (str) β
query_instruction (str) β
Return type
None
attribute embed_instruction: str = 'Represent the document for retrieval: 'ο
Instruction to use for embedding documents.
attribute model_id: str = 'hkunlp/instructor-large'ο
Model name to use.
attribute model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']ο
Requirements to install on hardware to inference the model.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: 'ο
Instruction to use for embedding query.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace instruct model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.FakeEmbeddings(*, size)[source]ο
Bases: langchain.embeddings.base.Embeddings, pydantic.main.BaseModel
Parameters
size (int) β
Return type
None
embed_documents(texts)[source]ο
Embed search docs.
Parameters
texts (List[str]) β
Return type
List[List[float]]
embed_query(text)[source]ο
Embed query text.
Parameters
text (str) β
Return type
List[float]
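Example (a short sketch; the vector size is arbitrary and the returned values are random, which makes this class useful for tests and prototypes):
from langchain.embeddings import FakeEmbeddings
fake = FakeEmbeddings(size=256)
vectors = fake.embed_documents(["any text", "another text"])
assert len(vectors) == 2 and len(vectors[0]) == 256
query_vector = fake.embed_query("any text")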
class langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper for Aleph Alpha's Asymmetric Embeddings
AA provides you with an endpoint to embed a document and a query.
The models were optimized to make the embeddings of documents and
the query for a document as similar as possible.
To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
Example
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
document = "This is a content of the document"
query = "What is the content of the document?"
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)
Parameters
client (Any) β
model (Optional[str]) β
hosting (Optional[str]) β
normalize (Optional[bool]) β
compress_to_size (Optional[int]) β
contextual_control_threshold (Optional[int]) β
control_log_additive (Optional[bool]) β
aleph_alpha_api_key (Optional[str]) β
Return type
None
attribute aleph_alpha_api_key: Optional[str] = Noneο
API key for Aleph Alpha API.
attribute compress_to_size: Optional[int] = 128ο
Whether the returned embeddings should come back as the original 5120-dimensional vectors,
or be compressed to 128 dimensions.
attribute contextual_control_threshold: Optional[int] = Noneο
Attention control parameters only apply to those tokens that have
explicitly been set in the request.
attribute control_log_additive: Optional[bool] = Trueο
Apply controls on prompt items by adding the log(control_factor)
to attention scores.
attribute hosting: Optional[str] = 'https://api.aleph-alpha.com'ο
Optional parameter that specifies which datacenters may process the request.
attribute model: Optional[str] = 'luminous-base'ο
Model name to use.
attribute normalize: Optional[bool] = Trueο
Should returned embeddings be normalized
embed_documents(texts)[source]ο
Call out to Aleph Alpha's asymmetric Document endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to Aleph Alpha's asymmetric query embedding endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source]ο
Bases: langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding
The symmetric version of Aleph Alpha's semantic embeddings.
The main difference is that here, both the documents and
queries are embedded with a SemanticRepresentation.Symmetric
Example
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding
embeddings = AlephAlphaSymmetricSemanticEmbedding()
text = "This is a test text"
doc_result = embeddings.embed_documents([text])
query_result = embeddings.embed_query(text)
Parameters
client (Any) β
model (Optional[str]) β
hosting (Optional[str]) β
normalize (Optional[bool]) β
compress_to_size (Optional[int]) β
contextual_control_threshold (Optional[int]) β
control_log_additive (Optional[bool]) β
aleph_alpha_api_key (Optional[str]) β
Return type
None
embed_documents(texts)[source]ο
Call out to Aleph Alpha's Document endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to Aleph Alpha's symmetric query embedding endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
langchain.embeddings.SentenceTransformerEmbeddingsο
alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings
class langchain.embeddings.MiniMaxEmbeddings(*, endpoint_url='https://api.minimax.chat/v1/embeddings', model='embo-01', embed_type_db='db', embed_type_query='query', minimax_group_id=None, minimax_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around MiniMax's embedding inference service.
To use, you should have the environment variables MINIMAX_GROUP_ID and
MINIMAX_API_KEY set with your group ID and API key, or pass them as named
parameters to the constructor.
Example
from langchain.embeddings import MiniMaxEmbeddings
embeddings = MiniMaxEmbeddings()
query_text = "This is a test query."
query_result = embeddings.embed_query(query_text)
document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])
Parameters
endpoint_url (str) β
model (str) β
embed_type_db (str) β
embed_type_query (str) β
minimax_group_id (Optional[str]) β
minimax_api_key (Optional[str]) β
Return type
None
attribute embed_type_db: str = 'db'ο
For embed_documents
attribute embed_type_query: str = 'query'ο
For embed_query
attribute endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'ο
Endpoint URL to use.
attribute minimax_api_key: Optional[str] = Noneο
API Key for MiniMax API.
attribute minimax_group_id: Optional[str] = Noneο
Group ID for MiniMax API.
attribute model: str = 'embo-01'ο
Embeddings model name to use.
embed_documents(texts)[source]ο
Embed documents using a MiniMax embedding endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using a MiniMax embedding endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.BedrockEmbeddings(*, client=None, region_name=None, credentials_profile_name=None, model_id='amazon.titan-e1t-medium', model_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Embeddings provider to invoke Bedrock embedding models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Bedrock service.
Parameters
client (Any) β
region_name (Optional[str]) β
credentials_profile_name (Optional[str]) β
model_id (str) β
model_kwargs (Optional[Dict]) β
Return type
None
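Example (an illustrative sketch; the profile name and region are placeholders, and model_id falls back to the default listed below):
from langchain.embeddings import BedrockEmbeddings
bedrock = BedrockEmbeddings(
    credentials_profile_name="bedrock-admin",   # placeholder AWS profile with Bedrock access
    region_name="us-east-1",                    # placeholder region where Bedrock is enabled
)
doc_vectors = bedrock.embed_documents(["hello world"])
query_vector = bedrock.embed_query("hello")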
attribute credentials_profile_name: Optional[str] = Noneο
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
attribute model_id: str = 'amazon.titan-e1t-medium'ο
ID of the model to call, e.g. amazon.titan-e1t-medium; this is
equivalent to the modelId property in the list-foundation-models API.
attribute model_kwargs: Optional[Dict] = Noneο
Key word arguments to pass to the model.
attribute region_name: Optional[str] = Noneο
The AWS region, e.g. us-west-2. Falls back to the AWS_DEFAULT_REGION env variable
or the region specified in ~/.aws/config if it is not provided here.
embed_documents(texts, chunk_size=1)[source]ο
Compute doc embeddings using a Bedrock model.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (int) β Bedrock currently only allows single string
inputs, so chunk size is always 1. This input is here
only for compatibility with the embeddings interface.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a Bedrock model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.DeepInfraEmbeddings(*, model_id='sentence-transformers/clip-ViT-B-32', normalize=False, embed_instruction='passage: ', query_instruction='query: ', model_kwargs=None, deepinfra_api_token=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around Deep Infra's embedding inference service.
To use, you should have the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
There are multiple embeddings models available,
see https://deepinfra.com/models?type=embeddings.
Example
from langchain.embeddings import DeepInfraEmbeddings
deepinfra_emb = DeepInfraEmbeddings(
model_id="sentence-transformers/clip-ViT-B-32",
deepinfra_api_token="my-api-key"
)
r1 = deepinfra_emb.embed_documents(
[
"Alpha is the first letter of Greek alphabet",
"Beta is the second letter of Greek alphabet",
]
)
r2 = deepinfra_emb.embed_query(
"What is the second letter of Greek alphabet"
)
Parameters
model_id (str) β
normalize (bool) β
embed_instruction (str) β
query_instruction (str) β
model_kwargs (Optional[dict]) β
deepinfra_api_token (Optional[str]) β
Return type
None
attribute embed_instruction: str = 'passage: 'ο
Instruction used to embed documents.
attribute model_id: str = 'sentence-transformers/clip-ViT-B-32'ο
Embeddings model to use.
attribute model_kwargs: Optional[dict] = Noneο
Other model keyword args
attribute normalize: bool = Falseο
whether to normalize the computed embeddings
attribute query_instruction: str = 'query: 'ο
Instruction used to embed the query.
embed_documents(texts)[source]ο
Embed documents using a Deep Infra deployed embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using a Deep Infra deployed embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.DashScopeEmbeddings(*, client=None, model='text-embedding-v1', dashscope_api_key=None, max_retries=5)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around DashScope embedding models.
To use, you should have the dashscope python package installed, and the
environment variable DASHSCOPE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key")
Example
import os
os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY" | https://api.python.langchain.com/en/latest/modules/embeddings.html |
46d47edcfcf7-30 | os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY"
from langchain.embeddings.dashscope import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(
model="text-embedding-v1",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
Parameters
client (Any) β
model (str) β
dashscope_api_key (Optional[str]) β
max_retries (int) β
Return type
None
attribute dashscope_api_key: Optional[str] = Noneο
API key for the DashScope API.
embed_documents(texts)[source]ο
Call out to DashScope's embedding endpoint for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size β The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to DashScope's embedding endpoint for embedding query text.
Parameters
text (str) β The text to embed.
Returns
Embedding for the text.
Return type
List[float]
class langchain.embeddings.EmbaasEmbeddings(*, model='e5-large-v2', instruction=None, api_url='https://api.embaas.io/v1/embeddings/', embaas_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around embaas's embedding service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Initialise with default model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb = EmbaasEmbeddings()
# Initialise with custom model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb_model = "instructor-large"
emb_inst = "Represent the Wikipedia document for retrieval"
emb = EmbaasEmbeddings(
model=emb_model,
instruction=emb_inst
)
Parameters
model (str) β
instruction (Optional[str]) β
api_url (str) β
embaas_api_key (Optional[str]) β
Return type
None
attribute api_url: str = 'https://api.embaas.io/v1/embeddings/'ο
The URL for the embaas embeddings API.
attribute instruction: Optional[str] = Noneο
Instruction used for domain-specific embeddings.
attribute model: str = 'e5-large-v2'ο
The model used for embeddings.
embed_documents(texts)[source]ο
Get embeddings for a list of texts.
Parameters
texts (List[str]) β The list of texts to get embeddings for.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Get embeddings for a single text.
Parameters
text (str) β The text to get embeddings for.
Returns
List of embeddings.
Return type
List[float]
Utilitiesο
General utilities.
class langchain.utilities.ApifyWrapper(*, apify_client=None, apify_client_async=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around Apify.
To use, you should have the apify-client python package installed,
and the environment variable APIFY_API_TOKEN set with your API key, or pass
apify_api_token as a named parameter to the constructor.
Parameters
apify_client (Any) β
apify_client_async (Any) β
Return type
None
attribute apify_client: Any = Noneο
attribute apify_client_async: Any = Noneο
async acall_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]ο
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) β The ID or name of the Actor on the Apify platform.
run_input (Dict) β The input object of the Actor that you're trying to run.
dataset_mapping_function (Callable) β A function that takes a single
dictionary (an Apify dataset item) and converts it to
an instance of the Document class.
build (str, optional) β Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) β Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) β Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor run's default dataset.
Return type
ApifyDatasetLoader
async acall_actor_task(task_id, task_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]ο
Run a saved Actor task on Apify and wait for results to be ready.
Parameters
task_id (str) β The ID or name of the task on the Apify platform.
task_input (Dict) β The input object of the task that you're trying to run.
Overrides the task's saved input.
dataset_mapping_function (Callable) β A function that takes a single
dictionary (an Apify dataset item) and converts it to an
instance of the Document class.
build (str, optional) β Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) β Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) β Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the task run's default dataset.
Return type
ApifyDatasetLoader
call_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]ο
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) β The ID or name of the Actor on the Apify platform.
run_input (Dict) β The input object of the Actor that you're trying to run.
dataset_mapping_function (Callable) β A function that takes a single
dictionary (an Apify dataset item) and converts it to an
instance of the Document class.
build (str, optional) β Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) β Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) β Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor run's default dataset.
Return type
ApifyDatasetLoader
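Example (an illustrative sketch of call_actor; the Actor ID, run input, and the dataset item fields used by the mapping function are assumptions about one particular Actor and will differ for yours):
from langchain.schema import Document
from langchain.utilities import ApifyWrapper

apify = ApifyWrapper()  # reads APIFY_API_TOKEN from the environment
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",   # assumed public crawler Actor
    run_input={"startUrls": [{"url": "https://python.langchain.com"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text", ""),        # assumed dataset item fields
        metadata={"source": item.get("url", "")},
    ),
)
documents = loader.load()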
call_actor_task(task_id, task_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]ο
Run a saved Actor task on Apify and wait for results to be ready.
Parameters
task_id (str) β The ID or name of the task on the Apify platform.
task_input (Dict) β The input object of the task that you're trying to run.
Overrides the task's saved input.
dataset_mapping_function (Callable) β A function that takes a single
dictionary (an Apify dataset item) and converts it to an
instance of the Document class.
build (str, optional) β Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) β Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) β Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the task run's default dataset.
Return type
ApifyDatasetLoader
class langchain.utilities.ArxivAPIWrapper(*, arxiv_search=None, arxiv_exceptions=None, top_k_results=3, load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000, ARXIV_MAX_QUERY_LENGTH=300)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around the Arxiv API.
To use, you should have the arxiv python package installed.
https://lukasschwab.me/arxiv.py/index.html
This wrapper will use the Arxiv API to conduct searches and
fetch document summaries. By default, it will return the document summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
Set doc_content_chars_max=None if you don't want to limit the content size.
Parameters
top_k_results (int) β number of top-scored documents used for the arxiv tool
ARXIV_MAX_QUERY_LENGTH (int) β the cut limit on the query used for the arxiv tool.
load_max_docs (int) β a limit to the number of loaded documents
load_all_available_meta (bool) β
if True: the metadata of the loaded Documents gets all available meta info (see https://lukasschwab.me/arxiv.py/index.html#Result),
if False: the metadata gets only the most informative fields.
arxiv_search (Any) β
arxiv_exceptions (Any) β
doc_content_chars_max (Optional[int]) β
Return type
None
attribute arxiv_exceptions: Any = Noneο
attribute doc_content_chars_max: Optional[int] = 4000ο
attribute load_all_available_meta: bool = Falseο
attribute load_max_docs: int = 100ο
attribute top_k_results: int = 3ο
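Example (a short sketch using the attributes above; the query string is arbitrary):
from langchain.utilities import ArxivAPIWrapper
arxiv = ArxivAPIWrapper(top_k_results=2, doc_content_chars_max=4000)
summaries = arxiv.run("quantum error correction")   # article meta information as a single string
docs = arxiv.load("quantum error correction")       # article texts plus metadata as Documents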
load(query)[source]ο
Run Arxiv search and get the article texts plus the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
Returns: a list of documents with the document.page_content in text format
Parameters
query (str) β
Return type
List[langchain.schema.Document]
run(query)[source]ο
Run Arxiv search and get the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
See https://lukasschwab.me/arxiv.py/index.html#Result
It uses only the most informative fields of article meta information.
Parameters
query (str) β
Return type
str
class langchain.utilities.BashProcess(strip_newlines=False, return_err_output=False, persistent=False)[source]ο
Bases: object
Executes bash commands and returns the output.
Parameters
strip_newlines (bool) β
return_err_output (bool) β
persistent (bool) β
run(commands)[source]ο
Run commands and return final output.
Parameters
commands (Union[str, List[str]]) β
Return type
str
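Example (a brief sketch of run; the commands are arbitrary):
from langchain.utilities import BashProcess
bash = BashProcess(strip_newlines=True)
output = bash.run(["echo hello", "pwd"])   # commands run in sequence, final output is returned
print(output)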
process_output(output, command)[source]ο
Parameters
output (str) β
command (str) β
Return type
str
class langchain.utilities.BibtexparserWrapper[source]ο
Bases: pydantic.main.BaseModel
Wrapper around bibtexparser.
To use, you should have the bibtexparser python package installed.
https://bibtexparser.readthedocs.io/en/master/
This wrapper will use bibtexparser to load a collection of references from
a bibtex file and fetch document summaries.
Return type
None
get_metadata(entry, load_extra=False)[source]ο
Get metadata for the given entry.
Parameters
entry (Mapping[str, Any]) β
load_extra (bool) β
Return type
Dict[str, Any]
load_bibtex_entries(path)[source]ο
Load bibtex entries from the bibtex file at the given path.
Parameters
path (str) β
Return type
List[Dict[str, Any]]
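Example (a small sketch, assuming a local refs.bib file exists at the given path):
from langchain.utilities import BibtexparserWrapper
bib = BibtexparserWrapper()
entries = bib.load_bibtex_entries("refs.bib")              # placeholder path to your .bib file
metadata = [bib.get_metadata(entry) for entry in entries]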
class langchain.utilities.BingSearchAPIWrapper(*, bing_subscription_key, bing_search_url, k=10)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Bing Search API.
In order to set this up, follow instructions at:
https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e
Parameters
bing_subscription_key (str) β
bing_search_url (str) β
k (int) β
Return type
None
attribute bing_search_url: str [Required]ο
attribute bing_subscription_key: str [Required]ο
attribute k: int = 10ο
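Example (an illustrative sketch; the subscription key is a placeholder and the URL shown is the standard Bing endpoint, which may differ for your Azure resource):
from langchain.utilities import BingSearchAPIWrapper
bing = BingSearchAPIWrapper(
    bing_subscription_key="<your-subscription-key>",
    bing_search_url="https://api.bing.microsoft.com/v7.0/search",
)
print(bing.run("langchain"))                       # concatenated snippets
hits = bing.results("langchain", num_results=5)    # list of {snippet, title, link} dicts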
results(query, num_results)[source]ο
Run query through BingSearch and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query)[source]ο
Run query through BingSearch and parse result.
Parameters
query (str) β
Return type
str
class langchain.utilities.BraveSearchWrapper(*, api_key, search_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel
Parameters
api_key (str) β
search_kwargs (dict) β
Return type
None
attribute api_key: str [Required]ο
attribute search_kwargs: dict [Optional]ο
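Example (a minimal sketch, since the docstring gives no description; the API key is a placeholder and the count option is an assumption about Brave's search parameters):
from langchain.utilities import BraveSearchWrapper
brave = BraveSearchWrapper(api_key="<your-brave-api-key>", search_kwargs={"count": 3})
print(brave.run("langchain"))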
run(query)[source]ο
Parameters
query (str) β
Return type
str
class langchain.utilities.DuckDuckGoSearchAPIWrapper(*, k=10, region='wt-wt', safesearch='moderate', time='y', max_results=5)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for DuckDuckGo Search API.
Free and does not require any setup
Parameters
k (int) β
region (Optional[str]) β
safesearch (str) β
time (Optional[str]) β
max_results (int) β
Return type
None
attribute k: int = 10ο
attribute max_results: int = 5ο
attribute region: Optional[str] = 'wt-wt'ο
attribute safesearch: str = 'moderate'ο
attribute time: Optional[str] = 'y'ο
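Example (a quick sketch using the defaults above; no API key is required):
from langchain.utilities import DuckDuckGoSearchAPIWrapper
ddg = DuckDuckGoSearchAPIWrapper(region="wt-wt", max_results=5)
print(ddg.run("langchain"))                        # concatenated snippets
hits = ddg.results("langchain", num_results=3)     # list of {snippet, title, link} dicts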
get_snippets(query)[source]ο
Run query through DuckDuckGo and return concatenated results.
Parameters
query (str) β
Return type
List[str]
results(query, num_results)[source]ο
Run query through DuckDuckGo and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query)[source]ο
Parameters
query (str) β
Return type
str
class langchain.utilities.GooglePlacesAPIWrapper(*, gplaces_api_key=None, google_map_client=None, top_k_results=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around Google Places API.
To use, you should have the googlemaps python package installed, an API key for the Google Maps platform,
and the environment variable GPLACES_API_KEY
set with your API key, or pass gplaces_api_key
as a named parameter to the constructor.
By default, this will return all the results on the input query. You can use the top_k_results argument to limit the number of results.
Example
from langchain import GooglePlacesAPIWrapper
gplaceapi = GooglePlacesAPIWrapper()
Parameters
gplaces_api_key (Optional[str]) β
google_map_client (Any) β
top_k_results (Optional[int]) β
Return type
None
attribute gplaces_api_key: Optional[str] = Noneο
attribute top_k_results: Optional[int] = Noneο
fetch_place_details(place_id)[source]ο
Parameters
place_id (str) β
Return type
Optional[str]
format_place_details(place_details)[source]ο
Parameters
place_details (Dict[str, Any]) β
Return type
Optional[str]
run(query)[source]ο
Run Places search and get the top k places that match the query.
Parameters
query (str) β
Return type
str
class langchain.utilities.GoogleSearchAPIWrapper(*, search_engine=None, google_api_key=None, google_cse_id=None, k=10, siterestrict=False)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Google Search API.
Adapted from: https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search
TODO: DOCS for using it
1. Install google-api-python-client
- If you don't already have a Google account, sign up.
- If you have never created a Google APIs Console project,
read the Managing Projects page and create a project in the Google API Console.
- Install the library using pip install google-api-python-client
The current version of the library is 2.70.0 at this time
2. To create an API key:
- Navigate to the APIs & ServicesβCredentials panel in Cloud Console.
- Select Create credentials, then select API key from the drop-down menu.
- The API key created dialog box displays your newly created key.
- You now have an API_KEY
3. Setup Custom Search Engine so you can search the entire web
- Create a custom search engine in this link.
- In Sites to search, add any valid URL (i.e. www.stackoverflow.com).
- That's all you have to fill up, the rest doesn't matter.
In the left-side menu, click Edit search engine → {your search engine name}
→ Setup. Set Search the entire web to ON. Remove the URL you added from
the list of Sites to search.
- Under Search engine ID you'll find the search-engine-ID.
4. Enable the Custom Search API
- Navigate to the APIs & ServicesβDashboard panel in Cloud Console.
- Click Enable APIs and Services.
- Search for Custom Search API and click on it.
- Click Enable.
URL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis.com
Parameters
search_engine (Any) β
google_api_key (Optional[str]) β
google_cse_id (Optional[str]) β
k (int) β
siterestrict (bool) β
Return type
None
attribute google_api_key: Optional[str] = Noneο
attribute google_cse_id: Optional[str] = Noneο
attribute k: int = 10ο
attribute siterestrict: bool = Falseο
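Example (an illustrative sketch once the key and engine ID from the steps above exist; the values are placeholders and can also be supplied via the GOOGLE_API_KEY and GOOGLE_CSE_ID environment variables):
from langchain.utilities import GoogleSearchAPIWrapper
google = GoogleSearchAPIWrapper(
    google_api_key="<API_KEY>",           # placeholder from step 2
    google_cse_id="<search-engine-ID>",   # placeholder from step 3
    k=5,
)
print(google.run("langchain"))
hits = google.results("langchain", num_results=5)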
results(query, num_results)[source]ο
Run query through GoogleSearch and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query)[source]ο
Run query through GoogleSearch and parse result.
Parameters
query (str) β
Return type
str
class langchain.utilities.GoogleSerperAPIWrapper(*, k=10, gl='us', hl='en', type='search', tbs=None, serper_api_key=None, aiosession=None, result_key_for_type={'images': 'images', 'news': 'news', 'places': 'places', 'search': 'organic'})[source]ο
Bases: pydantic.main.BaseModel
Wrapper around the Serper.dev Google Search API.
You can create a free API key at https://serper.dev.
To use, you should have the environment variable SERPER_API_KEY
set with your API key, or pass serper_api_key as a named parameter
to the constructor.
Example
from langchain import GoogleSerperAPIWrapper
google_serper = GoogleSerperAPIWrapper()
Parameters
k (int) β
gl (str) β
hl (str) β
type (Literal['news', 'search', 'places', 'images']) β
tbs (Optional[str]) β
serper_api_key (Optional[str]) β
aiosession (Optional[aiohttp.client.ClientSession]) β
result_key_for_type (dict) β
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = Noneο
attribute gl: str = 'us'ο
attribute hl: str = 'en'ο
attribute k: int = 10ο
attribute serper_api_key: Optional[str] = Noneο
attribute tbs: Optional[str] = Noneο
attribute type: Literal['news', 'search', 'places', 'images'] = 'search'ο
async aresults(query, **kwargs)[source]ο
Run query through GoogleSearch.
Parameters
query (str) β
kwargs (Any) β
Return type
Dict
async arun(query, **kwargs)[source]ο
Run query through GoogleSearch and parse result async.
Parameters
query (str) β
kwargs (Any) β
Return type
str
results(query, **kwargs)[source]ο
Run query through GoogleSearch.
Parameters
query (str) β
kwargs (Any) β
Return type
Dict
run(query, **kwargs)[source]ο
Run query through GoogleSearch and parse result.
Parameters
query (str) β
kwargs (Any) β
Return type
str
class langchain.utilities.GraphQLAPIWrapper(*, custom_headers=None, graphql_endpoint, gql_client=None, gql_function)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around GraphQL API.
To use, you should have the gql python package installed.
This wrapper will use the GraphQL API to conduct queries.
Parameters
custom_headers (Optional[Dict[str, str]]) β
graphql_endpoint (str) β
gql_client (Any) β
gql_function (Callable[[str], Any]) β
Return type
None
attribute custom_headers: Optional[Dict[str, str]] = Noneο
attribute graphql_endpoint: str [Required]ο
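Example (a hedged sketch; the endpoint and query are illustrative, and it is assumed the wrapper builds its gql client internally when only graphql_endpoint is supplied):
from langchain.utilities import GraphQLAPIWrapper
graphql = GraphQLAPIWrapper(
    graphql_endpoint="https://countries.trevorblades.com/",   # assumed public GraphQL API
)
result = graphql.run("query { countries { code name } }")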
run(query)[source]ο
Run a GraphQL query and get the results.
Parameters
query (str) β
Return type
str
class langchain.utilities.JiraAPIWrapper(*, jira=None, confluence=None, jira_username=None, jira_api_token=None, jira_instance_url=None, operations=[{'mode': 'jql', 'name': 'JQL Query', 'description': '\n    This tool is a wrapper around atlassian-python-api\'s Jira jql API, useful when you need to search for Jira issues.\n    The input to this tool is a JQL query string, and will be passed into atlassian-python-api\'s Jira `jql` function,\n    For example, to find all the issues in project "Test" assigned to the me, you would pass in the following string:\n    project = Test AND assignee = currentUser()\n    or to find issues with summaries that contain the word "test", you would pass in the following string:\n    summary ~ \'test\'\n    '}, {'mode': 'get_projects', 'name': 'Get Projects', 'description': "\n    This tool is a wrapper around atlassian-python-api's Jira project API, \n    useful when you need to fetch all the projects the user has access to, find out how many projects there are, or as an intermediary step that involv searching by projects. \n    there is no input to this tool.\n    "}, {'mode': 'create_issue', 'name': 'Create Issue', 'description': '\n    This tool is a wrapper around atlassian-python-api\'s Jira issue_create API, useful when you need to create a Jira issue. \n    The input to this tool is a dictionary specifying the fields of the Jira issue, and will be passed into atlassian-python-api\'s Jira `issue_create` function.\n    For example, to create a low priority task called "test issue" with description "test description", you would pass in the following dictionary: \n    {{"summary": "test issue", "description": "test description", "issuetype": {{"name": "Task"}}, "priority": {{"name": "Low"}}}}\n    '}, {'mode': 'other', 'name': 'Catch all Jira API call', 'description': '\n    This tool is a wrapper around atlassian-python-api\'s Jira API.\n    There are other dedicated tools for fetching all projects, and creating and searching for issues, \n    use this tool if you need to perform any other actions allowed by the atlassian-python-api Jira API.\n    The input to this tool is line of python code that calls a function from atlassian-python-api\'s Jira API\n    For example, to update the summary field of an issue, you would pass in the following string:\n    self.jira.update_issue_field(key, {{"summary": "New summary"}})\n    or to find out how many projects are in the Jira instance, you would pass in the following string:\n    self.jira.projects()\n    For more information on the Jira API, refer to https://atlassian-python-api.readthedocs.io/jira.html\n    '}, {'mode': 'create_page', 'name': 'Create confluence page', 'description': 'This tool is a wrapper around atlassian-python-api\'s Confluence \natlassian-python-api API, useful when you need to create a Confluence page. The input to this tool is a dictionary \nspecifying the fields of the Confluence page, and will be passed into atlassian-python-api\'s Confluence `create_page` \nfunction. For example, to create a page in the DEMO space titled "This is the title" with body "This is the body. You can use \n<strong>HTML tags</strong>!", you would pass in the following dictionary: {{"space": "DEMO", "title":"This is the \ntitle","body":"This is the body. You can use <strong>HTML tags</strong>!"}} '}])[source]ο
897ef9cfbf02-15 | Bases: pydantic.main.BaseModel
Wrapper for Jira API.
Parameters
jira (Any) β
confluence (Any) β
jira_username (Optional[str]) β
jira_api_token (Optional[str]) β
jira_instance_url (Optional[str]) β
operations (List[Dict]) β
Return type
None
attribute confluence: Any = Noneο
attribute jira_api_token: Optional[str] = Noneο
attribute jira_instance_url: Optional[str] = Noneο
attribute jira_username: Optional[str] = Noneο | https://api.python.langchain.com/en/latest/modules/utilities.html |
897ef9cfbf02-16 | attribute operations: List[Dict] = [{'mode': 'jql', 'name': 'JQL Query', 'description': '\nΒ Β Β This tool is a wrapper around atlassian-python-api\'s Jira jql API, useful when you need to search for Jira issues.\nΒ Β Β The input to this tool is a JQL query string, and will be passed into atlassian-python-api\'s Jira `jql` function,\nΒ Β Β For example, to find all the issues in project "Test" assigned to the me, you would pass in the following string:\nΒ Β Β project = Test AND assignee = currentUser()\nΒ Β Β or to find issues with summaries that contain the word "test", you would pass in the following string:\nΒ Β Β summary ~ \'test\'\nΒ Β Β '}, {'mode': 'get_projects', 'name': 'Get Projects', 'description': "\nΒ Β Β This tool is a wrapper around atlassian-python-api's Jira project API, \nΒ Β Β useful when you need to fetch all the projects the user has access to, find out how many projects there are, or as an intermediary step that involv searching by projects. \nΒ Β Β there is no input to this tool.\nΒ Β Β "}, {'mode': 'create_issue', 'name': 'Create Issue', 'description': '\nΒ Β Β This tool is a wrapper around atlassian-python-api\'s Jira issue_create API, useful when you need to create a Jira issue. \nΒ Β Β The input to this tool is a dictionary specifying the fields of the Jira issue, and will be passed into atlassian-python-api\'s Jira `issue_create` function.\nΒ Β Β For example, to create a low priority task called "test issue" with description "test description", you would pass in the following dictionary: \nΒ Β Β {{"summary": "test issue", "description": "test description", "issuetype": {{"name": | https://api.python.langchain.com/en/latest/modules/utilities.html |
897ef9cfbf02-17 | "test issue", "description": "test description", "issuetype": {{"name": "Task"}}, "priority": {{"name": "Low"}}}}\nΒ Β Β '}, {'mode': 'other', 'name': 'Catch all Jira API call', 'description': '\nΒ Β Β This tool is a wrapper around atlassian-python-api\'s Jira API.\nΒ Β Β There are other dedicated tools for fetching all projects, and creating and searching for issues, \nΒ Β Β use this tool if you need to perform any other actions allowed by the atlassian-python-api Jira API.\nΒ Β Β The input to this tool is line of python code that calls a function from atlassian-python-api\'s Jira API\nΒ Β Β For example, to update the summary field of an issue, you would pass in the following string:\nΒ Β Β self.jira.update_issue_field(key, {{"summary": "New summary"}})\nΒ Β Β or to find out how many projects are in the Jira instance, you would pass in the following string:\nΒ Β Β self.jira.projects()\nΒ Β Β For more information on the Jira API, refer to https://atlassian-python-api.readthedocs.io/jira.html\nΒ Β Β '}, {'mode': 'create_page', 'name': 'Create confluence page', 'description': 'This tool is a wrapper around atlassian-python-api\'s Confluence \natlassian-python-api API, useful when you need to create a Confluence page. The input to this tool is a dictionary \nspecifying the fields of the Confluence page, and will be passed into atlassian-python-api\'s Confluence `create_page` \nfunction. For example, to create a page in the DEMO space titled "This is the title" with body "This is the body. You can use \n<strong>HTML tags</strong>!", you would pass in the following dictionary: {{"space": "DEMO", | https://api.python.langchain.com/en/latest/modules/utilities.html |
897ef9cfbf02-18 | you would pass in the following dictionary: {{"space": "DEMO", "title":"This is the \ntitle","body":"This is the body. You can use <strong>HTML tags</strong>!"}} '}]ο | https://api.python.langchain.com/en/latest/modules/utilities.html |
issue_create(query)[source]ο
Parameters
query (str) β
Return type
str
list()[source]ο
Return type
List[Dict]
other(query)[source]ο
Parameters
query (str) β
Return type
str
page_create(query)[source]ο
Parameters
query (str) β
Return type
str
parse_issues(issues)[source]ο
Parameters
issues (Dict) β
Return type
List[dict]
parse_projects(projects)[source]ο
Parameters
projects (List[dict]) β
Return type
List[dict]
project()[source]ο
Return type
str
run(mode, query)[source]ο
Parameters
mode (str) β
query (str) β
Return type
str
search(query)[source]ο
Parameters
query (str) β
Return type
str
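Example (a minimal usage sketch, not part of the original reference; it assumes Jira credentials are available via the JIRA_API_TOKEN, JIRA_USERNAME and JIRA_INSTANCE_URL environment variables, and that a project with key "Test" exists):
from langchain.utilities import JiraAPIWrapper

jira = JiraAPIWrapper()
# "jql" selects the JQL Query operation listed in `operations` above
issues = jira.run("jql", "project = Test AND assignee = currentUser()")
print(issues)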
class langchain.utilities.LambdaWrapper(*, lambda_client=None, function_name=None, awslambda_tool_name=None, awslambda_tool_description=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for AWS Lambda SDK.
Docs for using:
pip install boto3
Create a lambda function using the AWS Console or CLI
Run aws configure and enter your AWS credentials
Parameters
lambda_client (Any) β
function_name (Optional[str]) β
awslambda_tool_name (Optional[str]) β
awslambda_tool_description (Optional[str]) β
Return type
None
attribute awslambda_tool_description: Optional[str] = Noneο
attribute awslambda_tool_name: Optional[str] = Noneο
attribute function_name: Optional[str] = Noneο
run(query)[source]ο
Invoke Lambda function and parse result.
Parameters | https://api.python.langchain.com/en/latest/modules/utilities.html |
query (str) β
Return type
str
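Example (an illustrative sketch; "echo-text" is a hypothetical Lambda function name, and your AWS credentials are assumed to be configured, e.g. via aws configure):
from langchain.utilities import LambdaWrapper

awslambda = LambdaWrapper(
    function_name="echo-text",  # hypothetical function deployed in your account
    awslambda_tool_name="echo",
    awslambda_tool_description="Echoes the supplied text back to the caller",
)
print(awslambda.run("hello from langchain"))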
class langchain.utilities.MaxComputeAPIWrapper(client)[source]ο
Bases: object
Interface for querying Alibaba Cloud MaxCompute tables.
Parameters
client (ODPS) β
classmethod from_params(endpoint, project, *, access_id=None, secret_access_key=None)[source]ο
Convenience constructor that builds the odps.ODPS MaxCompute client from the given parameters.
Parameters
endpoint (str) β MaxCompute endpoint.
project (str) β A project is a basic organizational unit of MaxCompute, which is
similar to a database.
access_id (Optional[str]) β MaxCompute access ID. Should be passed in directly or set as the
environment variable MAX_COMPUTE_ACCESS_ID.
secret_access_key (Optional[str]) β MaxCompute secret access key. Should be passed in
directly or set as the environment variable
MAX_COMPUTE_SECRET_ACCESS_KEY.
Return type
langchain.utilities.max_compute.MaxComputeAPIWrapper
lazy_query(query)[source]ο
Parameters
query (str) β
Return type
Iterator[dict]
query(query)[source]ο
Parameters
query (str) β
Return type
List[dict]
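Example (a sketch only; the endpoint and project name are placeholders, and the access keys are assumed to be set via MAX_COMPUTE_ACCESS_ID / MAX_COMPUTE_SECRET_ACCESS_KEY):
from langchain.utilities import MaxComputeAPIWrapper

maxcompute = MaxComputeAPIWrapper.from_params(
    endpoint="https://service.<region>.maxcompute.aliyun.com/api",  # placeholder endpoint
    project="my_project",                                           # placeholder project name
)
rows = maxcompute.query("SELECT 1;")  # returns List[dict]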
class langchain.utilities.MetaphorSearchAPIWrapper(*, metaphor_api_key, k=10)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Metaphor Search API.
Parameters
metaphor_api_key (str) β
k (int) β
Return type
None
attribute k: int = 10ο
attribute metaphor_api_key: str [Required]ο | https://api.python.langchain.com/en/latest/modules/utilities.html |
results(query, num_results, include_domains=None, exclude_domains=None, start_crawl_date=None, end_crawl_date=None, start_published_date=None, end_published_date=None)[source]ο
Run query through Metaphor Search and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
include_domains (Optional[List[str]]) β
exclude_domains (Optional[List[str]]) β
start_crawl_date (Optional[str]) β
end_crawl_date (Optional[str]) β
start_published_date (Optional[str]) β
end_published_date (Optional[str]) β
Returns
title - The title of the result.
url - The url of the result.
author - Author of the content, if applicable. Otherwise, None.
published_date - Estimated date published in YYYY-MM-DD format. Otherwise, None.
Return type
A list of dictionaries with the following keys
async results_async(query, num_results, include_domains=None, exclude_domains=None, start_crawl_date=None, end_crawl_date=None, start_published_date=None, end_published_date=None)[source]ο
Get results from the Metaphor Search API asynchronously.
Parameters
query (str) β
num_results (int) β
include_domains (Optional[List[str]]) β
exclude_domains (Optional[List[str]]) β
start_crawl_date (Optional[str]) β
end_crawl_date (Optional[str]) β
start_published_date (Optional[str]) β
end_published_date (Optional[str]) β
Return type
List[Dict]
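Example (a sketch; the API key is passed explicitly here, or may be resolved from a METAPHOR_API_KEY environment variable if the wrapper supports it):
from langchain.utilities import MetaphorSearchAPIWrapper

metaphor = MetaphorSearchAPIWrapper(metaphor_api_key="...")
for result in metaphor.results("retrieval augmented generation", num_results=5):
    print(result["title"], result["url"])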
class langchain.utilities.OpenWeatherMapAPIWrapper(*, owm=None, openweathermap_api_key=None)[source]ο
Bases: pydantic.main.BaseModel | https://api.python.langchain.com/en/latest/modules/utilities.html |
Wrapper for OpenWeatherMap API using PyOWM.
Docs for using:
Go to OpenWeatherMap and sign up for an API key
Save your API KEY into OPENWEATHERMAP_API_KEY env variable
pip install pyowm
Parameters
owm (Any) β
openweathermap_api_key (Optional[str]) β
Return type
None
attribute openweathermap_api_key: Optional[str] = Noneο
attribute owm: Any = Noneο
run(location)[source]ο
Get the current weather information for a specified location.
Parameters
location (str) β
Return type
str
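Example (a sketch; assumes OPENWEATHERMAP_API_KEY is set as described above):
from langchain.utilities import OpenWeatherMapAPIWrapper

weather = OpenWeatherMapAPIWrapper()
print(weather.run("London,GB"))  # returns a formatted weather summary string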
class langchain.utilities.PowerBIDataset(*, dataset_id, table_names, group_id=None, credential=None, token=None, impersonated_user_name=None, sample_rows_in_table_info=1, schemas=None, aiosession=None)[source]ο
Bases: pydantic.main.BaseModel
Create PowerBI engine from dataset ID and credential or token.
Use either the credential or a supplied token to authenticate.
If both are supplied the credential is used to generate a token.
The impersonated_user_name is the UPN of a user to be impersonated.
If the model is not RLS enabled, this will be ignored.
Parameters
dataset_id (str) β
table_names (List[str]) β
group_id (Optional[str]) β
credential (Optional[TokenCredential]) β
token (Optional[str]) β
impersonated_user_name (Optional[str]) β
sample_rows_in_table_info (langchain.utilities.powerbi.ConstrainedIntValue) β
schemas (Dict[str, str]) β
aiosession (Optional[aiohttp.client.ClientSession]) β
Return type
None | https://api.python.langchain.com/en/latest/modules/utilities.html |
attribute aiosession: Optional[aiohttp.ClientSession] = Noneο
attribute credential: Optional[TokenCredential] = Noneο
attribute dataset_id: str [Required]ο
attribute group_id: Optional[str] = Noneο
attribute impersonated_user_name: Optional[str] = Noneο
attribute sample_rows_in_table_info: int = 1ο
Constraints
exclusiveMinimum = 0
maximum = 10
attribute schemas: Dict[str, str] [Optional]ο
attribute table_names: List[str] [Required]ο
attribute token: Optional[str] = Noneο
async aget_table_info(table_names=None)[source]ο
Get information about specified tables.
Parameters
table_names (Optional[Union[List[str], str]]) β
Return type
str
async arun(command)[source]ο
Execute a DAX command and return the result asynchronously.
Parameters
command (str) β
Return type
Any
get_schemas()[source]ο
Get the available schemas.
Return type
str
get_table_info(table_names=None)[source]ο
Get information about specified tables.
Parameters
table_names (Optional[Union[List[str], str]]) β
Return type
str
get_table_names()[source]ο
Get names of tables available.
Return type
Iterable[str]
run(command)[source]ο
Execute a DAX command and return a json representing the results.
Parameters
command (str) β
Return type
Any
property headers: Dict[str, str]ο
Get the token.
property request_url: strο
Get the request url.
property table_info: strο
Information about all tables in the database. | https://api.python.langchain.com/en/latest/modules/utilities.html |
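Example (a sketch; the dataset id and table name are placeholders, and DefaultAzureCredential from the azure-identity package is one possible TokenCredential implementation):
from azure.identity import DefaultAzureCredential
from langchain.utilities import PowerBIDataset

dataset = PowerBIDataset(
    dataset_id="<dataset-guid>",          # placeholder dataset id
    table_names=["Sales"],                # placeholder table name
    credential=DefaultAzureCredential(),
)
print(dataset.get_table_info())
print(dataset.run('EVALUATE ROW("row_count", COUNTROWS(Sales))'))  # DAX command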
class langchain.utilities.PubMedAPIWrapper(*, top_k_results=3, load_max_docs=25, doc_content_chars_max=2000, load_all_available_meta=False, email='your_email@example.com', base_url_esearch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?', base_url_efetch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?', max_retry=5, sleep_time=0.2, ARXIV_MAX_QUERY_LENGTH=300)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around PubMed API.
This wrapper will use the PubMed API to conduct searches and fetch
document summaries. By default, it will return the document summaries
of the top-k results of an input search.
Parameters
top_k_results (int) β number of top-scored documents used for the PubMed tool
load_max_docs (int) β a limit to the number of loaded documents
load_all_available_meta (bool) β
if True: the metadata of the loaded Documents gets all available meta info (see https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch)
if False: the metadata gets only the most informative fields.
doc_content_chars_max (int) β
email (str) β
base_url_esearch (str) β
base_url_efetch (str) β
max_retry (int) β
sleep_time (float) β
ARXIV_MAX_QUERY_LENGTH (int) β
Return type
None
attribute doc_content_chars_max: int = 2000ο
attribute email: str = 'your_email@example.com'ο
attribute load_all_available_meta: bool = Falseο | https://api.python.langchain.com/en/latest/modules/utilities.html |
attribute load_max_docs: int = 25ο
attribute top_k_results: int = 3ο
load(query)[source]ο
Search PubMed for documents matching the query.
Return a list of dictionaries containing the document metadata.
Parameters
query (str) β
Return type
List[dict]
load_docs(query)[source]ο
Parameters
query (str) β
Return type
List[langchain.schema.Document]
retrieve_article(uid, webenv)[source]ο
Parameters
uid (str) β
webenv (str) β
Return type
dict
run(query)[source]ο
Run PubMed search and get the article meta information.
See https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch
It uses only the most informative fields of article meta information.
Parameters
query (str) β
Return type
str
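Example (a sketch of the synchronous entry points):
from langchain.utilities import PubMedAPIWrapper

pubmed = PubMedAPIWrapper(top_k_results=3)
print(pubmed.run("covid-19 vaccine efficacy"))        # article meta information as a string
docs = pubmed.load_docs("covid-19 vaccine efficacy")  # the same results as Document objects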
class langchain.utilities.PythonREPL(*, _globals=None, _locals=None)[source]ο
Bases: pydantic.main.BaseModel
Simulates a standalone Python REPL.
Parameters
_globals (Optional[Dict]) β
_locals (Optional[Dict]) β
Return type
None
attribute globals: Optional[Dict] [Optional] (alias '_globals')ο
attribute locals: Optional[Dict] [Optional] (alias '_locals')ο
run(command)[source]ο
Run command with own globals/locals and returns anything printed.
Parameters
command (str) β
Return type
str
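Example (a sketch; note that only output printed to stdout is captured and returned):
from langchain.utilities import PythonREPL

repl = PythonREPL()
print(repl.run("x = 2 ** 10\nprint(x)"))  # "1024"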
pydantic settings langchain.utilities.SceneXplainAPIWrapper[source]ο
Bases: pydantic.env_settings.BaseSettings, pydantic.main.BaseModel
Wrapper for SceneXplain API. | https://api.python.langchain.com/en/latest/modules/utilities.html |
In order to set this up, you need API key for the SceneXplain API.
You can obtain a key by following the steps below.
- Sign up for a free account at https://scenex.jina.ai/.
- Navigate to the API Access page (https://scenex.jina.ai/api)
and create a new API key.
Show JSON schema{
"title": "SceneXplainAPIWrapper",
"description": "Wrapper for SceneXplain API.\n\nIn order to set this up, you need API key for the SceneXplain API.\nYou can obtain a key by following the steps below.\n- Sign up for a free account at https://scenex.jina.ai/.\n- Navigate to the API Access page (https://scenex.jina.ai/api)\n and create a new API key.",
"type": "object",
"properties": {
"scenex_api_key": {
"title": "Scenex Api Key",
"env": "SCENEX_API_KEY",
"env_names": "{'scenex_api_key'}",
"type": "string"
},
"scenex_api_url": {
"title": "Scenex Api Url",
"default": "https://us-central1-causal-diffusion.cloudfunctions.net/describe",
"env_names": "{'scenex_api_url'}",
"type": "string"
}
},
"required": [
"scenex_api_key"
],
"additionalProperties": false
}
Fields
scenex_api_key (str)
scenex_api_url (str) | https://api.python.langchain.com/en/latest/modules/utilities.html |
attribute scenex_api_key: str [Required]ο
attribute scenex_api_url: str = 'https://us-central1-causal-diffusion.cloudfunctions.net/describe'ο
run(image)[source]ο
Run SceneXplain image explainer.
Parameters
image (str) β
Return type
str
validator validate_environment » all fields[source]ο
Validate that api key exists in environment.
Parameters
values (Dict) β
Return type
Dict
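Example (a sketch; assumes SCENEX_API_KEY is set, and the image URL below is a placeholder for any publicly reachable image):
from langchain.utilities import SceneXplainAPIWrapper

scenex = SceneXplainAPIWrapper()
print(scenex.run("https://example.com/some-image.jpg"))  # textual description of the image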
class langchain.utilities.SearxSearchWrapper(*, searx_host='', unsecure=False, params=None, headers=None, engines=[], categories=[], query_suffix='', k=10, aiosession=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Searx API.
To use you need to provide the searx host by passing the named parameter
searx_host or exporting the environment variable SEARX_HOST.
In some situations you might want to disable SSL verification, for example
if you are running searx locally. You can do this by passing the named parameter
unsecure. You can also pass the host url scheme as http to disable SSL.
Example
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")
Example with SSL disabled:
from langchain.utilities import SearxSearchWrapper
# note the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://localhost:8888",
unsecure=True)
Parameters
searx_host (str) β
unsecure (bool) β | https://api.python.langchain.com/en/latest/modules/utilities.html |
params (dict) β
headers (Optional[dict]) β
engines (Optional[List[str]]) β
categories (Optional[List[str]]) β
query_suffix (Optional[str]) β
k (int) β
aiosession (Optional[Any]) β
Return type
None
attribute aiosession: Optional[Any] = Noneο
attribute categories: Optional[List[str]] = []ο
attribute engines: Optional[List[str]] = []ο
attribute headers: Optional[dict] = Noneο
attribute k: int = 10ο
attribute params: dict [Optional]ο
attribute query_suffix: Optional[str] = ''ο
attribute searx_host: str = ''ο
attribute unsecure: bool = Falseο
async aresults(query, num_results, engines=None, query_suffix='', **kwargs)[source]ο
Asynchronously query with json results.
Uses aiohttp. See results for more info.
Parameters
query (str) β
num_results (int) β
engines (Optional[List[str]]) β
query_suffix (Optional[str]) β
kwargs (Any) β
Return type
List[Dict]
async arun(query, engines=None, query_suffix='', **kwargs)[source]ο
Asynchronous version of run.
Parameters
query (str) β
engines (Optional[List[str]]) β
query_suffix (Optional[str]) β
kwargs (Any) β
Return type
str
results(query, num_results, engines=None, categories=None, query_suffix='', **kwargs)[source]ο
Run query through Searx API and returns the results with metadata.
Parameters
query (str) β The query to search for. | https://api.python.langchain.com/en/latest/modules/utilities.html |
query_suffix (Optional[str]) β Extra suffix appended to the query.
num_results (int) β Limit the number of results to return.
engines (Optional[List[str]]) β List of engines to use for the query.
categories (Optional[List[str]]) β List of categories to use for the query.
**kwargs β extra parameters to pass to the searx API.
kwargs (Any) β
Returns
{snippet: The description of the result.
title: The title of the result.
link: The link to the result.
engines: The engines used for the result.
category: Searx category of the result.
}
Return type
Dict with the following keys
run(query, engines=None, categories=None, query_suffix='', **kwargs)[source]ο
Run query through Searx API and parse results.
You can pass any other params to the searx query API.
Parameters
query (str) β The query to search for.
query_suffix (Optional[str]) β Extra suffix appended to the query.
engines (Optional[List[str]]) β List of engines to use for the query.
categories (Optional[List[str]]) β List of categories to use for the query.
**kwargs β extra parameters to pass to the searx API.
kwargs (Any) β
Returns
The result of the query.
Return type
str
Raises
ValueError β If an error occurred with the query.
Example
This will make a query to the qwant engine:
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France ?", engine="qwant") | https://api.python.langchain.com/en/latest/modules/utilities.html |
# the same result can be achieved using the `!` syntax of searx
# to select the engine using `query_suffix`
searx.run("what is the weather in France ?", query_suffix="!qwant")
class langchain.utilities.SerpAPIWrapper(*, search_engine=None, params={'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}, serpapi_api_key=None, aiosession=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around SerpAPI.
To use, you should have the google-search-results python package installed,
and the environment variable SERPAPI_API_KEY set with your API key, or pass
serpapi_api_key as a named parameter to the constructor.
Example
from langchain.utilities import SerpAPIWrapper
serpapi = SerpAPIWrapper()
Parameters
search_engine (Any) β
params (dict) β
serpapi_api_key (Optional[str]) β
aiosession (Optional[aiohttp.client.ClientSession]) β
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = Noneο
attribute params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}ο
attribute serpapi_api_key: Optional[str] = Noneο
async aresults(query)[source]ο
Use aiohttp to run query through SerpAPI and return the results async.
Parameters
query (str) β
Return type
dict
async arun(query, **kwargs)[source]ο
Run query through SerpAPI and parse result async.
Parameters | https://api.python.langchain.com/en/latest/modules/utilities.html |
query (str) β
kwargs (Any) β
Return type
str
get_params(query)[source]ο
Get parameters for SerpAPI.
Parameters
query (str) β
Return type
Dict[str, str]
results(query)[source]ο
Run query through SerpAPI and return the raw result.
Parameters
query (str) β
Return type
dict
run(query, **kwargs)[source]ο
Run query through SerpAPI and parse result.
Parameters
query (str) β
kwargs (Any) β
Return type
str
class langchain.utilities.SparkSQL(spark_session=None, catalog=None, schema=None, ignore_tables=None, include_tables=None, sample_rows_in_table_info=3)[source]ο
Bases: object
Parameters
spark_session (Optional[SparkSession]) β
catalog (Optional[str]) β
schema (Optional[str]) β
ignore_tables (Optional[List[str]]) β
include_tables (Optional[List[str]]) β
sample_rows_in_table_info (int) β
classmethod from_uri(database_uri, engine_args=None, **kwargs)[source]ο
Creating a remote Spark Session via Spark Connect.
For example: SparkSQL.from_uri("sc://localhost:15002")
Parameters
database_uri (str) β
engine_args (Optional[dict]) β
kwargs (Any) β
Return type
langchain.utilities.spark_sql.SparkSQL
get_usable_table_names()[source]ο
Get names of tables available.
Return type
Iterable[str]
get_table_info(table_names=None)[source]ο
Parameters
table_names (Optional[List[str]]) β
Return type
str
run(command, fetch='all')[source]ο | https://api.python.langchain.com/en/latest/modules/utilities.html |
Parameters
command (str) β
fetch (str) β
Return type
str
get_table_info_no_throw(table_names=None)[source]ο
Get information about specified tables.
Follows best practices as specified in: Rajkumar et al, 2022
(https://arxiv.org/abs/2204.00498)
If sample_rows_in_table_info, the specified number of sample rows will be
appended to each table description. This can increase performance as
demonstrated in the paper.
Parameters
table_names (Optional[List[str]]) β
Return type
str
run_no_throw(command, fetch='all')[source]ο
Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
If the statement throws an error, the error message is returned.
Parameters
command (str) β
fetch (str) β
Return type
str
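Example (a sketch; assumes a Spark Connect server is reachable at the given URI, that the schema keyword is forwarded to the constructor, and that a table named some_table exists):
from langchain.utilities import SparkSQL

spark_sql = SparkSQL.from_uri("sc://localhost:15002", schema="default")
print(spark_sql.get_usable_table_names())
print(spark_sql.run("SELECT COUNT(*) FROM some_table"))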
class langchain.utilities.TextRequestsWrapper(*, headers=None, aiosession=None)[source]ο
Bases: pydantic.main.BaseModel
Lightweight wrapper around requests library.
The main purpose of this wrapper is to always return a text output.
Parameters
headers (Optional[Dict[str, str]]) β
aiosession (Optional[aiohttp.client.ClientSession]) β
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = Noneο
attribute headers: Optional[Dict[str, str]] = Noneο
async adelete(url, **kwargs)[source]ο
DELETE the URL and return the text asynchronously.
Parameters
url (str) β
kwargs (Any) β | https://api.python.langchain.com/en/latest/modules/utilities.html |
Return type
str
async aget(url, **kwargs)[source]ο
GET the URL and return the text asynchronously.
Parameters
url (str) β
kwargs (Any) β
Return type
str
async apatch(url, data, **kwargs)[source]ο
PATCH the URL and return the text asynchronously.
Parameters
url (str) β
data (Dict[str, Any]) β
kwargs (Any) β
Return type
str
async apost(url, data, **kwargs)[source]ο
POST to the URL and return the text asynchronously.
Parameters
url (str) β
data (Dict[str, Any]) β
kwargs (Any) β
Return type
str
async aput(url, data, **kwargs)[source]ο
PUT the URL and return the text asynchronously.
Parameters
url (str) β
data (Dict[str, Any]) β
kwargs (Any) β
Return type
str
delete(url, **kwargs)[source]ο
DELETE the URL and return the text.
Parameters
url (str) β
kwargs (Any) β
Return type
str
get(url, **kwargs)[source]ο
GET the URL and return the text.
Parameters
url (str) β
kwargs (Any) β
Return type
str
patch(url, data, **kwargs)[source]ο
PATCH the URL and return the text.
Parameters
url (str) β
data (Dict[str, Any]) β
kwargs (Any) β
Return type
str
post(url, data, **kwargs)[source]ο
POST to the URL and return the text.
Parameters
url (str) β | https://api.python.langchain.com/en/latest/modules/utilities.html |
data (Dict[str, Any]) β
kwargs (Any) β
Return type
str
put(url, data, **kwargs)[source]ο
PUT the URL and return the text.
Parameters
url (str) β
data (Dict[str, Any]) β
kwargs (Any) β
Return type
str
property requests: langchain.requests.Requestsο
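Example (a sketch using httpbin.org as a stand-in endpoint):
from langchain.utilities import TextRequestsWrapper

requests_wrapper = TextRequestsWrapper(headers={"Accept": "application/json"})
body = requests_wrapper.get("https://httpbin.org/get")                    # response body as text
created = requests_wrapper.post("https://httpbin.org/post", data={"k": "v"})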
class langchain.utilities.TwilioAPIWrapper(*, client=None, account_sid=None, auth_token=None, from_number=None)[source]ο
Bases: pydantic.main.BaseModel
Messaging Client using Twilio.
To use, you should have the twilio python package installed,
and the environment variables TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, and
TWILIO_FROM_NUMBER, or pass account_sid, auth_token, and from_number as
named parameters to the constructor.
Example
from langchain.utilities.twilio import TwilioAPIWrapper
twilio = TwilioAPIWrapper(
account_sid="ACxxx",
auth_token="xxx",
from_number="+10123456789"
)
twilio.run('test', '+12484345508')
Parameters
client (Any) β
account_sid (Optional[str]) β
auth_token (Optional[str]) β
from_number (Optional[str]) β
Return type
None
attribute account_sid: Optional[str] = Noneο
Twilio account string identifier.
attribute auth_token: Optional[str] = Noneο
Twilio auth token.
attribute from_number: Optional[str] = Noneο
A Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164)
format, an | https://api.python.langchain.com/en/latest/modules/utilities.html |
[alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id),
or a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses)
that is enabled for the type of message you want to send. Phone numbers or
[short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from
Twilio also work here. You cannot, for example, spoof messages from a private
cell phone number. If you are using messaging_service_sid, this parameter
must be empty.
run(body, to)[source]ο
Run body through Twilio and respond with message sid.
Parameters
body (str) β The text of the message you want to send. Can be up to 1,600
characters in length.
to (str) β The destination phone number in
[E.164](https://www.twilio.com/docs/glossary/what-e164) format for
SMS/MMS or
[Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses)
for other 3rd-party channels.
Return type
str
class langchain.utilities.WikipediaAPIWrapper(*, wiki_client=None, top_k_results=3, lang='en', load_all_available_meta=False, doc_content_chars_max=4000)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around WikipediaAPI.
To use, you should have the wikipedia python package installed.
This wrapper will use the Wikipedia API to conduct searches and
fetch page summaries. By default, it will return the page summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
Parameters
wiki_client (Any) β
top_k_results (int) β | https://api.python.langchain.com/en/latest/modules/utilities.html |
lang (str) β
load_all_available_meta (bool) β
doc_content_chars_max (int) β
Return type
None
attribute doc_content_chars_max: int = 4000ο
attribute lang: str = 'en'ο
attribute load_all_available_meta: bool = Falseο
attribute top_k_results: int = 3ο
load(query)[source]ο
Run Wikipedia search and get the article text plus the meta information.
Returns: a list of documents.
Parameters
query (str) β
Return type
List[langchain.schema.Document]
run(query)[source]ο
Run Wikipedia search and get page summaries.
Parameters
query (str) β
Return type
str
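Example (a sketch; requires the wikipedia package mentioned above):
from langchain.utilities import WikipediaAPIWrapper

wiki = WikipediaAPIWrapper(top_k_results=2, doc_content_chars_max=1000)
print(wiki.run("Large language model"))   # page summaries as a single string
docs = wiki.load("Large language model")  # full article text as Document objects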
class langchain.utilities.WolframAlphaAPIWrapper(*, wolfram_client=None, wolfram_alpha_appid=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Wolfram Alpha.
Docs for using:
Go to wolfram alpha and sign up for a developer account
Create an app and get your APP ID
Save your APP ID into WOLFRAM_ALPHA_APPID env variable
pip install wolframalpha
Parameters
wolfram_client (Any) β
wolfram_alpha_appid (Optional[str]) β
Return type
None
attribute wolfram_alpha_appid: Optional[str] = Noneο
run(query)[source]ο
Run query through WolframAlpha and parse result.
Parameters
query (str) β
Return type
str | https://api.python.langchain.com/en/latest/modules/utilities.html |
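Example (a sketch; assumes WOLFRAM_ALPHA_APPID is set and the wolframalpha package is installed):
from langchain.utilities import WolframAlphaAPIWrapper

wolfram = WolframAlphaAPIWrapper()
print(wolfram.run("integrate x^2 from 0 to 3"))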
class langchain.utilities.ZapierNLAWrapper(*, zapier_nla_api_key, zapier_nla_oauth_access_token, zapier_nla_api_base='https://nla.zapier.com/api/v1/')[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Zapier NLA.
Full docs here: https://nla.zapier.com/start/
This wrapper supports both API Key and OAuth Credential auth methods. API Key
is the fastest way to get started using this wrapper.
Call this wrapper with either zapier_nla_api_key or
zapier_nla_oauth_access_token arguments, or set the ZAPIER_NLA_API_KEY
environment variable. If both arguments are set, the Access Token will take
precedence.
For use-cases where LangChain + Zapier NLA is powering a user-facing application,
and LangChain needs access to the end-userβs connected accounts on Zapier.com,
youβll need to use OAuth. Review the full docs above to learn how to create
your own provider and generate credentials.
Parameters
zapier_nla_api_key (str) β
zapier_nla_oauth_access_token (str) β
zapier_nla_api_base (str) β
Return type
None
attribute zapier_nla_api_base: str = 'https://nla.zapier.com/api/v1/'ο
attribute zapier_nla_api_key: str [Required]ο
attribute zapier_nla_oauth_access_token: str [Required]ο
async alist()[source]ο
Returns a list of all exposed (enabled) actions associated with
current user (associated with the set api_key). Change your exposed
actions here: https://nla.zapier.com/demo/start/ | https://api.python.langchain.com/en/latest/modules/utilities.html |
The return list can be empty if no actions are exposed. Otherwise it will contain
a list of action objects:
[{"id": str,
"description": str,
"params": Dict[str, str]
}]
params will always contain an instructions key, the only required
param. All others are optional and, if provided, will override any AI guesses
(see "understanding the AI guessing flow" here:
https://nla.zapier.com/api/v1/docs)
Return type
List[Dict]
async alist_as_str()[source]ο
Same as list, but returns a stringified version of the JSON for
inserting back into an LLM.
Return type
str
async apreview(action_id, instructions, params=None)[source]ο
Same as run, but instead of actually executing the action, this will
return a preview of the params that have been guessed by the AI, in
case you need to explicitly review them before executing.
Parameters
action_id (str) β
instructions (str) β
params (Optional[Dict]) β
Return type
Dict
async apreview_as_str(*args, **kwargs)[source]ο
Same as preview, but returns a stringified version of the JSON for
inserting back into an LLM.
Return type
str
async arun(action_id, instructions, params=None)[source]ο
Executes an action that is identified by action_id, which must be exposed
(enabled) by the current user (associated with the set api_key). Change
your exposed actions here: https://nla.zapier.com/demo/start/
The return JSON is guaranteed to be less than ~500 words (350
tokens) making it safe to inject into the prompt of another LLM
call.
Parameters
action_id (str) β | https://api.python.langchain.com/en/latest/modules/utilities.html |
instructions (str) β
params (Optional[Dict]) β
Return type
Dict
async arun_as_str(*args, **kwargs)[source]ο
Same as run, but returns a stringified version of the JSON for
inserting back into an LLM.
Return type
str
list()[source]ο
Returns a list of all exposed (enabled) actions associated with
current user (associated with the set api_key). Change your exposed
actions here: https://nla.zapier.com/demo/start/
The return list can be empty if no actions are exposed. Otherwise it will contain
a list of action objects:
[{"id": str,
"description": str,
"params": Dict[str, str]
}]
params will always contain an instructions key, the only required
param. All others are optional and, if provided, will override any AI guesses
(see "understanding the AI guessing flow" here:
https://nla.zapier.com/docs/using-the-api#ai-guessing)
Return type
List[Dict]
list_as_str()[source]ο
Same as list, but returns a stringified version of the JSON for
inserting back into an LLM.
Return type
str
preview(action_id, instructions, params=None)[source]ο
Same as run, but instead of actually executing the action, this will
return a preview of the params that have been guessed by the AI, in
case you need to explicitly review them before executing.
Parameters
action_id (str) β
instructions (str) β
params (Optional[Dict]) β
Return type
Dict
preview_as_str(*args, **kwargs)[source]ο
Same as preview, but returns a stringified version of the JSON for | https://api.python.langchain.com/en/latest/modules/utilities.html |
inserting back into an LLM.
Return type
str
run(action_id, instructions, params=None)[source]ο
Executes an action that is identified by action_id, which must be exposed
(enabled) by the current user (associated with the set api_key). Change
your exposed actions here: https://nla.zapier.com/demo/start/
The return JSON is guaranteed to be less than ~500 words (350
tokens) making it safe to inject into the prompt of another LLM
call.
Parameters
action_id (str) β
instructions (str) β
params (Optional[Dict]) β
Return type
Dict
run_as_str(*args, **kwargs)[source]ο
Same as run, but returns a stringified version of the JSON for
inserting back into an LLM.
Return type
str | https://api.python.langchain.com/en/latest/modules/utilities.html |
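Example (a sketch; assumes ZAPIER_NLA_API_KEY is set and that at least one action has been exposed for the account):
from langchain.utilities import ZapierNLAWrapper

zapier = ZapierNLAWrapper()
actions = zapier.list()  # enabled actions for this account
if actions:
    action_id = actions[0]["id"]
    # preview the AI-guessed params before actually executing the action
    print(zapier.preview(action_id, "Send a test message saying hello"))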
Prompt Templatesο
Prompt template classes.
class langchain.prompts.AIMessagePromptTemplate(*, prompt, additional_kwargs=None)[source]ο
Bases: langchain.prompts.chat.BaseStringMessagePromptTemplate
Parameters
prompt (langchain.prompts.base.StringPromptTemplate) β
additional_kwargs (dict) β
Return type
None
format(**kwargs)[source]ο
To a BaseMessage.
Parameters
kwargs (Any) β
Return type
langchain.schema.BaseMessage
class langchain.prompts.BaseChatPromptTemplate(*, input_variables, output_parser=None, partial_variables=None)[source]ο
Bases: langchain.prompts.base.BasePromptTemplate, abc.ABC
Parameters
input_variables (List[str]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
Return type
None
format(**kwargs)[source]ο
Format the prompt with the inputs.
Parameters
kwargs (Any) β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Return type
str
Example:
prompt.format(variable1="foo")
abstract format_messages(**kwargs)[source]ο
Format kwargs into a list of messages.
Parameters
kwargs (Any) β
Return type
List[langchain.schema.BaseMessage]
format_prompt(**kwargs)[source]ο
Create Chat Messages.
Parameters
kwargs (Any) β
Return type
langchain.schema.PromptValue
class langchain.prompts.BasePromptTemplate(*, input_variables, output_parser=None, partial_variables=None)[source]ο
Bases: langchain.load.serializable.Serializable, abc.ABC
Base class for all prompt templates, returning a prompt.
Parameters
input_variables (List[str]) β | https://api.python.langchain.com/en/latest/modules/prompts.html |
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
Return type
None
attribute input_variables: List[str] [Required]ο
A list of the names of the variables the prompt template expects.
attribute output_parser: Optional[langchain.schema.BaseOutputParser] = Noneο
How to parse the output of calling an LLM on this formatted prompt.
attribute partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]ο
dict(**kwargs)[source]ο
Return dictionary representation of prompt.
Parameters
kwargs (Any) β
Return type
Dict
abstract format(**kwargs)[source]ο
Format the prompt with the inputs.
Parameters
kwargs (Any) β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Return type
str
Example:
prompt.format(variable1="foo")
abstract format_prompt(**kwargs)[source]ο
Create Chat Messages.
Parameters
kwargs (Any) β
Return type
langchain.schema.PromptValue
partial(**kwargs)[source]ο
Return a partial of the prompt template.
Parameters
kwargs (Union[str, Callable[[], str]]) β
Return type
langchain.prompts.base.BasePromptTemplate
save(file_path)[source]ο
Save the prompt.
Parameters
file_path (Union[pathlib.Path, str]) β Path to directory to save prompt to.
Return type
None
Example:
prompt.save(file_path="path/prompt.yaml")
property lc_serializable: boolο
Return whether or not the class is serializable. | https://api.python.langchain.com/en/latest/modules/prompts.html |
class langchain.prompts.ChatMessagePromptTemplate(*, prompt, additional_kwargs=None, role)[source]ο
Bases: langchain.prompts.chat.BaseStringMessagePromptTemplate
Parameters
prompt (langchain.prompts.base.StringPromptTemplate) β
additional_kwargs (dict) β
role (str) β
Return type
None
attribute role: str [Required]ο
format(**kwargs)[source]ο
To a BaseMessage.
Parameters
kwargs (Any) β
Return type
langchain.schema.BaseMessage
class langchain.prompts.ChatPromptTemplate(*, input_variables, output_parser=None, partial_variables=None, messages)[source]ο
Bases: langchain.prompts.chat.BaseChatPromptTemplate, abc.ABC
Parameters
input_variables (List[str]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
messages (List[Union[langchain.prompts.chat.BaseMessagePromptTemplate, langchain.schema.BaseMessage]]) β
Return type
None
attribute input_variables: List[str] [Required]ο
A list of the names of the variables the prompt template expects.
attribute messages: List[Union[BaseMessagePromptTemplate, BaseMessage]] [Required]ο
format(**kwargs)[source]ο
Format the prompt with the inputs.
Parameters
kwargs (Any) β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Return type
str
Example:
prompt.format(variable1="foo")
format_messages(**kwargs)[source]ο
Format kwargs into a list of messages.
Parameters
kwargs (Any) β
Return type | https://api.python.langchain.com/en/latest/modules/prompts.html |
List[langchain.schema.BaseMessage]
classmethod from_messages(messages)[source]ο
Parameters
messages (Sequence[Union[langchain.prompts.chat.BaseMessagePromptTemplate, langchain.schema.BaseMessage]]) β
Return type
langchain.prompts.chat.ChatPromptTemplate
classmethod from_role_strings(string_messages)[source]ο
Parameters
string_messages (List[Tuple[str, str]]) β
Return type
langchain.prompts.chat.ChatPromptTemplate
classmethod from_strings(string_messages)[source]ο
Parameters
string_messages (List[Tuple[Type[langchain.prompts.chat.BaseMessagePromptTemplate], str]]) β
Return type
langchain.prompts.chat.ChatPromptTemplate
classmethod from_template(template, **kwargs)[source]ο
Parameters
template (str) β
kwargs (Any) β
Return type
langchain.prompts.chat.ChatPromptTemplate
partial(**kwargs)[source]ο
Return a partial of the prompt template.
Parameters
kwargs (Union[str, Callable[[], str]]) β
Return type
langchain.prompts.base.BasePromptTemplate
save(file_path)[source]ο
Save the prompt.
Parameters
file_path (Union[pathlib.Path, str]) β Path to directory to save prompt to.
Return type
None
Example:
prompt.save(file_path="path/prompt.yaml")
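Example (a sketch of composing a chat prompt from message templates):
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

chat_prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("You are a helpful assistant that translates {input_language} to {output_language}."),
    HumanMessagePromptTemplate.from_template("{text}"),
])
# format_messages fills the variables and returns a list of BaseMessage objects
messages = chat_prompt.format_messages(input_language="English", output_language="French", text="I love programming.")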
class langchain.prompts.FewShotPromptTemplate(*, input_variables, output_parser=None, partial_variables=None, examples=None, example_selector=None, example_prompt, suffix, example_separator='\n\n', prefix='', template_format='f-string', validate_template=True)[source]ο
Bases: langchain.prompts.base.StringPromptTemplate
Prompt template that contains few shot examples.
Parameters
input_variables (List[str]) β | https://api.python.langchain.com/en/latest/modules/prompts.html |
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
examples (Optional[List[dict]]) β
example_selector (Optional[langchain.prompts.example_selector.base.BaseExampleSelector]) β
example_prompt (langchain.prompts.prompt.PromptTemplate) β
suffix (str) β
example_separator (str) β
prefix (str) β
template_format (str) β
validate_template (bool) β
Return type
None
attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]ο
PromptTemplate used to format an individual example.
attribute example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = Noneο
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
attribute example_separator: str = '\n\n'ο
String separator used to join the prefix, the examples, and suffix.
attribute examples: Optional[List[dict]] = Noneο
Examples to format into the prompt.
Either this or example_selector should be provided.
attribute input_variables: List[str] [Required]ο
A list of the names of the variables the prompt template expects.
attribute prefix: str = ''ο
A prompt template string to put before the examples.
attribute suffix: str [Required]ο
A prompt template string to put after the examples.
attribute template_format: str = 'f-string'ο
The format of the prompt template. Options are: βf-stringβ, βjinja2β.
attribute validate_template: bool = Trueο
Whether or not to try validating the template. | https://api.python.langchain.com/en/latest/modules/prompts.html |
dict(**kwargs)[source]ο
Return a dictionary of the prompt.
Parameters
kwargs (Any) β
Return type
Dict
format(**kwargs)[source]ο
Format the prompt with the inputs.
Parameters
kwargs (Any) β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Return type
str
Example:
prompt.format(variable1="foo")
property lc_serializable: boolο
Return whether or not the class is serializable.
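Example (a sketch with two inline examples):
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)
few_shot_prompt = FewShotPromptTemplate(
    examples=[{"word": "happy", "antonym": "sad"}, {"word": "tall", "antonym": "short"}],
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
)
print(few_shot_prompt.format(input="big"))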
class langchain.prompts.FewShotPromptWithTemplates(*, input_variables, output_parser=None, partial_variables=None, examples=None, example_selector=None, example_prompt, suffix, example_separator='\n\n', prefix=None, template_format='f-string', validate_template=True)[source]ο
Bases: langchain.prompts.base.StringPromptTemplate
Prompt template that contains few shot examples.
Parameters
input_variables (List[str]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
examples (Optional[List[dict]]) β
example_selector (Optional[langchain.prompts.example_selector.base.BaseExampleSelector]) β
example_prompt (langchain.prompts.prompt.PromptTemplate) β
suffix (langchain.prompts.base.StringPromptTemplate) β
example_separator (str) β
prefix (Optional[langchain.prompts.base.StringPromptTemplate]) β
template_format (str) β
validate_template (bool) β
Return type
None
attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]ο
PromptTemplate used to format an individual example. | https://api.python.langchain.com/en/latest/modules/prompts.html |
attribute example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = Noneο
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
attribute example_separator: str = '\n\n'ο
String separator used to join the prefix, the examples, and suffix.
attribute examples: Optional[List[dict]] = Noneο
Examples to format into the prompt.
Either this or example_selector should be provided.
attribute input_variables: List[str] [Required]ο
A list of the names of the variables the prompt template expects.
attribute prefix: Optional[langchain.prompts.base.StringPromptTemplate] = Noneο
A PromptTemplate to put before the examples.
attribute suffix: langchain.prompts.base.StringPromptTemplate [Required]ο
A PromptTemplate to put after the examples.
attribute template_format: str = 'f-string'ο
The format of the prompt template. Options are: βf-stringβ, βjinja2β.
attribute validate_template: bool = Trueο
Whether or not to try validating the template.
dict(**kwargs)[source]ο
Return a dictionary of the prompt.
Parameters
kwargs (Any) β
Return type
Dict
format(**kwargs)[source]ο
Format the prompt with the inputs.
Parameters
kwargs (Any) β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Return type
str
Example:
prompt.format(variable1="foo")
class langchain.prompts.HumanMessagePromptTemplate(*, prompt, additional_kwargs=None)[source]ο
Bases: langchain.prompts.chat.BaseStringMessagePromptTemplate
Parameters
prompt (langchain.prompts.base.StringPromptTemplate) β
additional_kwargs (dict) β | https://api.python.langchain.com/en/latest/modules/prompts.html |
Return type
None
format(**kwargs)[source]ο
To a BaseMessage.
Parameters
kwargs (Any) β
Return type
langchain.schema.BaseMessage
class langchain.prompts.LengthBasedExampleSelector(*, examples, example_prompt, get_text_length=<function _get_length_based>, max_length=2048, example_text_lengths=[])[source]ο
Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel
Select examples based on length.
Parameters
examples (List[dict]) β
example_prompt (langchain.prompts.prompt.PromptTemplate) β
get_text_length (Callable[[str], int]) β
max_length (int) β
example_text_lengths (List[int]) β
Return type
None
attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]ο
Prompt template used to format the examples.
attribute examples: List[dict] [Required]ο
A list of the examples that the prompt template expects.
attribute get_text_length: Callable[[str], int] = <function _get_length_based>ο
Function to measure prompt length. Defaults to word count.
attribute max_length: int = 2048ο
Max length for the prompt, beyond which examples are cut.
add_example(example)[source]ο
Add new example to list.
Parameters
example (Dict[str, str]) β
Return type
None
select_examples(input_variables)[source]ο
Select which examples to use based on the input lengths.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict] | https://api.python.langchain.com/en/latest/modules/prompts.html |
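Example (a sketch; max_length is measured with the default word-count function):
from langchain.prompts import LengthBasedExampleSelector, PromptTemplate

example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)
example_selector = LengthBasedExampleSelector(
    examples=[{"word": "happy", "antonym": "sad"}, {"word": "energetic", "antonym": "lethargic"}],
    example_prompt=example_prompt,
    max_length=25,
)
# with a short input, both examples fit; longer inputs trim the list
selected = example_selector.select_examples({"adjective": "big"})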
class langchain.prompts.MaxMarginalRelevanceExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None, fetch_k=20)[source]ο
Bases: langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector
ExampleSelector that selects examples based on Max Marginal Relevance.
This was shown to improve performance in this paper:
https://arxiv.org/pdf/2211.13892.pdf
Parameters
vectorstore (langchain.vectorstores.base.VectorStore) β
k (int) β
example_keys (Optional[List[str]]) β
input_keys (Optional[List[str]]) β
fetch_k (int) β
Return type
None
attribute example_keys: Optional[List[str]] = Noneο
Optional keys to filter examples to.
attribute fetch_k: int = 20ο
Number of examples to fetch to rerank.
attribute input_keys: Optional[List[str]] = Noneο
Optional keys to filter input to. If provided, the search is based on
the input variables instead of all variables.
attribute k: int = 4ο
Number of examples to select.
attribute vectorstore: langchain.vectorstores.base.VectorStore [Required]ο
VectorStore that contains information about examples.
classmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, fetch_k=20, **vectorstore_cls_kwargs)[source]ο
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples (List[dict]) β List of examples to use in the prompt. | https://api.python.langchain.com/en/latest/modules/prompts.html |
embeddings (langchain.embeddings.base.Embeddings) β An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) β A vector store DB interface class, e.g. FAISS.
k (int) β Number of examples to select
input_keys (Optional[List[str]]) β If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs (Any) β optional kwargs containing url for vector store
fetch_k (int) β
Returns
The ExampleSelector instantiated, backed by a vector store.
Return type
langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector
select_examples(input_variables)[source]ο
Select which examples to use based on semantic similarity.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict]
class langchain.prompts.MessagesPlaceholder(*, variable_name)[source]ο
Bases: langchain.prompts.chat.BaseMessagePromptTemplate
Prompt template that assumes variable is already list of messages.
Parameters
variable_name (str) β
Return type
None
attribute variable_name: str [Required]ο
format_messages(**kwargs)[source]ο
To a BaseMessage.
Parameters
kwargs (Any) β
Return type
List[langchain.schema.BaseMessage]
property input_variables: List[str]ο
Input variables for this prompt template.
class langchain.prompts.NGramOverlapExampleSelector(*, examples, example_prompt, threshold=-1.0)[source]ο
Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel | https://api.python.langchain.com/en/latest/modules/prompts.html |
Select and order examples based on ngram overlap score (sentence_bleu score).
https://www.nltk.org/_modules/nltk/translate/bleu_score.html
https://aclanthology.org/P02-1040.pdf
Parameters
examples (List[dict]) β
example_prompt (langchain.prompts.prompt.PromptTemplate) β
threshold (float) β
Return type
None
attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]ο
Prompt template used to format the examples.
attribute examples: List[dict] [Required]ο
A list of the examples that the prompt template expects.
attribute threshold: float = -1.0ο
Threshold at which algorithm stops. Set to -1.0 by default.
For negative threshold:
select_examples sorts examples by ngram_overlap_score, but excludes none.
For threshold greater than 1.0:
select_examples excludes all examples, and returns an empty list.
For threshold equal to 0.0:
select_examples sorts examples by ngram_overlap_score,
and excludes examples with no ngram overlap with input.
add_example(example)[source]ο
Add new example to list.
Parameters
example (Dict[str, str]) β
Return type
None
select_examples(input_variables)[source]ο
Return list of examples sorted by ngram_overlap_score with input.
Descending order.
Excludes any examples with ngram_overlap_score less than or equal to threshold.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict]
class langchain.prompts.PipelinePromptTemplate(*, input_variables, output_parser=None, partial_variables=None, final_prompt, pipeline_prompts)[source]ο
Bases: langchain.prompts.base.BasePromptTemplate
A prompt template for composing multiple prompts together. | https://api.python.langchain.com/en/latest/modules/prompts.html |
This can be useful when you want to reuse parts of prompts.
A PipelinePrompt consists of two main parts:
final_prompt: This is the final prompt that is returned
pipeline_prompts: This is a list of tuples, consisting of a string (name) and a Prompt Template.
Each PromptTemplate will be formatted and then passed
to future prompt templates as a variable with
the same name as name
Parameters
input_variables (List[str]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
final_prompt (langchain.prompts.base.BasePromptTemplate) β
pipeline_prompts (List[Tuple[str, langchain.prompts.base.BasePromptTemplate]]) β
Return type
None
attribute final_prompt: langchain.prompts.base.BasePromptTemplate [Required]ο
attribute pipeline_prompts: List[Tuple[str, langchain.prompts.base.BasePromptTemplate]] [Required]ο
format(**kwargs)[source]ο
Format the prompt with the inputs.
Parameters
kwargs (Any) β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Return type
str
Example:
prompt.format(variable1="foo")
format_prompt(**kwargs)[source]ο
Create Chat Messages.
Parameters
kwargs (Any) β
Return type
langchain.schema.PromptValue
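Example (a sketch; it assumes the template infers its input variables from the component prompts):
from langchain.prompts import PipelinePromptTemplate, PromptTemplate

full_prompt = PromptTemplate.from_template("{introduction}\n\n{task}")
introduction = PromptTemplate.from_template("You are impersonating {person}.")
task = PromptTemplate.from_template("Answer the question: {question}")
pipeline_prompt = PipelinePromptTemplate(
    final_prompt=full_prompt,
    pipeline_prompts=[("introduction", introduction), ("task", task)],
)
print(pipeline_prompt.format(person="Ada Lovelace", question="What is an algorithm?"))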
langchain.prompts.Promptο
alias of langchain.prompts.prompt.PromptTemplate
class langchain.prompts.PromptTemplate(*, input_variables, output_parser=None, partial_variables=None, template, template_format='f-string', validate_template=True)[source]ο
Bases: langchain.prompts.base.StringPromptTemplate
Schema to represent a prompt for an LLM.
Example | https://api.python.langchain.com/en/latest/modules/prompts.html |
from langchain import PromptTemplate
prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}")
Parameters
input_variables (List[str]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
template (str) β
template_format (str) β
validate_template (bool) β
Return type
None
attribute input_variables: List[str] [Required]ο
A list of the names of the variables the prompt template expects.
attribute template: str [Required]ο
The prompt template.
attribute template_format: str = 'f-string'ο
The format of the prompt template. Options are: βf-stringβ, βjinja2β.
attribute validate_template: bool = Trueο
Whether or not to try validating the template.
format(**kwargs)[source]ο
Format the prompt with the inputs.
Parameters
kwargs (Any) β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Return type
str
Example:
prompt.format(variable1="foo")
classmethod from_examples(examples, suffix, input_variables, example_separator='\n\n', prefix='', **kwargs)[source]ο
Take examples in list format with prefix and suffix to create a prompt.
Intended to be used as a way to dynamically create a prompt from examples.
Parameters
examples (List[str]) β List of examples to use in the prompt.
suffix (str) β String to go after the list of examples. Should generally
set up the userβs input.
input_variables (List[str]) β A list of variable names the final prompt template
will expect.
example_separator (str) β The separator to use in between examples. Defaults | https://api.python.langchain.com/en/latest/modules/prompts.html |
to two new line characters.
prefix (str) β String that should go before any examples. Generally includes
examples. Defaults to an empty string.
kwargs (Any) β
Returns
The final prompt generated.
Return type
langchain.prompts.prompt.PromptTemplate
classmethod from_file(template_file, input_variables, **kwargs)[source]ο
Load a prompt from a file.
Parameters
template_file (Union[str, pathlib.Path]) β The path to the file containing the prompt template.
input_variables (List[str]) β A list of variable names the final prompt template
will expect.
kwargs (Any) β
Returns
The prompt loaded from the file.
Return type
langchain.prompts.prompt.PromptTemplate
classmethod from_template(template, **kwargs)[source]ο
Load a prompt template from a template.
Parameters
template (str) β
kwargs (Any) β
Return type
langchain.prompts.prompt.PromptTemplate
property lc_attributes: Dict[str, Any]ο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
class langchain.prompts.SemanticSimilarityExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None)[source]ο
Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel
Example selector that selects examples based on SemanticSimilarity.
Parameters
vectorstore (langchain.vectorstores.base.VectorStore) β
k (int) β
example_keys (Optional[List[str]]) β
input_keys (Optional[List[str]]) β
Return type
None
attribute example_keys: Optional[List[str]] = Noneο | https://api.python.langchain.com/en/latest/modules/prompts.html |
Optional keys to filter examples to.
attribute input_keys: Optional[List[str]] = Noneο
Optional keys to filter input to. If provided, the search is based on
the input variables instead of all variables.
attribute k: int = 4ο
Number of examples to select.
attribute vectorstore: langchain.vectorstores.base.VectorStore [Required]ο
VectorStore that contains information about examples.
add_example(example)[source]ο
Add new example to vectorstore.
Parameters
example (Dict[str, str]) β
Return type
str
classmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, **vectorstore_cls_kwargs)[source]ο
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples (List[dict]) β List of examples to use in the prompt.
embeddings (langchain.embeddings.base.Embeddings) β An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) β A vector store DB interface class, e.g. FAISS.
k (int) β Number of examples to select
input_keys (Optional[List[str]]) β If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs (Any) β optional kwargs containing url for vector store
Returns
The ExampleSelector instantiated, backed by a vector store.
Return type
langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector
select_examples(input_variables)[source]ο
Select which examples to use based on semantic similarity.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict] | https://api.python.langchain.com/en/latest/modules/prompts.html |
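Example (a minimal usage sketch, assuming the openai and faiss packages are installed and OPENAI_API_KEY is set; the example data is illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import SemanticSimilarityExampleSelector
from langchain.vectorstores import FAISS
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
# Each example is embedded and stored in the vector store.
selector = SemanticSimilarityExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), FAISS, k=1
)
# Returns the stored example most similar to the query input.
print(selector.select_examples({"input": "joyful"}))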
class langchain.prompts.StringPromptTemplate(*, input_variables, output_parser=None, partial_variables=None)[source]ο
Bases: langchain.prompts.base.BasePromptTemplate, abc.ABC
String prompt should expose the format method, returning a prompt.
Parameters
input_variables (List[str]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
Return type
None
format_prompt(**kwargs)[source]ο
Format the prompt and return a PromptValue.
Parameters
kwargs (Any) β
Return type
langchain.schema.PromptValue
class langchain.prompts.SystemMessagePromptTemplate(*, prompt, additional_kwargs=None)[source]ο
Bases: langchain.prompts.chat.BaseStringMessagePromptTemplate
Parameters
prompt (langchain.prompts.base.StringPromptTemplate) β
additional_kwargs (dict) β
Return type
None
format(**kwargs)[source]ο
Format the prompt template into a BaseMessage.
Parameters
kwargs (Any) β
Return type
langchain.schema.BaseMessage
langchain.prompts.load_prompt(path)[source]ο
Unified method for loading a prompt from LangChainHub or the local filesystem.
Parameters
path (Union[str, pathlib.Path]) β
Return type
langchain.prompts.base.BasePromptTemplate | https://api.python.langchain.com/en/latest/modules/prompts.html |
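Example (a minimal usage sketch; the file name is illustrative, and the prompt is assumed to have been saved beforehand with PromptTemplate.save):
from langchain.prompts import PromptTemplate, load_prompt
# Save a template to disk, then load it back.
PromptTemplate.from_template("Tell me a joke about {topic}").save("joke_prompt.json")
prompt = load_prompt("joke_prompt.json")
print(prompt.format(topic="bears"))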
Toolsο
Core toolkit implementations.
class langchain.tools.AIPluginTool(*, name, description, args_schema=<class 'langchain.tools.plugin.AIPluginToolSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, plugin, api_spec)[source]ο
Bases: langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[langchain.tools.plugin.AIPluginToolSchema]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
plugin (langchain.tools.plugin.AIPlugin) β
api_spec (str) β
Return type
None
attribute api_spec: str [Required]ο
attribute args_schema: Type[AIPluginToolSchema] = <class 'langchain.tools.plugin.AIPluginToolSchema'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute plugin: AIPlugin [Required]ο
classmethod from_plugin_url(url)[source]ο
Parameters
url (str) β
Return type
langchain.tools.plugin.AIPluginTool
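Example (a minimal usage sketch; the manifest URL is illustrative and must point to a reachable ai-plugin.json):
from langchain.tools import AIPluginTool
tool = AIPluginTool.from_plugin_url("https://example.com/.well-known/ai-plugin.json")
# The tool description embeds the plugin's OpenAPI spec for an agent to read.
print(tool.name)
print(tool.description[:200])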
class langchain.tools.APIOperation(*, operation_id, description=None, base_url, path, method, properties, request_body=None)[source]ο
Bases: pydantic.main.BaseModel
A model for a single API operation.
Parameters
operation_id (str) β
description (Optional[str]) β
base_url (str) β
path (str) β | https://api.python.langchain.com/en/latest/modules/tools.html |
method (langchain.utilities.openapi.HTTPVerb) β
properties (Sequence[langchain.tools.openapi.utils.api_models.APIProperty]) β
request_body (Optional[langchain.tools.openapi.utils.api_models.APIRequestBody]) β
Return type
None
attribute base_url: str [Required]ο
The base URL of the operation.
attribute description: Optional[str] = Noneο
The description of the operation.
attribute method: langchain.utilities.openapi.HTTPVerb [Required]ο
The HTTP method of the operation.
attribute operation_id: str [Required]ο
The unique identifier of the operation.
attribute path: str [Required]ο
The path of the operation.
attribute properties: Sequence[langchain.tools.openapi.utils.api_models.APIProperty] [Required]ο
attribute request_body: Optional[langchain.tools.openapi.utils.api_models.APIRequestBody] = Noneο
The request body of the operation.
classmethod from_openapi_spec(spec, path, method)[source]ο
Create an APIOperation from an OpenAPI spec.
Parameters
spec (langchain.utilities.openapi.OpenAPISpec) β
path (str) β
method (str) β
Return type
langchain.tools.openapi.utils.api_models.APIOperation
classmethod from_openapi_url(spec_url, path, method)[source]ο
Create an APIOperation from an OpenAPI URL.
Parameters
spec_url (str) β
path (str) β
method (str) β
Return type
langchain.tools.openapi.utils.api_models.APIOperation
to_typescript()[source]ο
Get typescript string representation of the operation.
Return type
str
static ts_type_from_python(type_)[source]ο
Parameters | https://api.python.langchain.com/en/latest/modules/tools.html |
type_ (Union[str, Type, tuple, None, enum.Enum]) β
Return type
str
property body_params: List[str]ο
property path_params: List[str]ο
property query_params: List[str]ο
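Example (a minimal usage sketch; the spec URL, path, and method are illustrative):
from langchain.tools import APIOperation
op = APIOperation.from_openapi_url("https://example.com/openapi.yaml", "/pets", "get")
# Render the operation as a TypeScript-style function signature.
print(op.to_typescript())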
class langchain.tools.ArxivQueryRun(*, name='arxiv', description='A wrapper around Arxiv.org Useful for when you need to answer questions about Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance, Statistics, Electrical Engineering, and Economics from scientific articles on arxiv.org. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to search using the Arxiv API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.arxiv.ArxivAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.arxiv.ArxivAPIWrapper [Optional]ο | https://api.python.langchain.com/en/latest/modules/tools.html |
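Example (a minimal usage sketch, assuming the arxiv package is installed):
from langchain.tools import ArxivQueryRun
tool = ArxivQueryRun()  # a default ArxivAPIWrapper is created automatically
print(tool.run("attention is all you need"))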
class langchain.tools.AzureCogsFormRecognizerTool(*, name='azure_cognitive_services_form_recognizer', description='A wrapper around Azure Cognitive Services Form Recognizer. Useful for when you need to extract text, tables, and key-value pairs from documents. Input should be a url to a document.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_endpoint='', doc_analysis_client=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the Azure Cognitive Services Form Recognizer API.
In order to set this up, follow instructions at:
https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api?view=form-recog-3.0.0&pivots=programming-language-python
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
azure_cogs_key (str) β
azure_cogs_endpoint (str) β
doc_analysis_client (Any) β
Return type
None | https://api.python.langchain.com/en/latest/modules/tools.html |
class langchain.tools.AzureCogsImageAnalysisTool(*, name='azure_cognitive_services_image_analysis', description='A wrapper around Azure Cognitive Services Image Analysis. Useful for when you need to analyze images. Input should be a url to an image.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_endpoint='', vision_service=None, analysis_options=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the Azure Cognitive Services Image Analysis API.
In order to set this up, follow instructions at:
https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
azure_cogs_key (str) β
azure_cogs_endpoint (str) β
vision_service (Any) β
analysis_options (Any) β
Return type
None | https://api.python.langchain.com/en/latest/modules/tools.html |
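Example (a minimal usage sketch, assuming AZURE_COGS_KEY and AZURE_COGS_ENDPOINT are set in the environment and the azure-ai-vision SDK is installed; the image URL is illustrative):
from langchain.tools import AzureCogsImageAnalysisTool
tool = AzureCogsImageAnalysisTool()  # credentials are read from the environment
print(tool.run("https://example.com/photo.jpg"))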
class langchain.tools.AzureCogsSpeech2TextTool(*, name='azure_cognitive_services_speech2text', description='A wrapper around Azure Cognitive Services Speech2Text. Useful for when you need to transcribe audio to text. Input should be a url to an audio file.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_region='', speech_language='en-US', speech_config=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the Azure Cognitive Services Speech2Text API.
In order to set this up, follow instructions at:
https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-speech-to-text?pivots=programming-language-python
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
azure_cogs_key (str) β
azure_cogs_region (str) β
speech_language (str) β
speech_config (Any) β
Return type
None | https://api.python.langchain.com/en/latest/modules/tools.html |
class langchain.tools.AzureCogsText2SpeechTool(*, name='azure_cognitive_services_text2speech', description='A wrapper around Azure Cognitive Services Text2Speech. Useful for when you need to convert text to speech. ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_region='', speech_language='en-US', speech_config=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the Azure Cognitive Services Text2Speech API.
In order to set this up, follow instructions at:
https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?pivots=programming-language-python
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
azure_cogs_key (str) β
azure_cogs_region (str) β
speech_language (str) β
speech_config (Any) β
Return type
None | https://api.python.langchain.com/en/latest/modules/tools.html |
class langchain.tools.BaseGraphQLTool(*, name='query_graphql', description="    Input to this tool is a detailed and correct GraphQL query, output is a result from the API.\n    If the query is not correct, an error message will be returned.\n    If an error is returned with 'Bad request' in it, rewrite the query and try again.\n    If an error is returned with 'Unauthorized' in it, do not try again, but tell the user to change their authentication.\n\n    Example Input: query {{ allUsers {{ id, name, email }} }}    ", args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, graphql_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Base tool for querying a GraphQL API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
graphql_wrapper (langchain.utilities.graphql.GraphQLAPIWrapper) β
Return type
None
attribute graphql_wrapper: langchain.utilities.graphql.GraphQLAPIWrapper [Required]ο
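Example (a minimal usage sketch, assuming the gql and requests-toolbelt packages are installed; the endpoint URL is illustrative):
from langchain.tools import BaseGraphQLTool
from langchain.utilities import GraphQLAPIWrapper
wrapper = GraphQLAPIWrapper(graphql_endpoint="https://example.com/graphql")
tool = BaseGraphQLTool(graphql_wrapper=wrapper)
print(tool.run("query { allUsers { id name email } }"))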
class langchain.tools.BaseRequestsTool(*, requests_wrapper)[source]ο
Bases: pydantic.main.BaseModel
Base class for requests tools.
Parameters
requests_wrapper (langchain.requests.TextRequestsWrapper) β
Return type
None | https://api.python.langchain.com/en/latest/modules/tools.html |
attribute requests_wrapper: langchain.requests.TextRequestsWrapper [Required]ο
class langchain.tools.BaseSQLDatabaseTool(*, db)[source]ο
Bases: pydantic.main.BaseModel
Base tool for interacting with a SQL database.
Parameters
db (langchain.sql_database.SQLDatabase) β
Return type
None
attribute db: langchain.sql_database.SQLDatabase [Required]ο
class langchain.tools.BaseSparkSQLTool(*, db)[source]ο
Bases: pydantic.main.BaseModel
Base tool for interacting with Spark SQL.
Parameters
db (langchain.utilities.spark_sql.SparkSQL) β
Return type
None
attribute db: langchain.utilities.spark_sql.SparkSQL [Required]ο
class langchain.tools.BaseTool(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False)[source]ο
Bases: abc.ABC, pydantic.main.BaseModel
Interface LangChain tools must implement.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
Return type
None
attribute args_schema: Optional[Type[pydantic.main.BaseModel]] = Noneο | https://api.python.langchain.com/en/latest/modules/tools.html |
Pydantic model class to validate and parse the toolβs input arguments.
attribute callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = Noneο
Deprecated. Please use callbacks instead.
attribute callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = Noneο
Callbacks to be called during tool execution.
attribute description: str [Required]ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = Falseο
Handle the content of the ToolException thrown.
attribute name: str [Required]ο
The unique name of the tool that clearly communicates its purpose.
attribute return_direct: bool = Falseο
Whether to return the toolβs output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
attribute verbose: bool = Falseο
Whether to log the toolβs progress.
async arun(tool_input, verbose=None, start_color='green', color='green', callbacks=None, **kwargs)[source]ο
Run the tool asynchronously.
Parameters
tool_input (Union[str, Dict]) β
verbose (Optional[bool]) β
start_color (Optional[str]) β
color (Optional[str]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Any | https://api.python.langchain.com/en/latest/modules/tools.html |
run(tool_input, verbose=None, start_color='green', color='green', callbacks=None, **kwargs)[source]ο
Run the tool.
Parameters
tool_input (Union[str, Dict]) β
verbose (Optional[bool]) β
start_color (Optional[str]) β
color (Optional[str]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Any
property args: dictο
property is_single_input: boolο
Whether the tool only accepts a single input.
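Example of a custom tool built on BaseTool (a minimal sketch; the class name and logic are illustrative):
from langchain.tools import BaseTool
class WordCountTool(BaseTool):
    name = "word_count"
    description = "Counts the number of words in the input text."
    def _run(self, query: str) -> str:
        # Synchronous implementation called by run().
        return str(len(query.split()))
    async def _arun(self, query: str) -> str:
        # Asynchronous implementation called by arun().
        return self._run(query)
tool = WordCountTool()
print(tool.run("how many words are in this sentence"))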
class langchain.tools.BingSearchResults(*, name='Bing Search Results JSON', description='A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, num_results=4, api_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the Bing Search API and returns the results as JSON.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
num_results (int) β | https://api.python.langchain.com/en/latest/modules/tools.html |
api_wrapper (langchain.utilities.bing_search.BingSearchAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]ο
attribute num_results: int = 4ο
class langchain.tools.BingSearchRun(*, name='bing_search', description='A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query the Bing search API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.bing_search.BingSearchAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]ο
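Example (a minimal usage sketch, assuming BING_SUBSCRIPTION_KEY and BING_SEARCH_URL are set in the environment):
from langchain.tools import BingSearchRun
from langchain.utilities import BingSearchAPIWrapper
tool = BingSearchRun(api_wrapper=BingSearchAPIWrapper())
print(tool.run("latest python release"))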
class langchain.tools.BraveSearch(*, name='brave_search', description='a search engine. useful for when you need to answer questions about current events. input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, search_wrapper)[source]ο | https://api.python.langchain.com/en/latest/modules/tools.html |
Bases: langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
search_wrapper (langchain.utilities.brave_search.BraveSearchWrapper) β
Return type
None
attribute search_wrapper: BraveSearchWrapper [Required]ο
classmethod from_api_key(api_key, search_kwargs=None, **kwargs)[source]ο
Parameters
api_key (str) β
search_kwargs (Optional[dict]) β
kwargs (Any) β
Return type
langchain.tools.brave_search.tool.BraveSearch
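Example (a minimal usage sketch; the API key placeholder is illustrative):
from langchain.tools import BraveSearch
tool = BraveSearch.from_api_key(api_key="<your-brave-api-key>", search_kwargs={"count": 3})
print(tool.run("obama middle name"))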
class langchain.tools.ClickTool(*, name='click_element', description='Click on an element with the given CSS selector', args_schema=<class 'langchain.tools.playwright.click.ClickToolInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None, visible_only=True, playwright_strict=False, playwright_timeout=1000)[source]ο
Bases: langchain.tools.playwright.base.BaseBrowserTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β | https://api.python.langchain.com/en/latest/modules/tools.html |
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
visible_only (bool) β
playwright_strict (bool) β
playwright_timeout (float) β
Return type
None
attribute args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.click.ClickToolInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Click on an element with the given CSS selector'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'click_element'ο
The unique name of the tool that clearly communicates its purpose.
attribute playwright_strict: bool = Falseο
Whether to employ Playwrightβs strict mode when clicking on elements.
attribute playwright_timeout: float = 1000ο
Timeout (in ms) for Playwright to wait for element to be ready.
attribute visible_only: bool = Trueο
Whether to consider only visible elements.
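Example (a minimal usage sketch, assuming playwright is installed with its browsers downloaded and that a page containing the target element is already open; the selector is illustrative):
from langchain.tools import ClickTool
from langchain.tools.playwright.utils import create_sync_playwright_browser
browser = create_sync_playwright_browser()
tool = ClickTool.from_browser(sync_browser=browser)
print(tool.run({"selector": "#submit-button"}))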
class langchain.tools.CopyFileTool(*, name='copy_file', description='Create a copy of a file in a specified location', args_schema=<class 'langchain.tools.file_management.copy.FileCopyInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]ο
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β | https://api.python.langchain.com/en/latest/modules/tools.html |
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
root_dir (Optional[str]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.copy.FileCopyInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Create a copy of a file in a specified location'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'copy_file'ο
The unique name of the tool that clearly communicates its purpose.
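Example (a minimal usage sketch; the root directory and file names are illustrative):
from langchain.tools import CopyFileTool
tool = CopyFileTool(root_dir="/tmp/agent_workspace")
# Paths are resolved relative to root_dir.
print(tool.run({"source_path": "notes.txt", "destination_path": "notes_backup.txt"}))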
class langchain.tools.CurrentWebPageTool(*, name='current_webpage', description='Returns the URL of the current page', args_schema=<class 'pydantic.main.BaseModel'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]ο
Bases: langchain.tools.playwright.base.BaseBrowserTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β | https://api.python.langchain.com/en/latest/modules/tools.html |
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
Return type
None
attribute args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Returns the URL of the current page'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'current_webpage'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.DeleteFileTool(*, name='file_delete', description='Delete a file', args_schema=<class 'langchain.tools.file_management.delete.FileDeleteInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]ο
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β | https://api.python.langchain.com/en/latest/modules/tools.html |
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
root_dir (Optional[str]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.delete.FileDeleteInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Delete a file'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'file_delete'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.DuckDuckGoSearchResults(*, name='DuckDuckGo Results JSON', description='A wrapper around Duck Duck Go Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, num_results=4, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the DuckDuckGo Search API and returns the results as JSON.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β | https://api.python.langchain.com/en/latest/modules/tools.html |
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
num_results (int) β
api_wrapper (langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]ο
attribute num_results: int = 4ο
class langchain.tools.DuckDuckGoSearchRun(*, name='duckduckgo_search', description='A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query the DuckDuckGo search API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]ο | https://api.python.langchain.com/en/latest/modules/tools.html |
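Example (a minimal usage sketch, assuming the duckduckgo-search package is installed):
from langchain.tools import DuckDuckGoSearchRun
tool = DuckDuckGoSearchRun()
print(tool.run("what is the capital of France"))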
class langchain.tools.ExtractHyperlinksTool(*, name='extract_hyperlinks', description='Extract all hyperlinks on the current webpage', args_schema=<class 'langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]ο
Bases: langchain.tools.playwright.base.BaseBrowserTool
Extract all hyperlinks on the page.
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
Return type
None
attribute args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Extract all hyperlinks on the current webpage'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'extract_hyperlinks'ο
The unique name of the tool that clearly communicates its purpose.
static scrape_page(page, html_content, absolute_urls)[source]ο
Parameters
page (Any) β
html_content (str) β | https://api.python.langchain.com/en/latest/modules/tools.html |
absolute_urls (bool) β
Return type
str
class langchain.tools.ExtractTextTool(*, name='extract_text', description='Extract all the text on the current webpage', args_schema=<class 'pydantic.main.BaseModel'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]ο
Bases: langchain.tools.playwright.base.BaseBrowserTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
Return type
None
attribute args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Extract all the text on the current webpage'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'extract_text'ο
The unique name of the tool that clearly communicates its purpose. | https://api.python.langchain.com/en/latest/modules/tools.html |
class langchain.tools.FileSearchTool(*, name='file_search', description='Recursively search for files in a subdirectory that match the regex pattern', args_schema=<class 'langchain.tools.file_management.file_search.FileSearchInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]ο
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
root_dir (Optional[str]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.file_search.FileSearchInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Recursively search for files in a subdirectory that match the regex pattern'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'file_search'ο
The unique name of the tool that clearly communicates its purpose. | https://api.python.langchain.com/en/latest/modules/tools.html |
class langchain.tools.GetElementsTool(*, name='get_elements', description='Retrieve elements in the current web page matching the given CSS selector', args_schema=<class 'langchain.tools.playwright.get_elements.GetElementsToolInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]ο
Bases: langchain.tools.playwright.base.BaseBrowserTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
Return type
None
attribute args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.get_elements.GetElementsToolInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Retrieve elements in the current web page matching the given CSS selector'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'get_elements'ο
The unique name of the tool that clearly communicates its purpose. | https://api.python.langchain.com/en/latest/modules/tools.html |
class langchain.tools.GmailCreateDraft(*, name='create_gmail_draft', description='Use this tool to create a draft email with the provided message fields.', args_schema=<class 'langchain.tools.gmail.create_draft.CreateDraftSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]ο
Bases: langchain.tools.gmail.base.GmailBaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[langchain.tools.gmail.create_draft.CreateDraftSchema]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_resource (Resource) β
Return type
None
attribute args_schema: Type[langchain.tools.gmail.create_draft.CreateDraftSchema] = <class 'langchain.tools.gmail.create_draft.CreateDraftSchema'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Use this tool to create a draft email with the provided message fields.'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'create_gmail_draft'ο
The unique name of the tool that clearly communicates its purpose. | https://api.python.langchain.com/en/latest/modules/tools.html |
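Example (a minimal usage sketch, assuming Gmail API credentials have been configured for the default api_resource; all field values are illustrative):
from langchain.tools import GmailCreateDraft
tool = GmailCreateDraft()
tool.run({
    "message": "Hello from LangChain.",
    "to": ["recipient@example.com"],
    "subject": "Draft created by a tool",
})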