Models
LangChain provides interfaces and integrations for a number of different types of models.
LLMs
Chat Models
https://api.python.langchain.com/en/latest/models.html
Model I/O
LangChain provides interfaces and integrations for working with language models.
Prompts
Models
Output Parsers
https://api.python.langchain.com/en/latest/model_io.html
Prompts
The reference guides here all relate to objects for working with Prompts.
Prompt Templates
Example Selector
https://api.python.langchain.com/en/latest/prompts.html
Data connection
LangChain has a number of modules that help you load, structure, store, and retrieve documents.
Document Loaders
Document Transformers
Embeddings
Vector Stores
Retrievers
https://api.python.langchain.com/en/latest/data_connection.html
Embeddings
Wrappers around embedding modules.
class langchain.embeddings.OpenAIEmbeddings(*, client=None, model='text-embedding-ada-002', deployment='text-embedding-ada-002', openai_api_version=None, openai_api_base=None, openai_api_type=None, openai_proxy=None, embedding_ctx_length=8191, openai_api_key=None, openai_organization=None, allowed_special={}, disallowed_special='all', chunk_size=1000, max_retries=6, request_timeout=None, headers=None, tiktoken_model_name=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around OpenAI embedding models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION
environment variables. The OPENAI_API_TYPE must be set to 'azure' and the others
must correspond to the properties of your endpoint.
In addition, the deployment name must be passed as the model parameter.
Example
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name",
model="your-embeddings-model-name",
openai_api_base="https://your-endpoint.openai.azure.com/",
openai_api_type="azure",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
Parameters
client (Any) β
model (str) β
deployment (str) β
openai_api_version (Optional[str]) β
openai_api_base (Optional[str]) β
openai_api_type (Optional[str]) β
openai_proxy (Optional[str]) β
embedding_ctx_length (int) β
openai_api_key (Optional[str]) β
openai_organization (Optional[str]) β
allowed_special (Union[Literal['all'], typing.Set[str]]) β
disallowed_special (Union[Literal['all'], typing.Set[str], typing.Sequence[str]]) β
chunk_size (int) β
max_retries (int) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
headers (Any) β
tiktoken_model_name (Optional[str]) β
Return type
None
attribute chunk_size: int = 1000ο
Maximum number of texts to embed in each batch
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout in seconds for the OpenAI API request.
attribute tiktoken_model_name: Optional[str] = Noneο
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
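For instance, a minimal sketch of that case (the provider endpoint and model name below are illustrative placeholders, not real values): when pointing this class at an OpenAI-compatible provider whose model name tiktoken does not recognize, pin tiktoken_model_name to a model tiktoken does support.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
    model="provider-specific-embedding-model",            # illustrative, not a real model name
    openai_api_base="https://my-openai-compatible-host/v1",  # illustrative endpoint
    openai_api_key="my-api-key",
    tiktoken_model_name="text-embedding-ada-002",          # tokenizer used only for token counting
)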
async aembed_documents(texts, chunk_size=0)[source]ο
Call out to OpenAI's embedding endpoint async for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (Optional[int]) β The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
async aembed_query(text)[source]ο
Call out to OpenAI's embedding endpoint async for embedding query text.
Parameters
text (str) β The text to embed.
Returns
Embedding for the text.
Return type
List[float]
embed_documents(texts, chunk_size=0)[source]ο
Call out to OpenAI's embedding endpoint for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (Optional[int]) β The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to OpenAI's embedding endpoint for embedding query text.
Parameters
text (str) β The text to embed.
Returns
Embedding for the text.
Return type
List[float]
class langchain.embeddings.HuggingFaceEmbeddings(*, client=None, model_name='sentence-transformers/all-mpnet-base-v2', cache_folder=None, model_kwargs=None, encode_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers python package installed.
Example
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
hf = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
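Once constructed, the instance is used like any other Embeddings implementation; a minimal usage sketch continuing the example above (texts are arbitrary):
doc_vectors = hf.embed_documents(
    ["LangChain provides embedding integrations.", "Sentence transformers run locally."]
)
query_vector = hf.embed_query("Which embeddings run locally?")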
Parameters
client (Any) β
model_name (str) β
cache_folder (Optional[str]) β
model_kwargs (Dict[str, Any]) β
encode_kwargs (Dict[str, Any]) β
Return type
None
attribute cache_folder: Optional[str] = Noneο
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
attribute encode_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass when calling the encode method of the model.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass to the model.
attribute model_name: str = 'sentence-transformers/all-mpnet-base-v2'ο
Model name to use.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace transformer model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.CohereEmbeddings(*, client=None, model='embed-english-v2.0', truncate=None, cohere_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around Cohere embedding models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import CohereEmbeddings
cohere = CohereEmbeddings(
model="embed-english-light-v2.0", cohere_api_key="my-api-key"
)
Parameters
client (Any) β
model (str) β
truncate (Optional[str]) β
cohere_api_key (Optional[str]) β
Return type
None
attribute model: str = 'embed-english-v2.0'ο
Model name to use.
attribute truncate: Optional[str] = Noneο
Truncate embeddings that are too long from start or end ("NONE"|"START"|"END")
embed_documents(texts)[source]ο
Call out to Cohere's embedding endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to Cohere's embedding endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.ElasticsearchEmbeddings(client, model_id, *, input_field='text_field')[source]ο
Bases: langchain.embeddings.base.Embeddings
Wrapper around Elasticsearch embedding models.
This class provides an interface to generate embeddings using a model deployed
in an Elasticsearch cluster. It requires an Elasticsearch connection object
and the model_id of the model deployed in the cluster.
In Elasticsearch you need to have an embedding model loaded and deployed.
- https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html
- https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html
Parameters
client (MlClient) β
model_id (str) β
input_field (str) β
classmethod from_credentials(model_id, *, es_cloud_id=None, es_user=None, es_password=None, input_field='text_field')[source]ο
Instantiate embeddings from Elasticsearch credentials.
Parameters
model_id (str) β The model_id of the model deployed in the Elasticsearch
cluster.
input_field (str) – The name of the key for the input text field in the
document. Defaults to "text_field".
es_cloud_id (Optional[str]) β (str, optional): The Elasticsearch cloud ID to connect to.
es_user (Optional[str]) β (str, optional): Elasticsearch username.
es_password (Optional[str]) β (str, optional): Elasticsearch password.
Return type
langchain.embeddings.elasticsearch.ElasticsearchEmbeddings
Example
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Credentials can be passed in two ways. Either set the env vars
# ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically
# pulled in, or pass them in directly as kwargs.
embeddings = ElasticsearchEmbeddings.from_credentials(
model_id,
input_field=input_field,
# es_cloud_id="foo",
# es_user="bar",
# es_password="baz",
)
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)
classmethod from_es_connection(model_id, es_connection, input_field='text_field')[source]ο
Instantiate embeddings from an existing Elasticsearch connection.
This method provides a way to create an instance of the ElasticsearchEmbeddings
class using an existing Elasticsearch connection. The connection object is used
to create an MlClient, which is then used to initialize the
ElasticsearchEmbeddings instance.
Args:
model_id (str): The model_id of the model deployed in the Elasticsearch cluster.
es_connection (elasticsearch.Elasticsearch): An existing Elasticsearch connection object.
input_field (str, optional): The name of the key for the input text field in the document. Defaults to "text_field".
Returns:
ElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class.
Example
from elasticsearch import Elasticsearch
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Create Elasticsearch connection
es_connection = Elasticsearch(
hosts=["localhost:9200"], http_auth=("user", "password")
)
# Instantiate ElasticsearchEmbeddings using the existing connection
embeddings = ElasticsearchEmbeddings.from_es_connection(
model_id,
es_connection,
input_field=input_field,
)
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)
Parameters
model_id (str) β
es_connection (Elasticsearch) β
input_field (str) β
Return type
ElasticsearchEmbeddings
embed_documents(texts)[source]ο
Generate embeddings for a list of documents.
Parameters
texts (List[str]) β A list of document text strings to generate embeddings
for.
Returns
A list of embeddings, one for each document in the input list.
Return type
List[List[float]]
embed_query(text)[source]ο
Generate an embedding for a single query text.
Parameters
text (str) β The query text to generate an embedding for.
Returns
The embedding for the input query text.
Return type
List[float]
class langchain.embeddings.LlamaCppEmbeddings(*, client=None, model_path, n_ctx=512, n_parts=- 1, seed=- 1, f16_kv=False, logits_all=False, vocab_only=False, use_mlock=False, n_threads=None, n_batch=8, n_gpu_layers=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around llama.cpp embedding models.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")
Parameters
client (Any) β
model_path (str) β
n_ctx (int) β
n_parts (int) β
seed (int) β
f16_kv (bool) β
logits_all (bool) β
vocab_only (bool) β
use_mlock (bool) β
n_threads (Optional[int]) β
n_batch (Optional[int]) β
n_gpu_layers (Optional[int]) β
Return type
None
attribute f16_kv: bool = Falseο
Use half-precision for key/value cache.
attribute logits_all: bool = Falseο
Return logits for all tokens, not just the last token.
attribute n_batch: Optional[int] = 8ο
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
attribute n_ctx: int = 512ο
Token context window.
attribute n_gpu_layers: Optional[int] = Noneο
Number of layers to be loaded into gpu memory. Default None.
attribute n_parts: int = -1ο
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
attribute n_threads: Optional[int] = Noneο
Number of threads to use. If None, the number
of threads is automatically determined.
attribute seed: int = -1ο
Seed. If -1, a random seed is used.
attribute use_mlock: bool = Falseο
Force system to keep model in RAM.
attribute vocab_only: bool = Falseο
Only load the vocabulary, no weights.
embed_documents(texts)[source]ο
Embed a list of documents using the Llama model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using the Llama model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.HuggingFaceHubEmbeddings(*, client=None, repo_id='sentence-transformers/all-mpnet-base-v2', task='feature-extraction', model_kwargs=None, huggingfacehub_api_token=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around HuggingFaceHub embedding models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
repo_id=repo_id,
task="feature-extraction",
huggingfacehub_api_token="my-api-key",
)
Parameters
client (Any) β
repo_id (str) β
task (Optional[str]) β
model_kwargs (Optional[dict]) β
huggingfacehub_api_token (Optional[str]) β
Return type
None
attribute model_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model.
attribute repo_id: str = 'sentence-transformers/all-mpnet-base-v2'ο
Model name to use.
attribute task: Optional[str] = 'feature-extraction'ο
Task to call the model with.
embed_documents(texts)[source]ο
Call out to HuggingFaceHub's embedding endpoint for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to HuggingFaceHub's embedding endpoint for embedding query text.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.ModelScopeEmbeddings(*, embed=None, model_id='damo/nlp_corom_sentence-embedding_english-base')[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around modelscope_hub embedding models.
To use, you should have the modelscope python package installed.
Example
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embed = ModelScopeEmbeddings(model_id=model_id)
Parameters
embed (Any) β
model_id (str) β
Return type
None
attribute model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'ο
Model name to use.
embed_documents(texts)[source]ο
Compute doc embeddings using a modelscope embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a modelscope embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.TensorflowHubEmbeddings(*, embed=None, model_url='https://tfhub.dev/google/universal-sentence-encoder-multilingual/3')[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around tensorflow_hub embedding models.
To use, you should have the tensorflow_text python package installed.
Example
from langchain.embeddings import TensorflowHubEmbeddings
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
tf = TensorflowHubEmbeddings(model_url=url)
Parameters
embed (Any) β
model_url (str) β
Return type
None
attribute model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'ο
Model name to use.
embed_documents(texts)[source]ο
Compute doc embeddings using a TensorflowHub embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a TensorflowHub embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.SagemakerEndpointEmbeddings(*, client=None, endpoint_name='', region_name='', credentials_profile_name=None, content_handler, model_kwargs=None, endpoint_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
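A minimal sketch of wiring up a content handler and the class; the endpoint name, region, request JSON keys, and the "vectors" response field below are assumptions that depend on how your model container expects requests and formats responses.
import json
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompts, model_kwargs):
        # Serialize the batch of texts in whatever format the container expects (assumed here).
        return json.dumps({"inputs": prompts, **model_kwargs}).encode("utf-8")

    def transform_output(self, output):
        # Parse the container's response; the "vectors" key is an assumption.
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["vectors"]

embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embeddings-endpoint",   # illustrative endpoint name
    region_name="us-west-2",
    credentials_profile_name="default",
    content_handler=ContentHandler(),
)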
Parameters
client (Any) β
endpoint_name (str) β
region_name (str) β
credentials_profile_name (Optional[str]) β
content_handler (langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler) β
model_kwargs (Optional[Dict]) β
endpoint_kwargs (Optional[Dict]) β
Return type
None
attribute content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]ο
The content handler class that provides input and output transform
functions to handle the formats between the LLM and the endpoint.
attribute credentials_profile_name: Optional[str] = Noneο
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
attribute endpoint_kwargs: Optional[Dict] = Noneο
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more info:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
attribute endpoint_name: str = ''ο
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
attribute model_kwargs: Optional[Dict] = Noneο
Key word arguments to pass to the model.
attribute region_name: str = ''ο
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
embed_documents(texts, chunk_size=64)[source]ο
Compute doc embeddings using a SageMaker Inference Endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (int) – The chunk size defines how many input texts are
grouped together in a single request. If None, will use the
chunk size specified by the class.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.HuggingFaceInstructEmbeddings(*, client=None, model_name='hkunlp/instructor-large', cache_folder=None, model_kwargs=None, encode_kwargs=None, embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers
and InstructorEmbedding python packages installed.
Example
from langchain.embeddings import HuggingFaceInstructEmbeddings
model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
hf = HuggingFaceInstructEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
Parameters
client (Any) β
model_name (str) β
cache_folder (Optional[str]) β
model_kwargs (Dict[str, Any]) β
encode_kwargs (Dict[str, Any]) β
embed_instruction (str) β
query_instruction (str) β
Return type
None
attribute cache_folder: Optional[str] = Noneο
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
attribute embed_instruction: str = 'Represent the document for retrieval: 'ο
Instruction to use for embedding documents.
attribute encode_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass when calling the encode method of the model.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass to the model.
attribute model_name: str = 'hkunlp/instructor-large'ο
Model name to use.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: 'ο
Instruction to use for embedding query.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace instruct model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.MosaicMLInstructorEmbeddings(*, endpoint_url='https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ', retry_sleep=1.0, mosaicml_api_token=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around MosaicML's embedding inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import MosaicMLInstructorEmbeddings
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict"
)
mosaic_llm = MosaicMLInstructorEmbeddings(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
Parameters
endpoint_url (str) β
embed_instruction (str) β
query_instruction (str) β
retry_sleep (float) β
mosaicml_api_token (Optional[str]) β
Return type
None
attribute embed_instruction: str = 'Represent the document for retrieval: 'ο
Instruction used to embed documents.
attribute endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict'ο
Endpoint URL to use.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: 'ο
Instruction used to embed the query.
attribute retry_sleep: float = 1.0ο
How long to sleep if a rate limit is encountered.
embed_documents(texts)[source]ο
Embed documents using a MosaicML deployed instructor embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using a MosaicML deployed instructor embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.SelfHostedEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn, load_fn_kwargs=None, model_reqs=['./', 'torch'], inference_kwargs=None)[source]ο
Bases: langchain.llms.self_hosted.SelfHostedPipeline, langchain.embeddings.base.Embeddings
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example using a model load function:
from langchain.embeddings import SelfHostedEmbeddings
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
def get_pipeline():
model_id = "facebook/bart-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing in a pipeline path:
from langchain.embeddings import SelfHostedHFEmbeddings
import runhouse as rh
import pickle
from transformers import pipeline
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
pipeline = pipeline(model="bert-base-uncased", task="feature-extraction")
rh.blob(pickle.dumps(pipeline),
path="models/pipeline.pkl").save().to(gpu, path="models")
embeddings = SelfHostedHFEmbeddings.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
inference_kwargs (Any) β
Return type
None
attribute inference_fn: Callable = <function _embed_documents>ο
Inference function to extract the embeddings on the remote hardware.
attribute inference_kwargs: Any = Noneο
Any kwargs to pass to the modelβs inference function.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts (List[str]) – The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace transformer model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.SelfHostedHuggingFaceEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, model_reqs=['./', 'sentence_transformers', 'torch'], inference_kwargs=None, model_id='sentence-transformers/all-mpnet-base-v2')[source]ο
Bases: langchain.embeddings.self_hosted.SelfHostedEmbeddings
Runs sentence_transformers embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
inference_kwargs (Any) β
model_id (str) β
Return type
None
attribute hardware: Any = Noneο
Remote hardware to send the inference function to.
attribute inference_fn: Callable = <function _embed_documents>ο
Inference function to extract the embeddings.
attribute load_fn_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model load function.
attribute model_id: str = 'sentence-transformers/all-mpnet-base-v2'ο
Model name to use.
attribute model_load_fn: Callable = <function load_embedding_model>ο
Function to load the model remotely on the server.
attribute model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']ο
Requirements to install on the hardware to run inference on the model.
class langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, model_reqs=['./', 'InstructorEmbedding', 'torch'], inference_kwargs=None, model_id='hkunlp/instructor-large', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source]ο
Bases: langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings
import runhouse as rh
model_name = "hkunlp/instructor-large"
gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')
hf = SelfHostedHuggingFaceInstructEmbeddings(
model_name=model_name, hardware=gpu)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
inference_kwargs (Any) β
model_id (str) β
embed_instruction (str) β
query_instruction (str) β
Return type
None
attribute embed_instruction: str = 'Represent the document for retrieval: 'ο
Instruction to use for embedding documents.
attribute model_id: str = 'hkunlp/instructor-large'ο
Model name to use.
attribute model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']ο
Requirements to install on the hardware to run inference on the model.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: 'ο
Instruction to use for embedding query.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace instruct model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.FakeEmbeddings(*, size)[source]ο
Bases: langchain.embeddings.base.Embeddings, pydantic.main.BaseModel
Parameters
size (int) β
Return type
None
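FakeEmbeddings has no external dependencies and is handy as a stand-in during tests; a minimal sketch (the size value is arbitrary and fixes the dimensionality of the returned vectors):
from langchain.embeddings import FakeEmbeddings
fake = FakeEmbeddings(size=1536)
vectors = fake.embed_documents(["doc one", "doc two"])
query_vector = fake.embed_query("a query")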
embed_documents(texts)[source]ο
Embed search docs.
Parameters
texts (List[str]) β
Return type
List[List[float]]
embed_query(text)[source]ο
Embed query text.
Parameters
text (str) β
Return type
List[float]
class langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper for Aleph Alpha's Asymmetric Embeddings.
AA provides you with an endpoint to embed a document and a query.
The models were optimized to make the embeddings of documents and
the query for a document as similar as possible.
To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
Example
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
document = "This is a content of the document"
query = "What is the content of the document?"
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)
Parameters
client (Any) β
model (Optional[str]) β
hosting (Optional[str]) β
normalize (Optional[bool]) β
compress_to_size (Optional[int]) β
contextual_control_threshold (Optional[int]) β
control_log_additive (Optional[bool]) β
aleph_alpha_api_key (Optional[str]) β
Return type
None
attribute aleph_alpha_api_key: Optional[str] = Noneο
API key for Aleph Alpha API.
attribute compress_to_size: Optional[int] = 128ο
Should the returned embeddings come back as an original 5120-dim vector,
or should it be compressed to 128-dim.
attribute contextual_control_threshold: Optional[int] = Noneο
Attention control parameters only apply to those tokens that have
explicitly been set in the request.
attribute control_log_additive: Optional[bool] = Trueο
Apply controls on prompt items by adding the log(control_factor)
to attention scores.
attribute hosting: Optional[str] = 'https://api.aleph-alpha.com'ο
Optional parameter that specifies which datacenters may process the request.
attribute model: Optional[str] = 'luminous-base'ο
Model name to use.
attribute normalize: Optional[bool] = Trueο
Should returned embeddings be normalized
embed_documents(texts)[source]ο
Call out to Aleph Alpha's asymmetric Document endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to Aleph Alpha's asymmetric query embedding endpoint.
Parameters
text (str) – The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source]ο
Bases: langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding
The symmetric version of Aleph Alpha's semantic embeddings.
The main difference is that here, both the documents and
queries are embedded with a SemanticRepresentation.Symmetric
Example
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding
embeddings = AlephAlphaSymmetricSemanticEmbedding()
text = "This is a test text"
doc_result = embeddings.embed_documents([text])
query_result = embeddings.embed_query(text)
Parameters
client (Any) β
model (Optional[str]) β
hosting (Optional[str]) β
normalize (Optional[bool]) β
compress_to_size (Optional[int]) β
contextual_control_threshold (Optional[int]) β
control_log_additive (Optional[bool]) β
aleph_alpha_api_key (Optional[str]) β
Return type
None
embed_documents(texts)[source]ο
Call out to Aleph Alpha's Document endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to Aleph Alpha's symmetric query embedding endpoint.
Parameters
text (str) – The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
langchain.embeddings.SentenceTransformerEmbeddingsο
alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings
class langchain.embeddings.MiniMaxEmbeddings(*, endpoint_url='https://api.minimax.chat/v1/embeddings', model='embo-01', embed_type_db='db', embed_type_query='query', minimax_group_id=None, minimax_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around MiniMax's embedding inference service.
To use, you should have the environment variables MINIMAX_GROUP_ID and
MINIMAX_API_KEY set with your group ID and API key, or pass them as named
parameters to the constructor.
Example
from langchain.embeddings import MiniMaxEmbeddings
embeddings = MiniMaxEmbeddings()
query_text = "This is a test query."
query_result = embeddings.embed_query(query_text)
document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])
Parameters
endpoint_url (str) β
model (str) β
embed_type_db (str) β
embed_type_query (str) β
minimax_group_id (Optional[str]) β
minimax_api_key (Optional[str]) β
Return type
None
attribute embed_type_db: str = 'db'ο
For embed_documents
attribute embed_type_query: str = 'query'ο
For embed_query
attribute endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'ο
Endpoint URL to use.
attribute minimax_api_key: Optional[str] = Noneο
API Key for MiniMax API.
attribute minimax_group_id: Optional[str] = Noneο
Group ID for MiniMax API.
attribute model: str = 'embo-01'ο
Embeddings model name to use.
embed_documents(texts)[source]ο
Embed documents using a MiniMax embedding endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using a MiniMax embedding endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.BedrockEmbeddings(*, client=None, region_name=None, credentials_profile_name=None, model_id='amazon.titan-e1t-medium', model_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Embeddings provider to invoke Bedrock embedding models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Bedrock service.
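A minimal usage sketch; the profile name, region, and model id below are assumptions to be replaced with values valid for your AWS account:
from langchain.embeddings import BedrockEmbeddings
embeddings = BedrockEmbeddings(
    credentials_profile_name="default",   # profile from ~/.aws/credentials
    region_name="us-west-2",
    model_id="amazon.titan-e1t-medium",
)
query_result = embeddings.embed_query("This is a test query.")
doc_results = embeddings.embed_documents(["First document.", "Second document."])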
Parameters
client (Any) β
region_name (Optional[str]) β
credentials_profile_name (Optional[str]) β
model_id (str) β
model_kwargs (Optional[Dict]) β
Return type
None
attribute credentials_profile_name: Optional[str] = Noneο
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
attribute model_id: str = 'amazon.titan-e1t-medium'ο
Id of the model to call, e.g. amazon.titan-e1t-medium; this is
equivalent to the modelId property in the list-foundation-models API.
attribute model_kwargs: Optional[Dict] = Noneο
Key word arguments to pass to the model.
attribute region_name: Optional[str] = Noneο
The AWS region, e.g. us-west-2. Falls back to the AWS_DEFAULT_REGION env variable
or the region specified in ~/.aws/config if not provided here.
embed_documents(texts, chunk_size=1)[source]ο
Compute doc embeddings using a Bedrock model.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (int) β Bedrock currently only allows single string
inputs, so chunk size is always 1. This input is here
only for compatibility with the embeddings interface.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a Bedrock model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.DeepInfraEmbeddings(*, model_id='sentence-transformers/clip-ViT-B-32', normalize=False, embed_instruction='passage: ', query_instruction='query: ', model_kwargs=None, deepinfra_api_token=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around Deep Infra's embedding inference service.
To use, you should have the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
There are multiple embeddings models available,
see https://deepinfra.com/models?type=embeddings.
Example
from langchain.embeddings import DeepInfraEmbeddings
deepinfra_emb = DeepInfraEmbeddings(
model_id="sentence-transformers/clip-ViT-B-32",
deepinfra_api_token="my-api-key"
)
r1 = deepinfra_emb.embed_documents(
[
"Alpha is the first letter of Greek alphabet",
"Beta is the second letter of Greek alphabet",
]
)
r2 = deepinfra_emb.embed_query(
"What is the second letter of Greek alphabet"
)
Parameters
model_id (str) β
normalize (bool) β
embed_instruction (str) β
query_instruction (str) β
model_kwargs (Optional[dict]) β
deepinfra_api_token (Optional[str]) β
Return type
None
attribute embed_instruction: str = 'passage: 'ο
Instruction used to embed documents.
attribute model_id: str = 'sentence-transformers/clip-ViT-B-32'ο
Embeddings model to use.
attribute model_kwargs: Optional[dict] = Noneο
Other model keyword args
attribute normalize: bool = Falseο
Whether to normalize the computed embeddings.
attribute query_instruction: str = 'query: 'ο
Instruction used to embed the query.
embed_documents(texts)[source]ο
Embed documents using a Deep Infra deployed embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using a Deep Infra deployed embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.DashScopeEmbeddings(*, client=None, model='text-embedding-v1', dashscope_api_key=None, max_retries=5)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around DashScope embedding models.
To use, you should have the dashscope python package installed, and the
environment variable DASHSCOPE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key")
Example
import os
os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY"
from langchain.embeddings.dashscope import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(
model="text-embedding-v1",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
Parameters
client (Any) β
model (str) β
dashscope_api_key (Optional[str]) β
max_retries (int) β
Return type
None
attribute dashscope_api_key: Optional[str] = Noneο
API key for the DashScope API.
embed_documents(texts)[source]ο
Call out to DashScope's embedding endpoint for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size β The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to DashScope's embedding endpoint for embedding query text.
Parameters
text (str) β The text to embed.
Returns
Embedding for the text.
Return type
List[float]
class langchain.embeddings.EmbaasEmbeddings(*, model='e5-large-v2', instruction=None, api_url='https://api.embaas.io/v1/embeddings/', embaas_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around embaas's embedding service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Initialise with default model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb = EmbaasEmbeddings()
# Initialise with custom model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb_model = "instructor-large"
emb_inst = "Represent the Wikipedia document for retrieval"
emb = EmbaasEmbeddings(
model=emb_model,
instruction=emb_inst
)
Parameters
model (str) β
instruction (Optional[str]) β
api_url (str) β
embaas_api_key (Optional[str]) β
Return type
None
attribute api_url: str = 'https://api.embaas.io/v1/embeddings/'ο
The URL for the embaas embeddings API.
attribute instruction: Optional[str] = Noneο
Instruction used for domain-specific embeddings.
attribute model: str = 'e5-large-v2'ο
The model used for embeddings.
embed_documents(texts)[source]ο
Get embeddings for a list of texts.
Parameters
texts (List[str]) β The list of texts to get embeddings for.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Get embeddings for a single text.
Parameters
text (str) β The text to get embeddings for.
Returns
List of embeddings.
Return type
List[float]
https://api.python.langchain.com/en/latest/modules/embeddings.html
Memory
class langchain.memory.CassandraChatMessageHistory(contact_points, session_id, port=9042, username='cassandra', password='cassandra', keyspace_name='chat_history', table_name='message_store')[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history that stores history in Cassandra.
Parameters
contact_points (List[str]) β list of ips to connect to Cassandra cluster
session_id (str) β arbitrary key that is used to store the messages
of a single chat session.
port (int) β port to connect to Cassandra cluster
username (str) β username to connect to Cassandra cluster
password (str) β password to connect to Cassandra cluster
keyspace_name (str) β name of the keyspace to use
table_name (str) β name of the table to use
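A minimal sketch; the contact point, session id, and credentials below are placeholders, and the keyspace and table fall back to the defaults shown in the signature. The add_user_message/add_ai_message helpers come from the BaseChatMessageHistory interface.
from langchain.memory import CassandraChatMessageHistory
history = CassandraChatMessageHistory(
    contact_points=["127.0.0.1"],        # illustrative cluster address
    session_id="user-1234-session",      # arbitrary key identifying the chat session
    username="cassandra",
    password="cassandra",
)
history.add_user_message("Hi there!")
history.add_ai_message("Hello! How can I help?")
print(history.messages)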
property messages: List[langchain.schema.BaseMessage]ο
Retrieve the messages from Cassandra
add_message(message)[source]ο
Append the message to the record in Cassandra
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from Cassandra
Return type
None
class langchain.memory.ChatMessageHistory(*, messages=[])[source]ο
Bases: langchain.schema.BaseChatMessageHistory, pydantic.main.BaseModel
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
None
attribute messages: List[langchain.schema.BaseMessage] = []ο
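A minimal in-memory usage sketch:
from langchain.memory import ChatMessageHistory
history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("what's up?")
history.messages   # list of HumanMessage / AIMessage objects in insertion order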
add_message(message)[source]ο
Add a self-created message to the store
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Remove all messages from the store
Return type
None
class langchain.memory.CombinedMemory(*, memories)[source]ο
Bases: langchain.schema.BaseMemory
Class for combining multiple memories' data together.
Parameters
memories (List[langchain.schema.BaseMemory]) β
Return type
None
attribute memories: List[langchain.schema.BaseMemory] [Required]ο
For tracking all the memories that should be accessed.
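A minimal sketch combining two buffer memories; distinct memory_key values keep the memories' variables from colliding:
from langchain.memory import (
    CombinedMemory,
    ConversationBufferMemory,
    ConversationBufferWindowMemory,
)
full_history = ConversationBufferMemory(memory_key="history", input_key="input")
recent_history = ConversationBufferWindowMemory(memory_key="recent_history", input_key="input", k=2)
memory = CombinedMemory(memories=[full_history, recent_history])
memory.save_context({"input": "hi"}, {"output": "hello!"})
memory.load_memory_variables({})   # returns both "history" and "recent_history"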
clear()[source]ο
Clear context from this session for every memory.
Return type
None
load_memory_variables(inputs)[source]ο
Load all vars from sub-memories.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, str]
save_context(inputs, outputs)[source]ο
Save context from this session for every memory.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property memory_variables: List[str]ο
All the memory variables that this instance provides.
class langchain.memory.ConversationBufferMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', memory_key='history')[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory
Buffer for storing conversation memory.
Parameters
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
human_prefix (str) β
ai_prefix (str) β
memory_key (str) β
Return type
None
attribute ai_prefix: str = 'AI'ο
attribute human_prefix: str = 'Human'ο
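A minimal usage sketch:
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory()
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})
# {'history': 'Human: hi\nAI: whats up'}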
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
property buffer: Anyο
String buffer of memory.
class langchain.memory.ConversationBufferWindowMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', memory_key='history', k=5)[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory
Buffer for storing conversation memory.
Parameters
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
human_prefix (str) β
ai_prefix (str) β
memory_key (str) β
k (int) β
Return type
None
attribute ai_prefix: str = 'AI'ο
attribute human_prefix: str = 'Human'ο
attribute k: int = 5ο
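A minimal sketch showing the sliding window; with k=1 only the most recent exchange is returned:
from langchain.memory import ConversationBufferWindowMemory
memory = ConversationBufferWindowMemory(k=1)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
memory.load_memory_variables({})
# {'history': 'Human: not much you\nAI: not much'}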
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, str]
property buffer: List[langchain.schema.BaseMessage]ο
String buffer of memory.
class langchain.memory.ConversationEntityMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', llm, entity_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True), entity_summarization_prompt=PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\n\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\n\nFull conversation history (for context):\n{history}\n\nEntity to summarize:\n{entity}\n\nExisting summary of {entity}:\n{summary}\n\nLast line of conversation:\nHuman: {input}\nUpdated summary:', template_format='f-string', validate_template=True), entity_cache=[], k=3, chat_history_key='history', entity_store=None)[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory
Entity extractor & summarizer memory.
Extracts named entities from the recent chat history and generates summaries.
With a swappable entity store, entities can be persisted across conversations.
Defaults to an in-memory entity store, and can be swapped out for a Redis,
SQLite, or other entity store.
Parameters
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
human_prefix (str) β
ai_prefix (str) β
llm (langchain.base_language.BaseLanguageModel) β
entity_extraction_prompt (langchain.prompts.base.BasePromptTemplate) β
entity_summarization_prompt (langchain.prompts.base.BasePromptTemplate) β
entity_cache (List[str]) β
k (int) β
chat_history_key (str) β
entity_store (langchain.memory.entity.BaseEntityStore) β
Return type
None
attribute ai_prefix: str = 'AI'ο
attribute chat_history_key: str = 'history'ο
attribute entity_cache: List[str] = []ο
attribute entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True)ο
attribute entity_store: langchain.memory.entity.BaseEntityStore [Optional]ο
attribute entity_summarization_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\n\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\n\nFull conversation history (for context):\n{history}\n\nEntity to summarize:\n{entity}\n\nExisting summary of {entity}:\n{summary}\n\nLast line of conversation:\nHuman: {input}\nUpdated summary:', template_format='f-string', validate_template=True)ο
attribute human_prefix: str = 'Human'ο
attribute k: int = 3ο
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
clear()[source]ο
Clear memory contents.
Return type
None
load_memory_variables(inputs)[source]ο
Returns chat history and all generated entities with summaries if available,
and updates or clears the recent entity cache.
New entity names can be found when calling this method, before the entity
summaries are generated, so the entity cache values may be empty if no entity
descriptions have been generated yet.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
save_context(inputs, outputs)[source]ο
Save context from this conversation history to the entity store.
Generates a summary for each entity in the entity cache by prompting
the model, and saves these summaries to the entity store.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property buffer: List[langchain.schema.BaseMessage]ο
Access chat memory messages.
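A minimal standalone sketch (assuming an OpenAI API key is available in the environment; the entity names are illustrative):
from langchain.llms import OpenAI
from langchain.memory import ConversationEntityMemory

llm = OpenAI(temperature=0)
memory = ConversationEntityMemory(llm=llm)

# load_memory_variables extracts entities from the input and caches them;
# save_context then writes an updated summary for each cached entity.
memory.load_memory_variables({"input": "Alice is a software engineer at Acme Corp."})
memory.save_context(
    {"input": "Alice is a software engineer at Acme Corp."},
    {"output": "Got it, Alice works at Acme Corp."},
)
print(memory.entity_store.get("Alice"))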
class langchain.memory.ConversationKGMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, k=2, human_prefix='Human', ai_prefix='AI', kg=None, knowledge_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True), entity_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True), llm, summary_message_cls=<class 'langchain.schema.SystemMessage'>, memory_key='history')[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory
Knowledge graph memory for storing conversation memory.
Integrates with an external knowledge graph to store and retrieve
information about knowledge triples in the conversation.
Parameters
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
k (int) β
human_prefix (str) β
ai_prefix (str) β
kg (langchain.graphs.networkx_graph.NetworkxEntityGraph) β
knowledge_extraction_prompt (langchain.prompts.base.BasePromptTemplate) β
entity_extraction_prompt (langchain.prompts.base.BasePromptTemplate) β
llm (langchain.base_language.BaseLanguageModel) β
summary_message_cls (Type[langchain.schema.BaseMessage]) β
memory_key (str) β
Return type
None
attribute ai_prefix: str = 'AI'ο
attribute entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True)ο
attribute human_prefix: str = 'Human'ο
attribute k: int = 2ο
Number of previous utterances to include in the context.
attribute kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]ο
attribute knowledge_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True)ο
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
attribute summary_message_cls: Type[langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'>ο
clear()[source]ο
Clear memory contents.
Return type
None
get_current_entities(input_string)[source]ο
Parameters
input_string (str) β
Return type
List[str]
get_knowledge_triplets(input_string)[source]ο
Parameters
input_string (str) β
Return type
List[langchain.graphs.networkx_graph.KnowledgeTriple]
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
save_context(inputs, outputs)[source]ο
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
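A minimal standalone sketch (assuming an OpenAI API key is set and the networkx package is installed for the underlying graph):
from langchain.llms import OpenAI
from langchain.memory import ConversationKGMemory

llm = OpenAI(temperature=0)
memory = ConversationKGMemory(llm=llm)

# Saving a turn extracts triples such as (Sam, favorite sport is, tennis)
# and adds them to the knowledge graph.
memory.save_context(
    {"input": "Sam's favorite sport is tennis."},
    {"output": "Good to know!"},
)
print(memory.get_knowledge_triplets("Sam's favorite sport is tennis."))
print(memory.load_memory_variables({"input": "What do you know about Sam?"}))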
class langchain.memory.ConversationStringBufferMemory(*, human_prefix='Human', ai_prefix='AI', buffer='', output_key=None, input_key=None, memory_key='history')[source]ο
Bases: langchain.schema.BaseMemory
Buffer for storing conversation memory.
Parameters
human_prefix (str) β
ai_prefix (str) β
buffer (str) β
output_key (Optional[str]) β
input_key (Optional[str]) β
memory_key (str) β
Return type
None
attribute ai_prefix: str = 'AI'ο
Prefix to use for AI generated responses.
attribute buffer: str = ''ο
attribute human_prefix: str = 'Human'ο
attribute input_key: Optional[str] = Noneο
attribute output_key: Optional[str] = Noneο
clear()[source]ο
Clear memory contents.
Return type
None
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, str]
save_context(inputs, outputs)[source]ο
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property memory_variables: List[str]ο
Will always return the list of memory variables.
class langchain.memory.ConversationSummaryBufferMemory(*, human_prefix='Human', ai_prefix='AI', llm, prompt=PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls=<class 'langchain.schema.SystemMessage'>, chat_memory=None, output_key=None, input_key=None, return_messages=False, max_token_limit=2000, moving_summary_buffer='', memory_key='history')[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory, langchain.memory.summary.SummarizerMixin
Buffer with summarizer for storing conversation memory.
Parameters
human_prefix (str) β
ai_prefix (str) β
llm (langchain.base_language.BaseLanguageModel) β
prompt (langchain.prompts.base.BasePromptTemplate) β
summary_message_cls (Type[langchain.schema.BaseMessage]) β
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
max_token_limit (int) β
moving_summary_buffer (str) β
memory_key (str) β
Return type
None
attribute max_token_limit: int = 2000ο
attribute memory_key: str = 'history'ο
attribute moving_summary_buffer: str = ''ο
clear()[source]ο
Clear memory contents.
Return type
None
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
prune()[source]ο
Prune the buffer if it exceeds the max token limit.
Return type
None
save_context(inputs, outputs)[source]ο
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property buffer: List[langchain.schema.BaseMessage]ο
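A minimal sketch (assuming an OpenAI API key is set; the small max_token_limit is chosen only to force summarization):
from langchain.llms import OpenAI
from langchain.memory import ConversationSummaryBufferMemory

llm = OpenAI(temperature=0)
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=40)

memory.save_context({"input": "Hi, I'm planning a trip to Japan."}, {"output": "Sounds exciting!"})
memory.save_context({"input": "I want to visit Kyoto and Tokyo."}, {"output": "Both are great choices."})

# Turns that no longer fit the token budget are folded into moving_summary_buffer.
print(memory.moving_summary_buffer)
print(memory.load_memory_variables({}))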
class langchain.memory.ConversationSummaryMemory(*, human_prefix='Human', ai_prefix='AI', llm, prompt=PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls=<class 'langchain.schema.SystemMessage'>, chat_memory=None, output_key=None, input_key=None, return_messages=False, buffer='', memory_key='history')[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory, langchain.memory.summary.SummarizerMixin
Conversation summarizer to memory.
Parameters
human_prefix (str) β
ai_prefix (str) β
llm (langchain.base_language.BaseLanguageModel) β
prompt (langchain.prompts.base.BasePromptTemplate) β
summary_message_cls (Type[langchain.schema.BaseMessage]) β
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
buffer (str) β
memory_key (str) β
Return type
None
attribute buffer: str = ''ο
clear()[source]ο
Clear memory contents.
Return type
None
classmethod from_messages(llm, chat_memory, *, summarize_step=2, **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
chat_memory (langchain.schema.BaseChatMessageHistory) β
summarize_step (int) β
kwargs (Any) β
Return type
langchain.memory.summary.ConversationSummaryMemory
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
save_context(inputs, outputs)[source]ο
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
class langchain.memory.ConversationTokenBufferMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', llm, memory_key='history', max_token_limit=2000)[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory
Buffer for storing conversation memory.
Parameters
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
human_prefix (str) β
ai_prefix (str) β
llm (langchain.base_language.BaseLanguageModel) β
memory_key (str) β
max_token_limit (int) β
Return type
None
attribute ai_prefix: str = 'AI'ο
attribute human_prefix: str = 'Human'ο
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
attribute max_token_limit: int = 2000ο
attribute memory_key: str = 'history'ο
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
save_context(inputs, outputs)[source]ο
Save context from this conversation to buffer, pruning it if it exceeds the max token limit.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property buffer: List[langchain.schema.BaseMessage]ο
String buffer of memory.
class langchain.memory.CosmosDBChatMessageHistory(cosmos_endpoint, cosmos_database, cosmos_container, session_id, user_id, credential=None, connection_string=None, ttl=None, cosmos_client_kwargs=None)[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat history backed by Azure CosmosDB.
Parameters
cosmos_endpoint (str) β
cosmos_database (str) β
cosmos_container (str) β
session_id (str) β
user_id (str) β
credential (Any) β
connection_string (Optional[str]) β
ttl (Optional[int]) β
cosmos_client_kwargs (Optional[dict]) β
prepare_cosmos()[source]ο
Prepare the CosmosDB client.
Use this function or the context manager to make sure your database is ready.
Return type
None
load_messages()[source]ο
Retrieve the messages from Cosmos
Return type
None
add_message(message)[source]ο
Add a self-created message to the store
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
upsert_messages()[source]ο
Update the cosmosdb item.
Return type
None
clear()[source]ο
Clear session memory from this memory and cosmos.
Return type
None
class langchain.memory.DynamoDBChatMessageHistory(table_name, session_id, endpoint_url=None)[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history that stores history in AWS DynamoDB.
This class expects that a DynamoDB table named table_name,
with a partition key of SessionId, is present.
Parameters
table_name (str) β name of the DynamoDB table
session_id (str) β arbitrary key that is used to store the messages
of a single chat session.
endpoint_url (Optional[str]) β URL of the AWS endpoint to connect to. This argument
is optional and useful for test purposes, like using Localstack.
If you plan to use AWS cloud service, you normally donβt have to
worry about setting the endpoint_url.
property messages: List[langchain.schema.BaseMessage]ο
Retrieve the messages from DynamoDB
add_message(message)[source]ο
Append the message to the record in DynamoDB
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from DynamoDB
Return type
None
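A minimal sketch (assuming boto3 is installed, AWS credentials are configured, and a DynamoDB table named "SessionTable" with partition key SessionId already exists; the table and session names are placeholders):
from langchain.memory import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="user-123")

# add_user_message / add_ai_message are convenience helpers inherited from
# BaseChatMessageHistory; both delegate to add_message.
history.add_user_message("Hello!")
history.add_ai_message("Hi! How can I help?")
print(history.messages)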
class langchain.memory.FileChatMessageHistory(file_path)[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history that stores history in a local file.
Parameters
file_path (str) β path of the local file to store the messages.
property messages: List[langchain.schema.BaseMessage]ο
Retrieve the messages from the local file
add_message(message)[source]ο
Append the message to the record in the local file
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from the local file
Return type
None
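A minimal sketch; the file path is an arbitrary local path:
from langchain.memory import FileChatMessageHistory

history = FileChatMessageHistory(file_path="chat_history.json")
history.add_user_message("Remember that my cat is named Felix.")
history.add_ai_message("Noted: your cat is named Felix.")
print(history.messages)  # messages are persisted to the JSON file between runs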
class langchain.memory.InMemoryEntityStore(*, store={})[source]ο
Bases: langchain.memory.entity.BaseEntityStore
Basic in-memory entity store.
Parameters
store (Dict[str, Optional[str]]) β
Return type
None
attribute store: Dict[str, Optional[str]] = {}ο
clear()[source]ο
Delete all entities from store.
Return type
None
delete(key)[source]ο
Delete entity value from store.
Parameters
key (str) β
Return type
None
exists(key)[source]ο
Check if entity exists in store.
Parameters
key (str) β
Return type
bool
get(key, default=None)[source]ο
Get entity value from store.
Parameters
key (str) β
default (Optional[str]) β
Return type
Optional[str]
set(key, value)[source]ο
Set entity value in store.
Parameters
key (str) β
value (Optional[str]) β
Return type
None
class langchain.memory.MomentoChatMessageHistory(session_id, cache_client, cache_name, *, key_prefix='message_store:', ttl=None, ensure_cache_exists=True)[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history cache that uses Momento as a backend.
See https://gomomento.com/
Parameters
session_id (str) β
cache_client (momento.CacheClient) β
cache_name (str) β
key_prefix (str) β
ttl (Optional[timedelta]) β
ensure_cache_exists (bool) β
classmethod from_client_params(session_id, cache_name, ttl, *, configuration=None, auth_token=None, **kwargs)[source]ο
Construct cache from CacheClient parameters.
Parameters
session_id (str) β
cache_name (str) β
ttl (timedelta) β
configuration (Optional[momento.config.Configuration]) β
auth_token (Optional[str]) β
kwargs (Any) β
Return type
MomentoChatMessageHistory
property messages: list[langchain.schema.BaseMessage]ο
Retrieve the messages from Momento.
Raises
SdkException β Momento service or network error
Exception β Unexpected response
Returns
List of cached messages
Return type
list[BaseMessage]
add_message(message)[source]ο
Store a message in the cache.
Parameters
message (BaseMessage) β The message object to store.
Raises
SdkException β Momento service or network error.
Exception β Unexpected response.
Return type
None
clear()[source]ο
Remove the sessionβs messages from the cache.
Raises
SdkException β Momento service or network error.
Exception β Unexpected response.
Return type
None
class langchain.memory.MongoDBChatMessageHistory(connection_string, session_id, database_name='chat_history', collection_name='message_store')[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history that stores history in MongoDB.
Parameters
connection_string (str) β connection string to connect to MongoDB
session_id (str) β arbitrary key that is used to store the messages
of a single chat session.
database_name (str) β name of the database to use
collection_name (str) β name of the collection to use
property messages: List[langchain.schema.BaseMessage]ο
Retrieve the messages from MongoDB
add_message(message)[source]ο
Append the message to the record in MongoDB
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from MongoDB
Return type
None
class langchain.memory.MotorheadMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, url='https://api.getmetal.io/v1/motorhead', session_id, context=None, api_key=None, client_id=None, timeout=3000, memory_key='history')[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory
Parameters
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
url (str) β
session_id (str) β
context (Optional[str]) β
api_key (Optional[str]) β
client_id (Optional[str]) β
timeout (int) β
memory_key (str) β
Return type
None
attribute api_key: Optional[str] = Noneο
attribute client_id: Optional[str] = Noneο
attribute context: Optional[str] = Noneο
attribute session_id: str [Required]ο
attribute url: str = 'https://api.getmetal.io/v1/motorhead'ο
delete_session()[source]ο
Delete a session
Return type
None
async init()[source]ο
Return type
None
load_memory_variables(values)[source]ο
Return key-value pairs given the text input to the chain.
If None, return all memories
Parameters
values (Dict[str, Any]) β
Return type
Dict[str, Any]
save_context(inputs, outputs)[source]ο
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property memory_variables: List[str]ο
Input keys this memory class will load dynamically.
class langchain.memory.PostgresChatMessageHistory(session_id, connection_string='postgresql://postgres:mypassword@localhost/chat_history', table_name='message_store')[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history stored in a Postgres database.
Parameters
session_id (str) β
connection_string (str) β
table_name (str) β
property messages: List[langchain.schema.BaseMessage]ο
Retrieve the messages from PostgreSQL
add_message(message)[source]ο
Append the message to the record in PostgreSQL
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from PostgreSQL
Return type
None
class langchain.memory.ReadOnlySharedMemory(*, memory)[source]ο
Bases: langchain.schema.BaseMemory
A memory wrapper that is read-only and cannot be changed.
Parameters
memory (langchain.schema.BaseMemory) β
Return type
None
attribute memory: langchain.schema.BaseMemory [Required]ο
clear()[source]ο
Nothing to clear, got a memory like a vault.
Return type
None
load_memory_variables(inputs)[source]ο
Load memory variables from memory.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, str]
save_context(inputs, outputs)[source]ο
Nothing should be saved or changed
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property memory_variables: List[str]ο
Return memory variables.
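A minimal sketch showing how the wrapper lets several chains read one memory without mutating it (ConversationBufferMemory is used here only as the wrapped memory):
from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory

memory = ConversationBufferMemory(memory_key="chat_history")
memory.save_context({"input": "hi"}, {"output": "hello"})

readonly = ReadOnlySharedMemory(memory=memory)
print(readonly.load_memory_variables({}))               # reads from the wrapped memory
readonly.save_context({"input": "x"}, {"output": "y"})  # no-op; nothing is written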
class langchain.memory.RedisChatMessageHistory(session_id, url='redis://localhost:6379/0', key_prefix='message_store:', ttl=None)[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history stored in a Redis database.
Parameters
session_id (str) β
url (str) β
key_prefix (str) β
ttl (Optional[int]) β
property key: strο
Construct the record key to use
property messages: List[langchain.schema.BaseMessage]ο
Retrieve the messages from Redis
add_message(message)[source]ο
Append the message to the record in Redis
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from Redis
Return type
None
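A minimal sketch (assuming a Redis server is reachable at the default URL and the redis package is installed):
from langchain.memory import RedisChatMessageHistory

history = RedisChatMessageHistory(
    session_id="session-9", url="redis://localhost:6379/0", ttl=600
)
history.add_user_message("Ping")
history.add_ai_message("Pong")
print(history.messages)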
class langchain.memory.RedisEntityStore(session_id='default', url='redis://localhost:6379/0', key_prefix='memory_store', ttl=86400, recall_ttl=259200, *args, redis_client=None)[source]ο
Bases: langchain.memory.entity.BaseEntityStore
Redis-backed Entity store. Entities get a TTL of 1 day by default, and
that TTL is extended by 3 days every time the entity is read back.
Parameters
session_id (str) β
url (str) β
key_prefix (str) β
ttl (Optional[int]) β
recall_ttl (Optional[int]) β
args (Any) β
redis_client (Any) β
Return type
None
attribute key_prefix: str = 'memory_store'ο
attribute recall_ttl: Optional[int] = 259200ο
attribute redis_client: Any = Noneο
attribute session_id: str = 'default'ο
attribute ttl: Optional[int] = 86400ο
clear()[source]ο
Delete all entities from store.
Return type
None
delete(key)[source]ο
Delete entity value from store.
Parameters
key (str) β
Return type
None
exists(key)[source]ο
Check if entity exists in store.
Parameters
key (str) β
Return type
bool
get(key, default=None)[source]ο
Get entity value from store.
Parameters
key (str) β
default (Optional[str]) β
Return type
Optional[str]
set(key, value)[source]ο
Set entity value in store.
Parameters
key (str) β
value (Optional[str]) β
Return type
None
property full_key_prefix: strο
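A minimal sketch of using the store directly (assuming a Redis server at the default URL); it can also be passed as entity_store to ConversationEntityMemory:
from langchain.memory import RedisEntityStore

store = RedisEntityStore(session_id="user-1", url="redis://localhost:6379/0")
store.set("Alice", "Alice is a software engineer at Acme Corp.")
print(store.exists("Alice"))  # True
print(store.get("Alice"))     # the stored summary; reading the entity extends its TTL
store.delete("Alice")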
class langchain.memory.SQLChatMessageHistory(session_id, connection_string, table_name='message_store')[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history stored in an SQL database.
Parameters
session_id (str) β
connection_string (str) β
table_name (str) β
property messages: List[langchain.schema.BaseMessage]ο
Retrieve all messages from db
add_message(message)[source]ο
Append the message to the record in db
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from db
Return type
None
class langchain.memory.SQLiteEntityStore(session_id='default', db_file='entities.db', table_name='memory_store', *args)[source]ο
Bases: langchain.memory.entity.BaseEntityStore
SQLite-backed Entity store
Parameters
session_id (str) β
db_file (str) β
table_name (str) β
args (Any) β
Return type
None
attribute session_id: str = 'default'ο
attribute table_name: str = 'memory_store'ο
clear()[source]ο
Delete all entities from store.
Return type
None
delete(key)[source]ο
Delete entity value from store.
Parameters
key (str) β
Return type
None
exists(key)[source]ο
Check if entity exists in store.
Parameters
key (str) β
Return type
bool
get(key, default=None)[source]ο
Get entity value from store.
Parameters
key (str) β
default (Optional[str]) β
Return type
Optional[str]
set(key, value)[source]ο
Set entity value in store.
Parameters
key (str) β
value (Optional[str]) β
Return type
None
property full_table_name: strο
class langchain.memory.SimpleMemory(*, memories={})[source]ο
Bases: langchain.schema.BaseMemory
Simple memory for storing context or other bits of information that shouldnβt
ever change between prompts.
Parameters
memories (Dict[str, Any]) β
Return type
None
attribute memories: Dict[str, Any] = {}ο
clear()[source]ο
Nothing to clear, got a memory like a vault.
Return type
None
load_memory_variables(inputs)[source]ο
Return key-value pairs given the text input to the chain.
If None, return all memories
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, str]
save_context(inputs, outputs)[source]ο
Nothing should be saved or changed, my memory is set in stone.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property memory_variables: List[str]ο
Input keys this memory class will load dynamically.
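A minimal sketch; the memory values are fixed at construction time:
from langchain.memory import SimpleMemory

memory = SimpleMemory(memories={"company": "Acme Corp", "tone": "friendly"})
print(memory.memory_variables)           # ['company', 'tone']
print(memory.load_memory_variables({}))  # always returns the same fixed values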
class langchain.memory.VectorStoreRetrieverMemory(*, retriever, memory_key='history', input_key=None, return_docs=False)[source]ο
Bases: langchain.schema.BaseMemory
Class for a VectorStore-backed memory object.
Parameters
retriever (langchain.vectorstores.base.VectorStoreRetriever) β
memory_key (str) β
input_key (Optional[str]) β
return_docs (bool) β
Return type
None
attribute input_key: Optional[str] = Noneο
Key name to index the inputs to load_memory_variables.
attribute memory_key: str = 'history'ο
Key name to locate the memories in the result of load_memory_variables.
attribute retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]ο
VectorStoreRetriever object to connect to.
attribute return_docs: bool = Falseο
Whether or not to return the result of querying the database directly.
clear()[source]ο
Nothing to clear.
Return type
None
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Union[List[langchain.schema.Document], str]]
save_context(inputs, outputs)[source]ο
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property memory_variables: List[str]ο
The list of keys emitted from the load_memory_variables method.
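A minimal sketch using FAISS as the backing vector store (assuming the faiss-cpu package is installed and an OpenAI API key is set; any vector store retriever works the same way):
from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import VectorStoreRetrieverMemory
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(["placeholder"], OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 1})
memory = VectorStoreRetrieverMemory(retriever=retriever)

memory.save_context({"input": "My favorite sport is climbing."}, {"output": "Nice!"})
memory.save_context({"input": "I work as a geologist."}, {"output": "Interesting!"})

# The most relevant past exchange is retrieved for the new input.
print(memory.load_memory_variables({"input": "What is my favorite sport?"}))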
class langchain.memory.ZepChatMessageHistory(session_id, url='http://localhost:8000')[source]ο
Bases: langchain.schema.BaseChatMessageHistory
A ChatMessageHistory implementation that uses Zep as a backend.
Recommended usage:
# Set up Zep Chat History
zep_chat_history = ZepChatMessageHistory(
    session_id=session_id,
    url=ZEP_API_URL,
)
# Use a standard ConversationBufferMemory to encapsulate the Zep chat history
memory = ConversationBufferMemory(
    memory_key="chat_history", chat_memory=zep_chat_history
)
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions and more, see: https://getzep.github.io/
This class is a thin wrapper around the zep-python package. Additional
Zep functionality is exposed via the zep_summary and zep_messages
properties.
For more information on the zep-python package, see:
https://github.com/getzep/zep-python
Parameters
session_id (str) β
url (str) β
Return type
None
property messages: List[langchain.schema.BaseMessage]ο
Retrieve messages from Zep memory
property zep_messages: List[Message]ο
Retrieve messages from Zep memory
property zep_summary: Optional[str]ο
Retrieve summary from Zep memory
add_message(message)[source]ο
Append the message to the Zep memory history
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
search(query, metadata=None, limit=None)[source]ο
Search Zep memory for messages matching the query
Parameters
query (str) β
metadata (Optional[Dict]) β
limit (Optional[int]) β
Return type
List[MemorySearchResult]
clear()[source]ο
Clear session memory from Zep. Note that Zep is long-term storage for memory
and this is not advised unless you have specific data retention requirements.
Return type
None
Output Parsersο
class langchain.output_parsers.BooleanOutputParser(*, true_val='YES', false_val='NO')[source]ο
Bases: langchain.schema.BaseOutputParser[bool]
Parameters
true_val (str) β
false_val (str) β
Return type
None
attribute false_val: str = 'NO'ο
attribute true_val: str = 'YES'ο
parse(text)[source]ο
Parse the output of an LLM call to a boolean.
Parameters
text (str) β output of language model
Returns
boolean
Return type
bool
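A minimal sketch; text other than the configured true/false values is not accepted:
from langchain.output_parsers import BooleanOutputParser

parser = BooleanOutputParser()
print(parser.parse("YES"))  # True
print(parser.parse("NO"))   # False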
class langchain.output_parsers.CombiningOutputParser(*, parsers)[source]ο
Bases: langchain.schema.BaseOutputParser
Class to combine multiple output parsers into one.
Parameters
parsers (List[langchain.schema.BaseOutputParser]) β
Return type
None
attribute parsers: List[langchain.schema.BaseOutputParser] [Required]ο
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
Dict[str, Any]
class langchain.output_parsers.CommaSeparatedListOutputParser[source]ο
Bases: langchain.output_parsers.list.ListOutputParser
Parse out comma-separated lists.
Return type
None
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
List[str]
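A minimal sketch:
from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
print(parser.get_format_instructions())
print(parser.parse("red, green, blue"))  # ['red', 'green', 'blue']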
class langchain.output_parsers.DatetimeOutputParser(*, format='%Y-%m-%dT%H:%M:%S.%fZ')[source]ο
Bases: langchain.schema.BaseOutputParser[datetime.datetime]
Parameters
format (str) β
Return type
None
attribute format: str = '%Y-%m-%dT%H:%M:%S.%fZ'ο
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(response)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text β output of language model
response (str) β
Returns
structured output
Return type
datetime.datetime
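A minimal sketch using the default format string:
from langchain.output_parsers import DatetimeOutputParser

parser = DatetimeOutputParser()
print(parser.get_format_instructions())
print(parser.parse("2023-07-04T12:30:00.000000Z"))  # datetime.datetime(2023, 7, 4, 12, 30)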
class langchain.output_parsers.EnumOutputParser(*, enum)[source]ο
Bases: langchain.schema.BaseOutputParser
Parameters
enum (Type[enum.Enum]) β
Return type
None
attribute enum: Type[enum.Enum] [Required]ο
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(response)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text β output of language model
response (str) β
Returns
structured output
Return type
Any
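A minimal sketch; the parser maps the model's text onto a member of the given Enum by value (the Color enum here is illustrative):
from enum import Enum

from langchain.output_parsers import EnumOutputParser

class Color(Enum):
    RED = "red"
    GREEN = "green"
    BLUE = "blue"

parser = EnumOutputParser(enum=Color)
print(parser.parse("red"))  # Color.RED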
class langchain.output_parsers.GuardrailsOutputParser(*, guard=None, api=None, args=None, kwargs=None)[source]ο
Bases: langchain.schema.BaseOutputParser
Parameters
guard (Any) β
api (Optional[Callable]) β
args (Any) β
kwargs (Any) β
Return type
None
attribute api: Optional[Callable] = Noneο
attribute args: Any = Noneο
attribute guard: Any = Noneο
attribute kwargs: Any = Noneο
classmethod from_rail(rail_file, num_reasks=1, api=None, *args, **kwargs)[source]ο
Parameters
rail_file (str) β
num_reasks (int) β
api (Optional[Callable]) β
args (Any) β
kwargs (Any) β
Return type
langchain.output_parsers.rail_parser.GuardrailsOutputParser
classmethod from_rail_string(rail_str, num_reasks=1, api=None, *args, **kwargs)[source]ο
Parameters
rail_str (str) β
num_reasks (int) β
api (Optional[Callable]) β
args (Any) β
kwargs (Any) β
Return type
langchain.output_parsers.rail_parser.GuardrailsOutputParser
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
Dict
class langchain.output_parsers.ListOutputParser[source]ο
Bases: langchain.schema.BaseOutputParser
Class to parse the output of an LLM call to a list.
Return type
None
abstract parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
List[str]
class langchain.output_parsers.OutputFixingParser(*, parser, retry_chain)[source]ο
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]
Wraps a parser and tries to fix parsing errors.
Parameters
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]) β
retry_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] [Required]ο
attribute retry_chain: langchain.chains.llm.LLMChain [Required]ο
classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\n--------------\n{instructions}\n--------------\nCompletion:\n--------------\n{completion}\n--------------\n\nAbove, the Completion did not satisfy the constraints given in the Instructions.\nError:\n--------------\n{error}\n--------------\n\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True))[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]) β
prompt (langchain.prompts.base.BasePromptTemplate) β
Return type
langchain.output_parsers.fix.OutputFixingParser[langchain.output_parsers.fix.T]
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(completion)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text β output of language model
completion (str) β
Returns
structured output
Return type
langchain.output_parsers.fix.T
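A minimal sketch wrapping a PydanticOutputParser (documented below), assuming an OpenAI API key is set; the Actor model is illustrative:
from typing import List

from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import OutputFixingParser, PydanticOutputParser
from pydantic import BaseModel, Field

class Actor(BaseModel):
    name: str = Field(description="name of the actor")
    film_names: List[str] = Field(description="films they starred in")

base_parser = PydanticOutputParser(pydantic_object=Actor)
fixing_parser = OutputFixingParser.from_llm(parser=base_parser, llm=ChatOpenAI())

# Malformed output (single quotes, not valid JSON) is sent back to the LLM for repair.
misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}"
print(fixing_parser.parse(misformatted))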
class langchain.output_parsers.PydanticOutputParser(*, pydantic_object)[source]ο
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.pydantic.T]
Parameters
pydantic_object (Type[langchain.output_parsers.pydantic.T]) β
Return type
None
attribute pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]ο
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
langchain.output_parsers.pydantic.T
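A minimal sketch; the Joke model is illustrative:
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field

class Joke(BaseModel):
    setup: str = Field(description="question that sets up the joke")
    punchline: str = Field(description="answer that resolves the joke")

parser = PydanticOutputParser(pydantic_object=Joke)
print(parser.get_format_instructions())
print(parser.parse('{"setup": "Why did the chicken cross the road?", "punchline": "To get to the other side."}'))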
class langchain.output_parsers.RegexDictParser(*, regex_pattern="{}:\\s?([^.'\\n']*)\\.?", output_key_to_format, no_update_value=None)[source]ο
Bases: langchain.schema.BaseOutputParser
Class to parse the output into a dictionary.
Parameters
regex_pattern (str) β
output_key_to_format (Dict[str, str]) β
no_update_value (Optional[str]) β
Return type
None
attribute no_update_value: Optional[str] = Noneο
attribute output_key_to_format: Dict[str, str] [Required]ο
attribute regex_pattern: str = "{}:\\s?([^.'\\n']*)\\.?"ο
parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
Dict[str, str]
class langchain.output_parsers.RegexParser(*, regex, output_keys, default_output_key=None)[source]ο
Bases: langchain.schema.BaseOutputParser
Class to parse the output into a dictionary.
Parameters
regex (str) β
output_keys (List[str]) β
default_output_key (Optional[str]) β
Return type
None
attribute default_output_key: Optional[str] = Noneο
attribute output_keys: List[str] [Required]ο
attribute regex: str [Required]ο
parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
Dict[str, str]
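A minimal sketch; the regex and output keys are illustrative:
from langchain.output_parsers import RegexParser

parser = RegexParser(
    regex=r"Score: (\d+)\nExplanation: (.*)",
    output_keys=["score", "explanation"],
)
print(parser.parse("Score: 8\nExplanation: Mostly correct."))
# {'score': '8', 'explanation': 'Mostly correct.'}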
class langchain.output_parsers.ResponseSchema(*, name, description, type='string')[source]ο
Bases: pydantic.main.BaseModel
Parameters
name (str) β
description (str) β
type (str) β
Return type
None
attribute description: str [Required]ο
attribute name: str [Required]ο
attribute type: str = 'string'ο
class langchain.output_parsers.RetryOutputParser(*, parser, retry_chain)[source]ο
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt and the completion to another
LLM, and telling it the completion did not satisfy criteria in the prompt.
Parameters
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) β
retry_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]ο
attribute retry_chain: langchain.chains.llm.LLMChain [Required]ο
classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:', template_format='f-string', validate_template=True))[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) β
prompt (langchain.prompts.base.BasePromptTemplate) β
Return type
langchain.output_parsers.retry.RetryOutputParser[langchain.output_parsers.retry.T]
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(completion)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text β output of language model
completion (str) β
Returns
structured output
Return type
langchain.output_parsers.retry.T
parse_with_prompt(completion, prompt_value)[source]ο
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion (str) β output of language model
prompt β prompt value
prompt_value (langchain.schema.PromptValue) β
Returns
structured output
Return type
langchain.output_parsers.retry.T
class langchain.output_parsers.RetryWithErrorOutputParser(*, parser, retry_chain)[source]ο
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt, the completion, AND the error
that was raised to another language model and telling it that the completion
did not work and raised the given error. It differs from RetryOutputParser
in that this implementation provides the error that was raised back to the
LLM, which in theory should give it more information on how to fix it.
Parameters
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) β
retry_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]ο
attribute retry_chain: langchain.chains.llm.LLMChain [Required]ο
classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:', template_format='f-string', validate_template=True))[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) β
prompt (langchain.prompts.base.BasePromptTemplate) β
Return type
langchain.output_parsers.retry.RetryWithErrorOutputParser[langchain.output_parsers.retry.T]
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(completion)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text β output of language model
completion (str) β
Returns
structured output
Return type
langchain.output_parsers.retry.T
parse_with_prompt(completion, prompt_value)[source]ο
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion (str) β output of language model
prompt β prompt value
prompt_value (langchain.schema.PromptValue) β
Returns
structured output
Return type
langchain.output_parsers.retry.T
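A minimal sketch (assuming an OpenAI API key is set); a DatetimeOutputParser is wrapped and a deliberately malformed completion is retried together with the prompt and the parsing error:
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import DatetimeOutputParser, RetryWithErrorOutputParser
from langchain.prompts import PromptTemplate

parser = DatetimeOutputParser()
prompt = PromptTemplate.from_template(
    "Answer the question.\n{question}\n{format_instructions}"
)
prompt_value = prompt.format_prompt(
    question="When did the first moon landing happen?",
    format_instructions=parser.get_format_instructions(),
)

retry_parser = RetryWithErrorOutputParser.from_llm(parser=parser, llm=ChatOpenAI())
# "July 20, 1969" does not match the required datetime format, so the LLM is
# asked to produce a corrected completion before parsing again.
print(retry_parser.parse_with_prompt("July 20, 1969", prompt_value))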
class langchain.output_parsers.StructuredOutputParser(*, response_schemas)[source]ο
Bases: langchain.schema.BaseOutputParser
Parameters
response_schemas (List[langchain.output_parsers.structured.ResponseSchema]) β
Return type
None
attribute response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required]ο
classmethod from_response_schemas(response_schemas)[source]ο
Parameters
response_schemas (List[langchain.output_parsers.structured.ResponseSchema]) β
Return type
langchain.output_parsers.structured.StructuredOutputParser
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
Any
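A minimal sketch; the schemas are illustrative, and the reply shows the fenced JSON format the parser expects:
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to answer the question"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)
print(parser.get_format_instructions())

reply = '```json\n{"answer": "Paris", "source": "a geography textbook"}\n```'
print(parser.parse(reply))  # {'answer': 'Paris', 'source': 'a geography textbook'}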
Toolsο
Core toolkit implementations.
class langchain.tools.AIPluginTool(*, name, description, args_schema=<class 'langchain.tools.plugin.AIPluginToolSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, plugin, api_spec)[source]ο
Bases: langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[langchain.tools.plugin.AIPluginToolSchema]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
plugin (langchain.tools.plugin.AIPlugin) β
api_spec (str) β
Return type
None
attribute api_spec: str [Required]ο
attribute args_schema: Type[AIPluginToolSchema] = <class 'langchain.tools.plugin.AIPluginToolSchema'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute plugin: AIPlugin [Required]ο
classmethod from_plugin_url(url)[source]ο
Parameters
url (str) β
Return type
langchain.tools.plugin.AIPluginTool
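A minimal sketch (assuming the plugin manifest URL below is still reachable; any .well-known/ai-plugin.json manifest works the same way):
from langchain.tools import AIPluginTool

tool = AIPluginTool.from_plugin_url("https://www.klarna.com/.well-known/ai-plugin.json")
print(tool.name)
print(tool.description)
print(tool.run(""))  # returns the plugin's usage guide and OpenAPI spec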
class langchain.tools.APIOperation(*, operation_id, description=None, base_url, path, method, properties, request_body=None)[source]ο
Bases: pydantic.main.BaseModel
A model for a single API operation.
Parameters
operation_id (str) β
description (Optional[str]) β
base_url (str) β
path (str) β
method (langchain.utilities.openapi.HTTPVerb) β
properties (Sequence[langchain.tools.openapi.utils.api_models.APIProperty]) β
request_body (Optional[langchain.tools.openapi.utils.api_models.APIRequestBody]) β
Return type
None
attribute base_url: str [Required]ο
The base URL of the operation.
attribute description: Optional[str] = Noneο
The description of the operation.
attribute method: langchain.utilities.openapi.HTTPVerb [Required]ο
The HTTP method of the operation.
attribute operation_id: str [Required]ο
The unique identifier of the operation.
attribute path: str [Required]ο
The path of the operation.
attribute properties: Sequence[langchain.tools.openapi.utils.api_models.APIProperty] [Required]ο
attribute request_body: Optional[langchain.tools.openapi.utils.api_models.APIRequestBody] = Noneο
The request body of the operation.
classmethod from_openapi_spec(spec, path, method)[source]ο
Create an APIOperation from an OpenAPI spec.
Parameters
spec (langchain.utilities.openapi.OpenAPISpec) β
path (str) β
method (str) β
Return type
langchain.tools.openapi.utils.api_models.APIOperation
classmethod from_openapi_url(spec_url, path, method)[source]ο
Create an APIOperation from an OpenAPI URL.
Parameters
spec_url (str) β
path (str) β
method (str) β
Return type
langchain.tools.openapi.utils.api_models.APIOperation
to_typescript()[source]ο
Get a TypeScript string representation of the operation.
Return type
str
static ts_type_from_python(type_)[source]ο
Parameters
type_ (Union[str, Type, tuple, None, enum.Enum]) β
Return type
str
property body_params: List[str]ο
property path_params: List[str]ο
property query_params: List[str]ο
class langchain.tools.ArxivQueryRun(*, name='arxiv', description='A wrapper around Arxiv.org Useful for when you need to answer questions about Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance, Statistics, Electrical Engineering, and Economics from scientific articles on arxiv.org. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to search using the Arxiv API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.arxiv.ArxivAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.arxiv.ArxivAPIWrapper [Optional]ο
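A minimal sketch (assuming the arxiv python package is installed):
from langchain.tools import ArxivQueryRun

tool = ArxivQueryRun()
print(tool.run("quantum error correction"))  # summaries of matching arXiv papers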
class langchain.tools.AzureCogsFormRecognizerTool(*, name='azure_cognitive_services_form_recognizer', description='A wrapper around Azure Cognitive Services Form Recognizer. Useful for when you need to extract text, tables, and key-value pairs from documents. Input should be a url to a document.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_endpoint='', doc_analysis_client=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the Azure Cognitive Services Form Recognizer API.
In order to set this up, follow instructions at:
https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api?view=form-recog-3.0.0&pivots=programming-language-python
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
azure_cogs_key (str) β
azure_cogs_endpoint (str) β
doc_analysis_client (Any) β
Return type
None
class langchain.tools.AzureCogsImageAnalysisTool(*, name='azure_cognitive_services_image_analysis', description='A wrapper around Azure Cognitive Services Image Analysis. Useful for when you need to analyze images. Input should be a url to an image.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_endpoint='', vision_service=None, analysis_options=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the Azure Cognitive Services Image Analysis API.
In order to set this up, follow instructions at:
https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
azure_cogs_key (str) β
azure_cogs_endpoint (str) β
vision_service (Any) β
analysis_options (Any) β
Return type
None
class langchain.tools.AzureCogsSpeech2TextTool(*, name='azure_cognitive_services_speech2text', description='A wrapper around Azure Cognitive Services Speech2Text. Useful for when you need to transcribe audio to text. Input should be a url to an audio file.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_region='', speech_language='en-US', speech_config=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the Azure Cognitive Services Speech2Text API.
In order to set this up, follow instructions at:
https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-speech-to-text?pivots=programming-language-python
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
azure_cogs_key (str) β
azure_cogs_region (str) β
speech_language (str) β
speech_config (Any) β
Return type
None
class langchain.tools.AzureCogsText2SpeechTool(*, name='azure_cognitive_services_text2speech', description='A wrapper around Azure Cognitive Services Text2Speech. Useful for when you need to convert text to speech. ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_region='', speech_language='en-US', speech_config=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the Azure Cognitive Services Text2Speech API.
In order to set this up, follow instructions at:
https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?pivots=programming-language-python
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
azure_cogs_key (str) β
azure_cogs_region (str) β
speech_language (str) β
speech_config (Any) β
Return type
None
class langchain.tools.BaseGraphQLTool(*, name='query_graphql', description="Input to this tool is a detailed and correct GraphQL query, output is a result from the API.\nIf the query is not correct, an error message will be returned.\nIf an error is returned with 'Bad request' in it, rewrite the query and try again.\nIf an error is returned with 'Unauthorized' in it, do not try again, but tell the user to change their authentication.\n\nExample Input: query {{ allUsers {{ id, name, email }} }}", args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, graphql_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Base tool for querying a GraphQL API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
graphql_wrapper (langchain.utilities.graphql.GraphQLAPIWrapper) β
Return type
None
attribute graphql_wrapper: langchain.utilities.graphql.GraphQLAPIWrapper [Required]ο
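A minimal usage sketch (an assumption-laden illustration, not from the original reference): the endpoint URL is a placeholder, the gql client library is assumed to be installed, and graphql_endpoint is assumed to be the wrapper's field name.
from langchain.utilities.graphql import GraphQLAPIWrapper
from langchain.tools import BaseGraphQLTool
# Placeholder GraphQL endpoint.
wrapper = GraphQLAPIWrapper(graphql_endpoint="https://api.example.com/graphql")
graphql_tool = BaseGraphQLTool(graphql_wrapper=wrapper)
print(graphql_tool.run("query { allUsers { id, name, email } }"))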
class langchain.tools.BaseRequestsTool(*, requests_wrapper)[source]ο
Bases: pydantic.main.BaseModel
Base class for requests tools.
Parameters
requests_wrapper (langchain.requests.TextRequestsWrapper) β
Return type
None
attribute requests_wrapper: langchain.requests.TextRequestsWrapper [Required]ο
class langchain.tools.BaseSQLDatabaseTool(*, db)[source]ο
Bases: pydantic.main.BaseModel
Base tool for interacting with a SQL database.
Parameters
db (langchain.sql_database.SQLDatabase) β
Return type
None
attribute db: langchain.sql_database.SQLDatabase [Required]ο
class langchain.tools.BaseSparkSQLTool(*, db)[source]ο
Bases: pydantic.main.BaseModel
Base tool for interacting with Spark SQL.
Parameters
db (langchain.utilities.spark_sql.SparkSQL) β
Return type
None
attribute db: langchain.utilities.spark_sql.SparkSQL [Required]ο
class langchain.tools.BaseTool(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False)[source]ο
Bases: abc.ABC, pydantic.main.BaseModel
Interface LangChain tools must implement.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
Return type
None
attribute args_schema: Optional[Type[pydantic.main.BaseModel]] = Noneο
Pydantic model class to validate and parse the toolβs input arguments.
attribute callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = Noneο
Deprecated. Please use callbacks instead.
attribute callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = Noneο
Callbacks to be called during tool execution.
attribute description: str [Required]ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = Falseο
Handle the content of the ToolException thrown.
attribute name: str [Required]ο
The unique name of the tool that clearly communicates its purpose.
attribute return_direct: bool = Falseο
Whether to return the toolβs output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
attribute verbose: bool = Falseο
Whether to log the toolβs progress.
async arun(tool_input, verbose=None, start_color='green', color='green', callbacks=None, **kwargs)[source]ο
Run the tool asynchronously.
Parameters
tool_input (Union[str, Dict]) β
verbose (Optional[bool]) β
start_color (Optional[str]) β
color (Optional[str]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Any
run(tool_input, verbose=None, start_color='green', color='green', callbacks=None, **kwargs)[source]ο
Run the tool.
Parameters
tool_input (Union[str, Dict]) β
verbose (Optional[bool]) β
start_color (Optional[str]) β
color (Optional[str]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Any
property args: dictο
property is_single_input: boolο
Whether the tool only accepts a single input.
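A minimal sketch of subclassing BaseTool (WordCountTool is a hypothetical example, not part of the library): override name and description, implement the synchronous _run and asynchronous _arun methods, then invoke the tool through run().
from langchain.tools import BaseTool

class WordCountTool(BaseTool):
    """Hypothetical example tool that counts the words in the input text."""
    name = "word_count"
    description = "Counts the number of words in the input text."

    def _run(self, tool_input: str) -> str:
        # Core synchronous logic of the tool.
        return str(len(tool_input.split()))

    async def _arun(self, tool_input: str) -> str:
        # Simple async variant that delegates to the sync implementation.
        return self._run(tool_input)

word_count = WordCountTool()
print(word_count.run("hello brave new world"))  # -> "4"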
class langchain.tools.BingSearchResults(*, name='Bing Search Results JSON', description='A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, num_results=4, api_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the Bing Search API and returns the results as JSON.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
num_results (int) β
api_wrapper (langchain.utilities.bing_search.BingSearchAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]ο
attribute num_results: int = 4ο
class langchain.tools.BingSearchRun(*, name='bing_search', description='A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query the Bing search API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.bing_search.BingSearchAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]ο
class langchain.tools.BraveSearch(*, name='brave_search', description='a search engine. useful for when you need to answer questions about current events. input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, search_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
search_wrapper (langchain.utilities.brave_search.BraveSearchWrapper) β
Return type
None
attribute search_wrapper: BraveSearchWrapper [Required]ο
classmethod from_api_key(api_key, search_kwargs=None, **kwargs)[source]ο
Parameters
api_key (str) β
search_kwargs (Optional[dict]) β
kwargs (Any) β
Return type
langchain.tools.brave_search.tool.BraveSearch
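A minimal usage sketch (illustrative only): the API key is a placeholder, and the "count" entry in search_kwargs is assumed to be a valid Brave Search API parameter passed through by the wrapper.
from langchain.tools import BraveSearch
# Placeholder Brave Search API key.
brave = BraveSearch.from_api_key(
    api_key="<your-brave-search-api-key>",
    search_kwargs={"count": 3},
)
print(brave.run("latest LangChain release"))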
class langchain.tools.ClickTool(*, name='click_element', description='Click on an element with the given CSS selector', args_schema=<class 'langchain.tools.playwright.click.ClickToolInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None, visible_only=True, playwright_strict=False, playwright_timeout=1000)[source]ο
Bases: langchain.tools.playwright.base.BaseBrowserTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
visible_only (bool) β
playwright_strict (bool) β
playwright_timeout (float) β
Return type
None
attribute args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.click.ClickToolInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Click on an element with the given CSS selector'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'click_element'ο
The unique name of the tool that clearly communicates its purpose.
attribute playwright_strict: bool = Falseο
Whether to employ Playwrightβs strict mode when clicking on elements.
attribute playwright_timeout: float = 1000ο
Timeout (in ms) for Playwright to wait for element to be ready.
attribute visible_only: bool = Trueο
Whether to consider only visible elements.
class langchain.tools.CopyFileTool(*, name='copy_file', description='Create a copy of a file in a specified location', args_schema=<class 'langchain.tools.file_management.copy.FileCopyInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]ο
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
root_dir (Optional[str]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.copy.FileCopyInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Create a copy of a file in a specified location'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'copy_file'ο
The unique name of the tool that clearly communicates its purpose.
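A minimal usage sketch (illustrative only): root_dir sandboxes the tool to a single directory, and the input field names source_path and destination_path are assumed from the FileCopyInput schema; the paths themselves are placeholders.
from langchain.tools import CopyFileTool
# All paths are resolved relative to root_dir.
copy_tool = CopyFileTool(root_dir="/tmp/workspace")
copy_tool.run({"source_path": "notes.txt", "destination_path": "notes_backup.txt"})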
class langchain.tools.CurrentWebPageTool(*, name='current_webpage', description='Returns the URL of the current page', args_schema=<class 'pydantic.main.BaseModel'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]ο
Bases: langchain.tools.playwright.base.BaseBrowserTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
Return type
None
attribute args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Returns the URL of the current page'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'current_webpage'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.DeleteFileTool(*, name='file_delete', description='Delete a file', args_schema=<class 'langchain.tools.file_management.delete.FileDeleteInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]ο
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
root_dir (Optional[str]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.delete.FileDeleteInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Delete a file'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'file_delete'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.DuckDuckGoSearchResults(*, name='DuckDuckGo Results JSON', description='A wrapper around Duck Duck Go Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, num_results=4, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the DuckDuckGo Search API and returns the results as JSON.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
num_results (int) β
api_wrapper (langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]ο
attribute num_results: int = 4ο
class langchain.tools.DuckDuckGoSearchRun(*, name='duckduckgo_search', description='A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query the DuckDuckGo search API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]ο
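A minimal usage sketch (illustrative only): the duckduckgo-search package is assumed to be installed; no API key is needed.
from langchain.tools import DuckDuckGoSearchRun
search = DuckDuckGoSearchRun()
print(search.run("LangChain agents"))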
class langchain.tools.ExtractHyperlinksTool(*, name='extract_hyperlinks', description='Extract all hyperlinks on the current webpage', args_schema=<class 'langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]ο
Bases: langchain.tools.playwright.base.BaseBrowserTool
Extract all hyperlinks on the page.
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
Return type
None
attribute args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Extract all hyperlinks on the current webpage'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'extract_hyperlinks'ο
The unique name of the tool that clearly communicates its purpose.
static scrape_page(page, html_content, absolute_urls)[source]ο
Parameters
page (Any) β
html_content (str) β
absolute_urls (bool) β
Return type
str
class langchain.tools.ExtractTextTool(*, name='extract_text', description='Extract all the text on the current webpage', args_schema=<class 'pydantic.main.BaseModel'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]ο
Bases: langchain.tools.playwright.base.BaseBrowserTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
Return type
None
attribute args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Extract all the text on the current webpage'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'extract_text'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.FileSearchTool(*, name='file_search', description='Recursively search for files in a subdirectory that match the regex pattern', args_schema=<class 'langchain.tools.file_management.file_search.FileSearchInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]ο
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
root_dir (Optional[str]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.file_search.FileSearchInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Recursively search for files in a subdirectory that match the regex pattern'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'file_search'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.GetElementsTool(*, name='get_elements', description='Retrieve elements in the current web page matching the given CSS selector', args_schema=<class 'langchain.tools.playwright.get_elements.GetElementsToolInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]ο
Bases: langchain.tools.playwright.base.BaseBrowserTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
Return type
None
attribute args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.get_elements.GetElementsToolInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Retrieve elements in the current web page matching the given CSS selector'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'get_elements'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.GmailCreateDraft(*, name='create_gmail_draft', description='Use this tool to create a draft email with the provided message fields.', args_schema=<class 'langchain.tools.gmail.create_draft.CreateDraftSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]ο
Bases: langchain.tools.gmail.base.GmailBaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[langchain.tools.gmail.create_draft.CreateDraftSchema]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_resource (Resource) β
Return type
None
attribute args_schema: Type[langchain.tools.gmail.create_draft.CreateDraftSchema] = <class 'langchain.tools.gmail.create_draft.CreateDraftSchema'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Use this tool to create a draft email with the provided message fields.'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'create_gmail_draft'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.GmailGetMessage(*, name='get_gmail_message', description='Use this tool to fetch an email by message ID. Returns the thread ID, snippet, body, subject, and sender.', args_schema=<class 'langchain.tools.gmail.get_message.SearchArgsSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]ο
Bases: langchain.tools.gmail.base.GmailBaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[langchain.tools.gmail.get_message.SearchArgsSchema]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_resource (Resource) β
Return type
None
attribute args_schema: Type[langchain.tools.gmail.get_message.SearchArgsSchema] = <class 'langchain.tools.gmail.get_message.SearchArgsSchema'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Use this tool to fetch an email by message ID. Returns the thread ID, snippet, body, subject, and sender.'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'get_gmail_message'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.GmailGetThread(*, name='get_gmail_thread', description='Use this tool to fetch an entire email thread by its thread ID. The output is a JSON list of the messages in the thread.', args_schema=<class 'langchain.tools.gmail.get_thread.GetThreadSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]ο
Bases: langchain.tools.gmail.base.GmailBaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[langchain.tools.gmail.get_thread.GetThreadSchema]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_resource (Resource) β
Return type
None
attribute args_schema: Type[langchain.tools.gmail.get_thread.GetThreadSchema] = <class 'langchain.tools.gmail.get_thread.GetThreadSchema'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Use this tool to fetch an entire email thread by its thread ID. The output is a JSON list of the messages in the thread.'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'get_gmail_thread'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.GmailSearch(*, name='search_gmail', description='Use this tool to search for email messages or threads. The input must be a valid Gmail query. The output is a JSON list of the requested resource.', args_schema=<class 'langchain.tools.gmail.search.SearchArgsSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]ο
Bases: langchain.tools.gmail.base.GmailBaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[langchain.tools.gmail.search.SearchArgsSchema]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_resource (Resource) β
Return type
None
attribute args_schema: Type[langchain.tools.gmail.search.SearchArgsSchema] = <class 'langchain.tools.gmail.search.SearchArgsSchema'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Use this tool to search for email messages or threads. The input must be a valid Gmail query. The output is a JSON list of the requested resource.'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'search_gmail'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.GmailSendMessage(*, name='send_gmail_message', description='Use this tool to send email messages. The input is the message, recipients', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]ο
Bases: langchain.tools.gmail.base.GmailBaseTool
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_resource (Resource) β
Return type
None
attribute description: str = 'Use this tool to send email messages. The input is the message, recipients'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'send_gmail_message'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.GooglePlacesTool(*, name='google_places', description='A wrapper around Google Places. Useful for when you need to validate or discover addresses from ambiguous text. Input should be a search query.', args_schema=<class 'langchain.tools.google_places.tool.GooglePlacesSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query the Google places API.
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.google_places_api.GooglePlacesAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.google_places_api.GooglePlacesAPIWrapper [Optional]ο
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.google_places.tool.GooglePlacesSchema'>ο
Pydantic model class to validate and parse the toolβs input arguments.
class langchain.tools.GoogleSearchResults(*, name='Google Search Results JSON', description='A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, num_results=4, api_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the Google Search API and returns the results as JSON.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
num_results (int) β
api_wrapper (langchain.utilities.google_search.GoogleSearchAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]ο
attribute num_results: int = 4ο
class langchain.tools.GoogleSearchRun(*, name='google_search', description='A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query the Google search API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.google_search.GoogleSearchAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]ο
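A minimal usage sketch (illustrative only): the credentials are placeholders for the Google Custom Search JSON API, which the GoogleSearchAPIWrapper reads from the environment.
import os
from langchain.tools import GoogleSearchRun
from langchain.utilities import GoogleSearchAPIWrapper
# Placeholder credentials.
os.environ["GOOGLE_API_KEY"] = "<your-google-api-key>"
os.environ["GOOGLE_CSE_ID"] = "<your-custom-search-engine-id>"
search = GoogleSearchRun(api_wrapper=GoogleSearchAPIWrapper())
print(search.run("current weather in Paris"))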
class langchain.tools.GoogleSerperResults(*, name='Google Serper Results JSON', description='A low-cost Google Search API. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON object of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the Serper.dev Google Search API and returns the results as JSON.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.google_serper.GoogleSerperAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Optional]ο
class langchain.tools.GoogleSerperRun(*, name='google_serper', description='A low-cost Google Search API. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query the Serper.dev Google search API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.google_serper.GoogleSerperAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Required]ο
class langchain.tools.HumanInputRun(*, name='human', description='You can ask a human for guidance when you think you got stuck or you are not sure what to do next. The input should be a question for the human.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, prompt_func=None, input_func=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to ask the user for input.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
prompt_func (Callable[[str], None]) β
input_func (Callable) β
Return type
None
attribute input_func: Callable [Optional]ο
attribute prompt_func: Callable[[str], None] [Optional]ο
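A minimal usage sketch (illustrative only): with the default prompt and input functions, run() prints the question and blocks on standard input for the reply.
from langchain.tools import HumanInputRun
human = HumanInputRun()
answer = human.run("Which quarter should the revenue report cover?")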
class langchain.tools.IFTTTWebhook(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, url)[source]ο
Bases: langchain.tools.base.BaseTool
IFTTT Webhook.
Parameters
name (str) β name of the tool
description (str) β description of the tool
url (str) β url to hit with the json event.
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
Return type
None
attribute url: str [Required]ο
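A minimal usage sketch (illustrative only): the event name and key in the webhook URL are placeholders for your own IFTTT "Webhooks" applet, and the tool name and description are hypothetical.
from langchain.tools import IFTTTWebhook
spotify = IFTTTWebhook(
    name="spotify",
    description="Add a song to a Spotify playlist via an IFTTT webhook.",
    url="https://maker.ifttt.com/trigger/<event>/json/with/key/<your-ifttt-key>",
)
spotify.run("taylor swift")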
class langchain.tools.InfoPowerBITool(*, name='schema_powerbi', description='\nInput to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\nBe sure that the tables actually exist by calling list_tables_powerbi first!\n\nExample Input: "table1, table2, table3"\n', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, powerbi)[source]ο
Bases: langchain.tools.base.BaseTool
Tool for getting metadata about a PowerBI Dataset.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
powerbi (langchain.utilities.powerbi.PowerBIDataset) β
Return type
None
attribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]ο
class langchain.tools.InfoSQLDatabaseTool(*, name='sql_db_schema', description='\nInput to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\n\nExample Input: "table1, table2, table3"\n', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]ο
Bases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool
Tool for getting metadata about a SQL database.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
db (langchain.sql_database.SQLDatabase) β
Return type
None
class langchain.tools.InfoSparkSQLTool(*, name='schema_sql_db', description='\nInput to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\nBe sure that the tables actually exist by calling list_tables_sql_db first!\n\nExample Input: "table1, table2, table3"\n', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]ο
Bases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool
Tool for getting metadata about a Spark SQL.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
db (langchain.utilities.spark_sql.SparkSQL) β
Return type
None
class langchain.tools.JiraAction(*, name='', description='', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None, mode)[source]ο
Bases: langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.jira.JiraAPIWrapper) β
mode (str) β
Return type
None
attribute api_wrapper: langchain.utilities.jira.JiraAPIWrapper [Optional]ο
attribute mode: str [Required]ο
class langchain.tools.JsonGetValueTool(*, name='json_spec_get_value', description='\nCan be used to see value in string format at a given path.\nBefore calling this you should be SURE that the path to this exists.\nThe input is a text representation of the path to the dict in Python syntax (e.g. data["key1"][0]["key2"]).\n', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, spec)[source]ο
Bases: langchain.tools.base.BaseTool
Tool for getting a value in a JSON spec.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
spec (langchain.tools.json.tool.JsonSpec) β
Return type
None
attribute spec: JsonSpec [Required]ο
class langchain.tools.JsonListKeysTool(*, name='json_spec_list_keys', description='\nCan be used to list all keys at a given path.\nBefore calling this you should be SURE that the path to this exists.\nThe input is a text representation of the path to the dict in Python syntax (e.g. data["key1"][0]["key2"]).\n', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, spec)[source]ο
Bases: langchain.tools.base.BaseTool
Tool for listing keys in a JSON spec.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
spec (langchain.tools.json.tool.JsonSpec) β
Return type
None
attribute spec: JsonSpec [Required]ο
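A minimal usage sketch (illustrative only): the sample dictionary is hypothetical, and dict_ is assumed to be the field name on the JsonSpec model.
from langchain.tools import JsonGetValueTool, JsonListKeysTool
from langchain.tools.json.tool import JsonSpec
# JsonSpec wraps a plain Python dict.
spec = JsonSpec(dict_={"user": {"name": "Ada", "roles": ["admin", "editor"]}})
list_keys = JsonListKeysTool(spec=spec)
get_value = JsonGetValueTool(spec=spec)
print(list_keys.run('data["user"]'))           # e.g. name, roles
print(get_value.run('data["user"]["name"]'))   # e.g. Ada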
class langchain.tools.ListDirectoryTool(*, name='list_directory', description='List files and directories in a specified folder', args_schema=<class 'langchain.tools.file_management.list_dir.DirectoryListingInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]ο
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
root_dir (Optional[str]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.list_dir.DirectoryListingInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'List files and directories in a specified folder'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'list_directory'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.ListPowerBITool(*, name='list_tables_powerbi', description='Input is an empty string, output is a comma separated list of tables in the database.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, powerbi)[source]ο
Bases: langchain.tools.base.BaseTool
Tool for getting table names.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
powerbi (langchain.utilities.powerbi.PowerBIDataset) β
Return type
None
attribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]ο
class langchain.tools.ListSQLDatabaseTool(*, name='sql_db_list_tables', description='Input is an empty string, output is a comma separated list of tables in the database.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]ο
Bases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool
Tool for getting table names.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
db (langchain.sql_database.SQLDatabase) β
Return type
None
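A minimal usage sketch (illustrative only): the SQLite URI and the "users" table are placeholders; any SQLAlchemy-compatible connection string works.
from langchain.sql_database import SQLDatabase
from langchain.tools import ListSQLDatabaseTool, InfoSQLDatabaseTool
db = SQLDatabase.from_uri("sqlite:///example.db")
list_tables = ListSQLDatabaseTool(db=db)
table_info = InfoSQLDatabaseTool(db=db)
print(list_tables.run(""))        # comma-separated table names
print(table_info.run("users"))    # schema and sample rows for the "users" table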
class langchain.tools.ListSparkSQLTool(*, name='list_tables_sql_db', description='Input is an empty string, output is a comma separated list of tables in the Spark SQL.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]ο
Bases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool
Tool for getting table names.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
db (langchain.utilities.spark_sql.SparkSQL) β
Return type
None
class langchain.tools.MetaphorSearchResults(*, name='metaphor_search_results_json', description='A wrapper around Metaphor Search. Input should be a Metaphor-optimized query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that queries the Metaphor Search API and returns the results as JSON.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper [Required]ο
class langchain.tools.MoveFileTool(*, name='move_file', description='Move or rename a file from one location to another', args_schema=<class 'langchain.tools.file_management.move.FileMoveInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]ο
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
root_dir (Optional[str]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.move.FileMoveInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Move or rename a file from one location to another'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'move_file'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.NavigateBackTool(*, name='previous_webpage', description='Navigate back to the previous page in the browser history', args_schema=<class 'pydantic.main.BaseModel'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]ο
Bases: langchain.tools.playwright.base.BaseBrowserTool
Navigate back to the previous page in the browser history.
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
Return type
None
attribute args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Navigate back to the previous page in the browser history'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'previous_webpage'ο
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.NavigateTool(*, name='navigate_browser', description='Navigate a browser to the specified URL', args_schema=<class 'langchain.tools.playwright.navigate.NavigateToolInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]ο
Bases: langchain.tools.playwright.base.BaseBrowserTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
Return type
None
attribute args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.navigate.NavigateToolInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Navigate a browser to the specified URL'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'navigate_browser'ο
The unique name of the tool that clearly communicates its purpose.
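A minimal usage sketch for the Playwright browser tools (illustrative only): it assumes playwright is installed with its browsers set up, and uses the create_sync_playwright_browser helper from the Playwright tool utilities together with the from_browser constructor; the URL is a placeholder.
from langchain.tools import NavigateTool, CurrentWebPageTool
from langchain.tools.playwright.utils import create_sync_playwright_browser
# Launch a shared synchronous browser for the tools.
sync_browser = create_sync_playwright_browser()
navigate = NavigateTool.from_browser(sync_browser=sync_browser)
current_page = CurrentWebPageTool.from_browser(sync_browser=sync_browser)
navigate.run({"url": "https://python.langchain.com"})
print(current_page.run({}))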
class langchain.tools.OpenAPISpec(*, openapi='3.1.0', info, jsonSchemaDialect=None, servers=[Server(url='/', description=None, variables=None)], paths=None, webhooks=None, components=None, security=None, tags=None, externalDocs=None)[source]ο
Bases: openapi_schema_pydantic.v3.v3_1_0.open_api.OpenAPI
OpenAPI Model that removes misformatted parts of the spec.
Parameters
openapi (str) β
info (openapi_schema_pydantic.v3.v3_1_0.info.Info) β
jsonSchemaDialect (Optional[str]) β
servers (List[openapi_schema_pydantic.v3.v3_1_0.server.Server]) β
paths (Optional[Dict[str, openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem]]) β
webhooks (Optional[Dict[str, Union[openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem, openapi_schema_pydantic.v3.v3_1_0.reference.Reference]]]) β
components (Optional[openapi_schema_pydantic.v3.v3_1_0.components.Components]) β
security (Optional[List[Dict[str, List[str]]]]) β
tags (Optional[List[openapi_schema_pydantic.v3.v3_1_0.tag.Tag]]) β
externalDocs (Optional[openapi_schema_pydantic.v3.v3_1_0.external_documentation.ExternalDocumentation]) β
Return type
None
classmethod from_file(path)[source]ο
Get an OpenAPI spec from a file path.
Parameters
path (Union[str, pathlib.Path]) β
Return type
langchain.utilities.openapi.OpenAPISpec
classmethod from_spec_dict(spec_dict)[source]ο
Get an OpenAPI spec from a dict.
Parameters
spec_dict (dict) β
Return type
langchain.utilities.openapi.OpenAPISpec
classmethod from_text(text)[source]ο
Get an OpenAPI spec from a text.
Parameters
text (str) β
Return type
langchain.utilities.openapi.OpenAPISpec
classmethod from_url(url)[source]ο
Get an OpenAPI spec from a URL.
Parameters
url (str) β
Return type
langchain.utilities.openapi.OpenAPISpec
static get_cleaned_operation_id(operation, path, method)[source]ο
Get a cleaned operation id from an operation.
Parameters
operation (openapi_schema_pydantic.v3.v3_1_0.operation.Operation) β
path (str) β
method (str) β
Return type
str
get_methods_for_path(path)[source]ο
Return a list of valid methods for the specified path.
Parameters
path (str) β
Return type
List[str]
get_operation(path, method)[source]ο
Get the operation object for a given path and HTTP method.
Parameters
path (str) β
method (str) β
Return type
openapi_schema_pydantic.v3.v3_1_0.operation.Operation
get_parameters_for_operation(operation)[source]ο
Get the parameters for a given operation.
Parameters
operation (openapi_schema_pydantic.v3.v3_1_0.operation.Operation) β
Return type
List[openapi_schema_pydantic.v3.v3_1_0.parameter.Parameter]
get_parameters_for_path(path)[source]ο
Parameters
path (str) β
Return type
List[openapi_schema_pydantic.v3.v3_1_0.parameter.Parameter]
get_referenced_schema(ref)[source]ο
Get the schema referenced by ref, following nested references, or raise an error if it cannot be resolved.
Parameters
ref (openapi_schema_pydantic.v3.v3_1_0.reference.Reference) β
Return type
openapi_schema_pydantic.v3.v3_1_0.schema.Schema
get_request_body_for_operation(operation)[source]ο
Get the request body for a given operation.
Parameters
operation (openapi_schema_pydantic.v3.v3_1_0.operation.Operation) β
Return type
Optional[openapi_schema_pydantic.v3.v3_1_0.request_body.RequestBody]
get_schema(schema)[source]ο
Parameters
schema (Union[openapi_schema_pydantic.v3.v3_1_0.reference.Reference, openapi_schema_pydantic.v3.v3_1_0.schema.Schema]) β
Return type
openapi_schema_pydantic.v3.v3_1_0.schema.Schema
classmethod parse_obj(obj)[source]ο
Parameters
obj (dict) β
Return type
langchain.utilities.openapi.OpenAPISpec
property base_url: strο
Get the base url.
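A brief sketch of loading a spec and inspecting it; the URL points to the public Swagger Petstore spec and the path is illustrative only.
Example
from langchain.tools import OpenAPISpec
spec = OpenAPISpec.from_url("https://petstore3.swagger.io/api/v3/openapi.json")
print(spec.base_url)                         # base server URL declared in the spec
print(spec.get_methods_for_path("/pet"))     # e.g. ['post', 'put']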
class langchain.tools.OpenWeatherMapQueryRun(*, name='OpenWeatherMap', description='A wrapper around OpenWeatherMap API. Useful for fetching current weather information for a specified location. Input should be a location string (e.g. London,GB).', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query using the OpenWeatherMap API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper [Optional]ο
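A minimal sketch; it assumes the pyowm package is installed and the OPENWEATHERMAP_API_KEY environment variable is set so the default api_wrapper can be built.
Example
from langchain.tools import OpenWeatherMapQueryRun
tool = OpenWeatherMapQueryRun()  # builds the default OpenWeatherMapAPIWrapper
print(tool.run("London,GB"))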
class langchain.tools.PubmedQueryRun(*, name='PubMed', description='A wrapper around PubMed.org Useful for when you need to answer questions about Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance, Statistics, Electrical Engineering, and Economics from scientific articles on PubMed.org. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to search using the PubMed API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.pupmed.PubMedAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.pupmed.PubMedAPIWrapper [Optional]ο
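A minimal sketch; the default api_wrapper is constructed automatically and the query string is illustrative.
Example
from langchain.tools import PubmedQueryRun
tool = PubmedQueryRun()
print(tool.run("mRNA vaccine efficacy"))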
class langchain.tools.PythonAstREPLTool(*, name='python_repl_ast', description='A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, globals=None, locals=None, sanitize_input=True)[source]ο
Bases: langchain.tools.base.BaseTool
A tool for running python code in a REPL.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
globals (Optional[Dict]) β
locals (Optional[Dict]) β
sanitize_input (bool) β
Return type
None
attribute globals: Optional[Dict] [Optional]ο
attribute locals: Optional[Dict] [Optional]ο
attribute sanitize_input: bool = Trueο
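A minimal sketch showing how the locals mapping can seed the interpreter state; the variable name is illustrative.
Example
from langchain.tools import PythonAstREPLTool
tool = PythonAstREPLTool(locals={"x": 21})
print(tool.run("x * 2"))  # the value of the last expression is returned, here "42"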
class langchain.tools.PythonREPLTool(*, name='Python_REPL', description='A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, python_repl=None, sanitize_input=True)[source]ο
Bases: langchain.tools.base.BaseTool
A tool for running python code in a REPL.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
python_repl (langchain.utilities.python.PythonREPL) β
sanitize_input (bool) β
Return type
None
attribute python_repl: langchain.utilities.python.PythonREPL [Optional]ο
attribute sanitize_input: bool = Trueο
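A minimal sketch; output printed by the command is captured and returned as the tool result.
Example
from langchain.tools import PythonREPLTool
tool = PythonREPLTool()
print(tool.run("print(2 + 2)"))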
class langchain.tools.QueryCheckerTool(*, name='query_checker_sql_db', description='\nΒ Β Β Use this tool to double check if your query is correct before executing it.\nΒ Β Β Always use this tool before executing a query with query_sql_db!\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db, template='\n{query}\nDouble check the Spark SQL query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.', llm, llm_chain)[source]ο
Bases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool
Use an LLM to check if a query is correct.
Adapted from https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
db (langchain.utilities.spark_sql.SparkSQL) β
template (str) β
llm (langchain.base_language.BaseLanguageModel) β
llm_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
attribute template: str = '\n{query}\nDouble check the Spark SQL query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.'ο
class langchain.tools.QueryPowerBITool(*, name='query_powerbi', description='\nΒ Β Β Input to this tool is a detailed question about the dataset, output is a result from the dataset. It will try to answer the question using the dataset, and if it cannot, it will ask for clarification.\n\nΒ Β Β Example Input: "How many rows are in table1?"\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, llm_chain, powerbi, template='\nAnswer the question below with a DAX query that can be sent to Power BI. DAX queries have a simple syntax comprised of just one required keyword, EVALUATE, and several optional keywords: ORDER BY, START AT, DEFINE, MEASURE, VAR, TABLE, and COLUMN. Each keyword defines a statement used for the duration of the query. Any time < or > are used in the text below it means that those values need to be replaced by table, columns or other things. If the question is not something you can answer with a DAX query, reply with "I cannot answer this" and the question will be escalated to a human.\n\nSome DAX functions return a table instead of a scalar, and must be wrapped in a function that evaluates the table and returns a scalar; unless the table is a single column, single row table, then it is treated as a scalar value. Most DAX functions require one or more arguments, which can include tables, columns, expressions, and values. However, some functions, such as PI, do not require any arguments, but always require parentheses to indicate the null argument. For example, you must always type PI(), not PI. You can also nest functions within other functions. \n\nSome commonly used functions are:\nEVALUATE <table> - At the most basic level, a DAX query is an EVALUATE statement containing a table expression. At least one EVALUATE statement is required, however, a query can contain any number of EVALUATE statements.\nEVALUATE <table> ORDER BY <expression> ASC or DESC - The optional ORDER BY keyword defines one or more expressions used to sort query results. Any expression that can be evaluated for each row of the result is valid.\nEVALUATE <table> ORDER BY <expression> ASC or DESC START AT <value> or <parameter> - The optional START AT keyword is used inside an ORDER BY clause. It defines the value at which the query results begin.\nDEFINE MEASURE | VAR; EVALUATE <table> - The optional DEFINE keyword introduces one or more calculated entity definitions that exist only for the duration of the query. Definitions precede the EVALUATE statement and are valid for all EVALUATE statements in the query. Definitions can be variables, measures, tables1, and columns1. Definitions can reference other definitions that appear before or after the current definition. At least one definition is required if the DEFINE keyword is included in a query.\nMEASURE <table name>[<measure name>] = <scalar expression> - Introduces a measure definition in a DEFINE statement of a DAX query.\nVAR <name> = <expression> - Stores the result of an expression as a named variable, which can then be passed as an argument to other measure expressions. Once resultant values have been calculated for a variable expression, those values do not change, even if the variable is referenced in another expression.\n\nFILTER(<table>,<filter>) - Returns a table that represents a subset of another table or expression, where <filter> is a Boolean expression that is to be evaluated for each row of the table. 
For example, [Amount] > 0 or [Region] = "France"\nROW(<name>, <expression>) - Returns a table with a single row containing values that result from the expressions given to each column.\nDISTINCT(<column>) - Returns a one-column table that contains the distinct values from the specified column. In other words, duplicate values are removed and only unique values are returned. This function cannot be used to Return values into a cell or column on a worksheet; rather, you nest the DISTINCT function within a formula, to get a list of distinct values that can be passed to another function and then counted, summed, or used for other operations.\nDISTINCT(<table>) - Returns a table by removing duplicate rows from another table or expression.\n\nAggregation functions, names with a A in it, handle booleans and empty strings in appropriate ways, while the same function without A only uses the numeric values in a column. Functions names with an X in it can include a expression as an argument, this will be evaluated for each row in the table and the result will be used in the regular function calculation, these are the functions:\nCOUNT(<column>), COUNTA(<column>), COUNTX(<table>,<expression>), COUNTAX(<table>,<expression>), COUNTROWS([<table>]), COUNTBLANK(<column>), DISTINCTCOUNT(<column>), DISTINCTCOUNTNOBLANK (<column>) - these are all variantions of count functions.\nAVERAGE(<column>), AVERAGEA(<column>), AVERAGEX(<table>,<expression>) - these are all variantions of average functions.\nMAX(<column>), MAXA(<column>), MAXX(<table>,<expression>) - these are all variantions of max functions.\nMIN(<column>), MINA(<column>), MINX(<table>,<expression>) - these are all variantions of min functions.\nPRODUCT(<column>), PRODUCTX(<table>,<expression>) - these are all variantions of product functions.\nSUM(<column>), SUMX(<table>,<expression>) - these are all variantions of sum functions.\n\nDate and time functions:\nDATE(year, month, day) - Returns a date value that represents the specified year, month, and day.\nDATEDIFF(date1, date2, <interval>) - Returns the difference between two date values, in the specified interval, that can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR.\nDATEVALUE(<date_text>) - Returns a date value that represents the specified date.\nYEAR(<date>), QUARTER(<date>), MONTH(<date>), DAY(<date>), HOUR(<date>), MINUTE(<date>), SECOND(<date>) - Returns the part of the date for the specified date.\n\nFinally, make sure to escape double quotes with a single backslash, and make sure that only table names have single quotes around them, while names of measures or the values of columns that you want to compare against are in escaped double quotes. Newlines are not necessary and can be skipped. The queries are serialized as json and so will have to fit be compliant with json syntax. 
Sometimes you will get a question, a DAX query and a error, in that case you need to rewrite the DAX query to get the correct answer.\n\nThe following tables exist: {tables}\n\nand the schema\'s for some are given here:\n{schemas}\n\nExamples:\n{examples}\n\nQuestion: {tool_input}\nDAX: \n', examples='\nQuestion: How many rows are in the table <table>?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(<table>))\n----\nQuestion: How many rows are in the table <table> where <column> is not empty?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(FILTER(<table>, <table>[<column>] <> "")))\n----\nQuestion: What was the average of <column> in <table>?\nDAX: EVALUATE ROW("Average", AVERAGE(<table>[<column>]))\n----\n', session_cache=None, max_iterations=5)[source]ο
Bases: langchain.tools.base.BaseTool
Tool for querying a Power BI Dataset.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
llm_chain (langchain.chains.llm.LLMChain) β
powerbi (langchain.utilities.powerbi.PowerBIDataset) β
template (Optional[str]) β
examples (Optional[str]) β
session_cache (Dict[str, Any]) β
max_iterations (int) β
Return type
None
attribute examples: Optional[str] = '\nQuestion: How many rows are in the table <table>?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(<table>))\n----\nQuestion: How many rows are in the table <table> where <column> is not empty?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(FILTER(<table>, <table>[<column>] <> "")))\n----\nQuestion: What was the average of <column> in <table>?\nDAX: EVALUATE ROW("Average", AVERAGE(<table>[<column>]))\n----\n'ο
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
attribute max_iterations: int = 5ο
attribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]ο
attribute session_cache: Dict[str, Any] [Optional]ο
attribute template: Optional[str] = '\nAnswer the question below with a DAX query that can be sent to Power BI. DAX queries have a simple syntax comprised of just one required keyword, EVALUATE, and several optional keywords: ORDER BY, START AT, DEFINE, MEASURE, VAR, TABLE, and COLUMN. Each keyword defines a statement used for the duration of the query. Any time < or > are used in the text below it means that those values need to be replaced by table, columns or other things. If the question is not something you can answer with a DAX query, reply with "I cannot answer this" and the question will be escalated to a human.\n\nSome DAX functions return a table instead of a scalar, and must be wrapped in a function that evaluates the table and returns a scalar; unless the table is a single column, single row table, then it is treated as a scalar value. Most DAX functions require one or more arguments, which can include tables, columns, expressions, and values. However, some functions, such as PI, do not require any arguments, but always require parentheses to indicate the null argument. For example, you must always type PI(), not PI. You can also nest functions within other functions. \n\nSome commonly used functions are:\nEVALUATE <table> - At the most basic level, a DAX query is an EVALUATE statement containing a table expression. At least one EVALUATE statement is required, however, a query can contain any number of EVALUATE statements.\nEVALUATE <table> ORDER BY <expression> ASC or DESC - The optional ORDER BY keyword defines one or more expressions used to sort query results. Any expression that can be evaluated for each row of the result is valid.\nEVALUATE <table> ORDER BY <expression> ASC or DESC START AT <value> or <parameter> - The optional START AT keyword is used inside an ORDER BY clause. It defines the value at which the query results begin.\nDEFINE MEASURE | VAR; EVALUATE <table> - The optional DEFINE keyword introduces one or more calculated entity definitions that exist only for the duration of the query. Definitions precede the EVALUATE statement and are valid for all EVALUATE statements in the query. Definitions can be variables, measures, tables1, and columns1. Definitions can reference other definitions that appear before or after the current definition. At least one definition is required if the DEFINE keyword is included in a query.\nMEASURE <table name>[<measure name>] = <scalar expression> - Introduces a measure definition in a DEFINE statement of a DAX query.\nVAR <name> = <expression> - Stores the result of an expression as a named variable, which can then be passed as an argument to other measure expressions. Once resultant values have been calculated for a variable expression, those values do not change, even if the variable is referenced in another expression.\n\nFILTER(<table>,<filter>) - Returns a table that represents a subset of another table or expression, where <filter> is a Boolean expression that is to be evaluated for each row of the table. For example, [Amount] > 0 or [Region] = "France"\nROW(<name>, <expression>) - Returns a table with a single row containing values that result from the expressions given to each column.\nDISTINCT(<column>) - Returns a one-column table that contains the distinct values from the specified column. In other words, duplicate values are removed and only unique values are returned. 
This function cannot be used to Return values into a cell or column on a worksheet; rather, you nest the DISTINCT function within a formula, to get a list of distinct values that can be passed to another function and then counted, summed, or used for other operations.\nDISTINCT(<table>) - Returns a table by removing duplicate rows from another table or expression.\n\nAggregation functions, names with a A in it, handle booleans and empty strings in appropriate ways, while the same function without A only uses the numeric values in a column. Functions names with an X in it can include a expression as an argument, this will be evaluated for each row in the table and the result will be used in the regular function calculation, these are the functions:\nCOUNT(<column>), COUNTA(<column>), COUNTX(<table>,<expression>), COUNTAX(<table>,<expression>), COUNTROWS([<table>]), COUNTBLANK(<column>), DISTINCTCOUNT(<column>), DISTINCTCOUNTNOBLANK (<column>) - these are all variantions of count functions.\nAVERAGE(<column>), AVERAGEA(<column>), AVERAGEX(<table>,<expression>) - these are all variantions of average functions.\nMAX(<column>), MAXA(<column>), MAXX(<table>,<expression>) - these are all variantions of max functions.\nMIN(<column>), MINA(<column>), MINX(<table>,<expression>) - these are all variantions of min functions.\nPRODUCT(<column>), PRODUCTX(<table>,<expression>) - these are all variantions of product functions.\nSUM(<column>), SUMX(<table>,<expression>) - these are all variantions of sum functions.\n\nDate and time functions:\nDATE(year, month, day) - Returns a date value that represents the specified year, month, and day.\nDATEDIFF(date1, date2, <interval>) - Returns the difference between two date values, in the specified interval, that can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR.\nDATEVALUE(<date_text>) - Returns a date value that represents the specified date.\nYEAR(<date>), QUARTER(<date>), MONTH(<date>), DAY(<date>), HOUR(<date>), MINUTE(<date>), SECOND(<date>) - Returns the part of the date for the specified date.\n\nFinally, make sure to escape double quotes with a single backslash, and make sure that only table names have single quotes around them, while names of measures or the values of columns that you want to compare against are in escaped double quotes. Newlines are not necessary and can be skipped. The queries are serialized as json and so will have to fit be compliant with json syntax. Sometimes you will get a question, a DAX query and a error, in that case you need to rewrite the DAX query to get the correct answer.\n\nThe following tables exist: {tables}\n\nand the schema\'s for some are given here:\n{schemas}\n\nExamples:\n{examples}\n\nQuestion: {tool_input}\nDAX: \n'ο
class langchain.tools.QuerySQLCheckerTool(*, name='sql_db_query_checker', description='\nΒ Β Β Use this tool to double check if your query is correct before executing it.\nΒ Β Β Always use this tool before executing a query with query_sql_db!\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db, template='\n{query}\nDouble check the {dialect} query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.', llm, llm_chain)[source]ο
Bases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool
Use an LLM to check if a query is correct.
Adapted from https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
db (langchain.sql_database.SQLDatabase) β
template (str) β
llm (langchain.base_language.BaseLanguageModel) β
llm_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
attribute template: str = '\n{query}\nDouble check the {dialect} query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.'ο
class langchain.tools.QuerySQLDataBaseTool(*, name='sql_db_query', description='\nΒ Β Β Input to this tool is a detailed and correct SQL query, output is a result from the database.\nΒ Β Β If the query is not correct, an error message will be returned.\nΒ Β Β If an error is returned, rewrite the query, check the query, and try again.\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]ο
Bases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool
Tool for querying a SQL database.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
db (langchain.sql_database.SQLDatabase) β
Return type
None
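A minimal sketch; the SQLite file and table name are hypothetical placeholders.
Example
from langchain.sql_database import SQLDatabase
from langchain.tools import QuerySQLDataBaseTool
db = SQLDatabase.from_uri("sqlite:///example.db")  # hypothetical local database
tool = QuerySQLDataBaseTool(db=db)
print(tool.run("SELECT COUNT(*) FROM users;"))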
class langchain.tools.QuerySparkSQLTool(*, name='query_sql_db', description='\nΒ Β Β Input to this tool is a detailed and correct SQL query, output is a result from the Spark SQL.\nΒ Β Β If the query is not correct, an error message will be returned.\nΒ Β Β If an error is returned, rewrite the query, check the query, and try again.\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]ο
Bases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool
Tool for querying Spark SQL.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
db (langchain.utilities.spark_sql.SparkSQL) β
Return type
None
class langchain.tools.ReadFileTool(*, name='read_file', description='Read file from disk', args_schema=<class 'langchain.tools.file_management.read.ReadFileInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]ο
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
root_dir (Optional[str]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.read.ReadFileInput'>ο
Pydantic model class to validate and parse the tool's input arguments.
attribute description: str = 'Read file from disk'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'read_file'ο
The unique name of the tool that clearly communicates its purpose.
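A minimal sketch; root_dir restricts file access to the given directory, and the directory and file name are illustrative.
Example
from langchain.tools import ReadFileTool
tool = ReadFileTool(root_dir="/tmp")
print(tool.run({"file_path": "notes.txt"}))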
class langchain.tools.RequestsDeleteTool(*, name='requests_delete', description='A portal to the internet. Use this when you need to make a DELETE request to a URL. Input should be a specific url, and the output will be the text response of the DELETE request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]ο
Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool
Tool for making a DELETE request to an API endpoint.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
Return type
None
class langchain.tools.RequestsGetTool(*, name='requests_get', description='A portal to the internet. Use this when you need to get specific content from a website. Input should be a url (i.e. https://www.google.com). The output will be the text response of the GET request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]ο
Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool
Tool for making a GET request to an API endpoint.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
Return type
None
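A minimal sketch using a plain TextRequestsWrapper; the URL is illustrative.
Example
from langchain.requests import TextRequestsWrapper
from langchain.tools import RequestsGetTool
tool = RequestsGetTool(requests_wrapper=TextRequestsWrapper())
print(tool.run("https://www.example.com"))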
class langchain.tools.RequestsPatchTool(*, name='requests_patch', description='Use this when you want to PATCH to a website.\nΒ Β Β Input should be a json string with two keys: "url" and "data".\nΒ Β Β The value of "url" should be a string, and the value of "data" should be a dictionary of \nΒ Β Β key-value pairs you want to PATCH to the url.\nΒ Β Β Be careful to always use double quotes for strings in the json string\nΒ Β Β The output will be the text response of the PATCH request.\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]ο
Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool
Tool for making a PATCH request to an API endpoint.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
Return type
None
class langchain.tools.RequestsPostTool(*, name='requests_post', description='Use this when you want to POST to a website.\nΒ Β Β Input should be a json string with two keys: "url" and "data".\nΒ Β Β The value of "url" should be a string, and the value of "data" should be a dictionary of \nΒ Β Β key-value pairs you want to POST to the url.\nΒ Β Β Be careful to always use double quotes for strings in the json string\nΒ Β Β The output will be the text response of the POST request.\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]ο
Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool
Tool for making a POST request to an API endpoint.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
Return type
None
class langchain.tools.RequestsPutTool(*, name='requests_put', description='Use this when you want to PUT to a website.\nΒ Β Β Input should be a json string with two keys: "url" and "data".\nΒ Β Β The value of "url" should be a string, and the value of "data" should be a dictionary of \nΒ Β Β key-value pairs you want to PUT to the url.\nΒ Β Β Be careful to always use double quotes for strings in the json string.\nΒ Β Β The output will be the text response of the PUT request.\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]ο
Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool
Tool for making a PUT request to an API endpoint.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
Return type
None
class langchain.tools.SceneXplainTool(*, name='image_explainer', description='An Image Captioning Tool: Use this tool to generate a detailed caption for an image. The input can be an image file of any format, and the output will be a text description that covers every detail of the image.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to explain images.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.scenexplain.SceneXplainAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.scenexplain.SceneXplainAPIWrapper [Optional]ο
class langchain.tools.SearxSearchResults(*, name='Searx Search Results', description='A meta search engine.Useful for when you need to answer questions about current events.Input should be a search query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, wrapper, num_results=4, kwargs=None, **extra_data)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that has the capability to query a Searx instance and get back json.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
wrapper (langchain.utilities.searx_search.SearxSearchWrapper) β
num_results (int) β
kwargs (dict) β
extra_data (Any) β
Return type
None
attribute kwargs: dict [Optional]ο
attribute num_results: int = 4ο
attribute wrapper: langchain.utilities.searx_search.SearxSearchWrapper [Required]ο
class langchain.tools.SearxSearchRun(*, name='searx_search', description='A meta search engine.Useful for when you need to answer questions about current events.Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, wrapper, kwargs=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query a Searx instance.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
wrapper (langchain.utilities.searx_search.SearxSearchWrapper) β
kwargs (dict) β
Return type
None
attribute kwargs: dict [Optional]ο
attribute wrapper: langchain.utilities.searx_search.SearxSearchWrapper [Required]ο
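A minimal sketch; searx_host points to a hypothetical self-hosted Searx instance.
Example
from langchain.utilities.searx_search import SearxSearchWrapper
from langchain.tools import SearxSearchRun
wrapper = SearxSearchWrapper(searx_host="http://localhost:8888")
tool = SearxSearchRun(wrapper=wrapper)
print(tool.run("large language models"))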
class langchain.tools.ShellTool(*, name='terminal', description='Run shell commands on this Linux machine.', args_schema=<class 'langchain.tools.shell.tool.ShellInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, process=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool to run shell commands.
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
process (langchain.utilities.bash.BashProcess) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.shell.tool.ShellInput'>ο
Schema for input arguments.
attribute description: str = 'Run shell commands on this Linux machine.'ο
Description of tool.
attribute name: str = 'terminal'ο
Name of tool.
attribute process: langchain.utilities.bash.BashProcess [Optional]ο
Bash process to run commands.
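A minimal sketch; commands are executed on the local machine, so use with care.
Example
from langchain.tools import ShellTool
tool = ShellTool()
print(tool.run({"commands": ["echo 'Hello World'", "uname -a"]}))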
class langchain.tools.SleepTool(*, name='sleep', description='Make agent sleep for a specified number of seconds.', args_schema=<class 'langchain.tools.sleep.tool.SleepInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to sleep.
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.sleep.tool.SleepInput'>ο
Pydantic model class to validate and parse the tool's input arguments.
langchain.tools.StdInInquireTool(*args, **kwargs)[source]ο
Tool for asking the user for input.
Parameters
args (Any) β
kwargs (Any) β
Return type
langchain.tools.human.tool.HumanInputRun
class langchain.tools.SteamshipImageGenerationTool(*, name='GenerateImage', description='Useful for when you need to generate an image.Input: A detailed text-2-image prompt describing an imageOutput: the UUID of a generated image', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, model_name, size='512x512', steamship, return_urls=False)[source]ο
Bases: langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
model_name (langchain.tools.steamship_image_generation.tool.ModelName) β
size (Optional[str]) β
steamship (Steamship) β
return_urls (Optional[bool]) β
Return type
None
attribute model_name: ModelName [Required]ο
attribute return_urls: Optional[bool] = Falseο
attribute size: Optional[str] = '512x512'ο
attribute steamship: Steamship [Required]ο
class langchain.tools.StructuredTool(*, name, description='', args_schema, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, func, coroutine=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that can operate on any number of inputs.
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
func (Callable[[...], Any]) β
coroutine (Optional[Callable[[...], Awaitable[Any]]]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] [Required]ο
The input arguments' schema.
The tool schema.
attribute coroutine: Optional[Callable[[...], Awaitable[Any]]] = Noneο
The asynchronous version of the function.
attribute description: str = ''ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute func: Callable[[...], Any] [Required]ο
The function to run when the tool is called.
classmethod from_function(func, name=None, description=None, return_direct=False, args_schema=None, infer_schema=True, **kwargs)[source]ο
Create tool from a given function.
A classmethod that helps to create a tool from a function.
Parameters
func (Callable) β The function from which to create a tool
name (Optional[str]) β The name of the tool. Defaults to the function name
description (Optional[str]) β The description of the tool. Defaults to the function docstring
return_direct (bool) β Whether to return the result directly or as a callback
args_schema (Optional[Type[pydantic.main.BaseModel]]) β The schema of the tool's input arguments
infer_schema (bool) β Whether to infer the schema from the function's signature
**kwargs β Additional arguments to pass to the tool
kwargs (Any) β
Returns
The tool
Return type
langchain.tools.base.StructuredTool
Examples
from langchain.tools import StructuredTool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b
tool = StructuredTool.from_function(add)
tool.run({"a": 1, "b": 2})  # returns 3
property args: dictο
The tool's input arguments.
class langchain.tools.Tool(name, func, description, *, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, coroutine=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that takes in a function or coroutine directly.
Parameters
name (str) β
func (Callable[[...], str]) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
coroutine (Optional[Callable[[...], Awaitable[str]]]) β
Return type
None
attribute args_schema: Optional[Type[pydantic.main.BaseModel]] = Noneο
Pydantic model class to validate and parse the tool's input arguments.
attribute callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = Noneο
Deprecated. Please use callbacks instead.
attribute callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = Noneο
Callbacks to be called during tool execution.
attribute coroutine: Optional[Callable[[...], Awaitable[str]]] = Noneο
The asynchronous version of the function.
attribute description: str = ''ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute func: Callable[[...], str] [Required]ο
The function to run when the tool is called.
attribute handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = Falseο
Handle the content of the ToolException thrown.
attribute name: str [Required]ο
The unique name of the tool that clearly communicates its purpose.
attribute return_direct: bool = Falseο
Whether to return the tool's output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
attribute verbose: bool = Falseο
Whether to log the tool's progress.
classmethod from_function(func, name, description, return_direct=False, args_schema=None, **kwargs)[source]ο
Initialize tool from a function.
Parameters
func (Callable) β
name (str) β
description (str) β
return_direct (bool) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
kwargs (Any) β
Return type
langchain.tools.base.Tool
property args: dictο
The tool's input arguments.
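A minimal sketch wrapping a plain Python function; the function name, behaviour, and description are hypothetical.
Example
from langchain.tools import Tool
def search_notes(query: str) -> str:
    """Hypothetical helper that looks up notes matching the query."""
    return f"No notes found for: {query}"
tool = Tool(
    name="note_search",
    func=search_notes,
    description="Searches the user's notes. Input should be a search query.",
)
print(tool.run("project deadlines"))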
class langchain.tools.VectorStoreQATool(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, vectorstore, llm=None)[source]ο
Bases: langchain.tools.vectorstore.tool.BaseVectorStoreTool, langchain.tools.base.BaseTool
Tool for the VectorDBQA chain. To be initialized with name and chain.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
vectorstore (langchain.vectorstores.base.VectorStore) β
llm (langchain.base_language.BaseLanguageModel) β
Return type
None
static get_description(name, description)[source]ο
Parameters
name (str) β
description (str) β
Return type
str
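A minimal sketch; it assumes an OpenAI API key is configured and the faiss package is installed, and the document text and tool name are placeholders.
Example
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS
from langchain.tools import VectorStoreQATool
vectorstore = FAISS.from_texts(
    ["Employees receive 25 vacation days per year."], OpenAIEmbeddings()
)
tool = VectorStoreQATool(
    name="company_docs",
    description=VectorStoreQATool.get_description(
        "company_docs", "internal company documentation"
    ),
    vectorstore=vectorstore,
    llm=OpenAI(temperature=0),
)
print(tool.run("How many vacation days do employees get?"))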
class langchain.tools.VectorStoreQAWithSourcesTool(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, vectorstore, llm=None)[source]ο
Bases: langchain.tools.vectorstore.tool.BaseVectorStoreTool, langchain.tools.base.BaseTool
Tool for the VectorDBQAWithSources chain.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
vectorstore (langchain.vectorstores.base.VectorStore) β
llm (langchain.base_language.BaseLanguageModel) β
Return type
None
static get_description(name, description)[source]ο
Parameters
name (str) β
description (str) β
Return type
str
class langchain.tools.WikipediaQueryRun(*, name='Wikipedia', description='A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to search using the Wikipedia API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.wikipedia.WikipediaAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.wikipedia.WikipediaAPIWrapper [Required]ο
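A minimal sketch; it assumes the wikipedia package is installed for the default API wrapper.
Example
from langchain.utilities import WikipediaAPIWrapper
from langchain.tools import WikipediaQueryRun
tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
print(tool.run("Alan Turing"))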
class langchain.tools.WolframAlphaQueryRun(*, name='wolfram_alpha', description='A wrapper around Wolfram Alpha. Useful for when you need to answer questions about Math, Science, Technology, Culture, Society and Everyday Life. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query using the Wolfram Alpha SDK.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper [Required]ο
class langchain.tools.WriteFileTool(*, name='write_file', description='Write file to disk', args_schema=<class 'langchain.tools.file_management.write.WriteFileInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]ο
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
root_dir (Optional[str]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.write.WriteFileInput'>ο
Pydantic model class to validate and parse the tool's input arguments.
attribute description: str = 'Write file to disk'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'write_file'ο
The unique name of the tool that clearly communicates its purpose.
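A minimal sketch; root_dir scopes writes to the given directory, and the file name and text are illustrative.
Example
from langchain.tools import WriteFileTool
tool = WriteFileTool(root_dir="/tmp")
print(tool.run({"file_path": "notes.txt", "text": "Remember to review the report."}))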
class langchain.tools.YouTubeSearchTool(*, name='youtube_search', description='search for youtube videos associated with a person. the input to this tool should be a comma separated list, the first part contains a person name and the second a number that is the maximum number of video results to return aka num_results. the second part is optional', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False)[source]ο
Bases: langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
Return type
None
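A minimal sketch; it assumes the youtube_search package is installed, and the person name is illustrative. The input is "person name,max results".
Example
from langchain.tools import YouTubeSearchTool
tool = YouTubeSearchTool()
print(tool.run("lex fridman,5"))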
class langchain.tools.ZapierNLAListActions(*, name='ZapierNLA_list_actions', description='A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}This tool returns a list of the user\'s exposed actions.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Returns a list of all exposed (enabled) actions associated with the current user (associated with the set api_key). Change your exposed
actions here: https://nla.zapier.com/demo/start/
The returned list can be empty if no actions are exposed. Otherwise it will contain
a list of action objects:
[{'id': str,
'description': str,
'params': Dict[str, str]
}]
params will always contain an instructions key, the only required
param. All others are optional and, if provided, will override any AI guesses
(see 'understanding the AI guessing flow' here:
https://nla.zapier.com/api/v1/docs)
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]ο
class langchain.tools.ZapierNLARunAction(*, name='', description='', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None, action_id, params=None, base_prompt='A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}', zapier_description, params_schema=None)[source]ο
Bases: langchain.tools.base.BaseTool
Executes an action that is identified by action_id and must be exposed
(enabled) by the current user (associated with the set api_key). Change
your exposed actions here: https://nla.zapier.com/demo/start/
The returned JSON is guaranteed to be less than ~500 words (350
tokens), making it safe to inject into the prompt of another LLM
call.
Parameters
action_id (str) β a specific action ID (from list actions) of the action to execute
(the set api_key must be associated with the action owner)
instructions β a natural language instruction string for using the action
(e.g. "get the latest email from Mike Knoop" for the "Gmail: find email" action)
params (Optional[dict]) β a dict, optional. Any params provided will override AI guesses
from instructions (see "understanding the AI guessing flow" here:
https://nla.zapier.com/api/v1/docs)
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) β
base_prompt (str) β
zapier_description (str) β
params_schema (Dict[str, str]) β
Return type
None
attribute action_id: str [Required]ο
attribute api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]ο
attribute base_prompt: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}'ο
attribute params: Optional[dict] = Noneο
attribute params_schema: Dict[str, str] [Optional]ο
attribute zapier_description: str [Required]ο
langchain.tools.format_tool_to_openai_function(tool)[source]ο
Format tool into the OpenAI function API.
Parameters
tool (langchain.tools.base.BaseTool) β
Return type
langchain.tools.convert_to_openai.FunctionDescription
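Example
A minimal sketch of converting a decorated tool into an OpenAI function definition; the multiply tool below is purely illustrative:
from langchain.tools import format_tool_to_openai_function, tool

@tool
def multiply(query: str) -> str:
    """Multiply the two space-separated integers in the query."""
    a, b = query.split()
    return str(int(a) * int(b))

function_description = format_tool_to_openai_function(multiply)
# Describes the tool's name, description and argument schema in the shape
# expected by the OpenAI function-calling API.
print(function_description)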
langchain.tools.tool(*args, return_direct=False, args_schema=None, infer_schema=True)[source]ο
Make tools out of functions; can be used with or without arguments.
Parameters
*args β The arguments to the tool.
return_direct (bool) β Whether to return directly from the tool rather
than continuing the agent loop.
args_schema (Optional[Type[pydantic.main.BaseModel]]) β optional argument schema for user to specify
infer_schema (bool) β Whether to infer the schema of the arguments from
the functionβs signature. This also makes the resultant tool
accept a dictionary input to its run() function.
args (Union[str, Callable]) β
Return type
Callable
Requires:
Function must be of type (str) -> str
Function must have a docstring
Examples
@tool
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return "results"

@tool("search", return_direct=True)
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return "results"
Callbacksο
Callback handlers that allow listening to events in LangChain.
class langchain.callbacks.AimCallbackHandler(repo=None, experiment_name=None, system_tracking_interval=10, log_system_params=True)[source]ο
Bases: langchain.callbacks.aim_callback.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs to Aim.
Parameters
repo (str, optional) β Aim repository path or Repo object to which
Run object is bound. If skipped, default Repo is used.
experiment_name (str, optional) β Sets Run's experiment property.
'default' if not specified. Can be used later to query runs/sequences.
system_tracking_interval (int, optional) β Sets the tracking interval
in seconds for system usage metrics (CPU, Memory, etc.). Set to None
to disable system metrics tracking.
log_system_params (bool, optional) β Enable/Disable logging of system
params such as installed packages, git info, environment variables, etc.
Return type
None
This handler will utilize the associated callback method, format
the input of each callback function with metadata regarding the state of the LLM run,
and then log the response to Aim.
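Example
A minimal usage sketch; the repository path, experiment name, and prompt are placeholders:
from langchain.callbacks import AimCallbackHandler
from langchain.llms import OpenAI

aim_callback = AimCallbackHandler(repo=".", experiment_name="scenario 1: OpenAI LLM")
llm = OpenAI(temperature=0, callbacks=[aim_callback])
llm("Tell me a joke")
# Persist the collected records and close the Aim run.
aim_callback.flush_tracker(langchain_asset=llm, finish=True)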
setup(**kwargs)[source]ο
Parameters
kwargs (Any) β
Return type
None
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Run when LLM generates a new token.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Run when LLM errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Run when chain errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_tool_end(output, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run when agent is ending.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run when agent ends running.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
flush_tracker(repo=None, experiment_name=None, system_tracking_interval=10, log_system_params=True, langchain_asset=None, reset=True, finish=False)[source]ο
Flush the tracker and reset the session.
Parameters
repo (str, optional) β Aim repository path or Repo object to which
Run object is bound. If skipped, default Repo is used.
experiment_name (str, optional) β Sets Run's experiment property.
'default' if not specified. Can be used later to query runs/sequences.
system_tracking_interval (int, optional) β Sets the tracking interval
in seconds for system usage metrics (CPU, Memory, etc.). Set to None
to disable system metrics tracking.
log_system_params (bool, optional) β Enable/Disable logging of system
params such as installed packages, git info, environment variables, etc.
langchain_asset (Any) β The langchain asset to save.
reset (bool) β Whether to reset the session.
finish (bool) β Whether to finish the run.
Returns β None
Return type
None
class langchain.callbacks.ArgillaCallbackHandler(dataset_name, workspace_name=None, api_url=None, api_key=None)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs into Argilla.
Parameters
dataset_name (str) β name of the FeedbackDataset in Argilla. Note that it must
exist in advance. If you need help on how to create a FeedbackDataset in
Argilla, please visit
https://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html.
workspace_name (Optional[str]) β name of the workspace in Argilla where the specified
FeedbackDataset lives in. Defaults to None, which means that the
default workspace will be used.
api_url (Optional[str]) β URL of the Argilla Server that we want to use, and where the
FeedbackDataset lives in. Defaults to None, which means that either
ARGILLA_API_URL environment variable or the default http://localhost:6900
will be used.
api_key (Optional[str]) β API Key to connect to the Argilla Server. Defaults to None, which
means that either ARGILLA_API_KEY environment variable or the default
argilla.apikey will be used.
Raises
ImportError β if the argilla package is not installed.
ConnectionError β if the connection to Argilla fails.
FileNotFoundError β if the FeedbackDataset retrieval from Argilla fails.
Return type
None
Examples
>>> from langchain.llms import OpenAI
>>> from langchain.callbacks import ArgillaCallbackHandler
>>> argilla_callback = ArgillaCallbackHandler(
... dataset_name="my-dataset",
... workspace_name="my-workspace",
... api_url="http://localhost:6900",
... api_key="argilla.apikey",
... )
>>> llm = OpenAI(
... temperature=0,
... callbacks=[argilla_callback],
... verbose=True,
... openai_api_key="API_KEY_HERE",
... )
>>> llm.generate([
... "What is the best NLP-annotation tool out there? (no bias at all)",
... ])
"Argilla, no doubt about it."
on_llm_start(serialized, prompts, **kwargs)[source]ο
Save the prompts in memory when an LLM starts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Do nothing when a new token is generated.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Log records to Argilla when an LLM ends.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Do nothing when LLM outputs an error.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
If the key input is in inputs, then save it in self.prompts using
either the parent_run_id or the run_id as the key. This is done so that
we don't log the same input prompt twice, once when the LLM starts and once
when the chain starts.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
If either the parent_run_id or the run_id is in self.prompts, then
log the outputs to Argilla, and pop the run from self.prompts. The behavior
differs if the output is a list or not.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Do nothing when LLM chain outputs an error.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Do nothing when tool starts.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Do nothing when agent takes a specific action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
on_tool_end(output, observation_prefix=None, llm_prefix=None, **kwargs)[source]ο
Do nothing when tool ends.
Parameters
output (str) β
observation_prefix (Optional[str]) β
llm_prefix (Optional[str]) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Do nothing when tool outputs an error.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Do nothing
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Do nothing
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
class langchain.callbacks.ArizeCallbackHandler(model_id=None, model_version=None, SPACE_KEY=None, API_KEY=None)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs to Arize.
Parameters
model_id (Optional[str]) β
model_version (Optional[str]) β
SPACE_KEY (Optional[str]) β
API_KEY (Optional[str]) β
Return type
None
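Example
A minimal sketch, assuming you have Arize space and API keys; all identifiers below are placeholders:
from langchain.callbacks import ArizeCallbackHandler
from langchain.llms import OpenAI

arize_callback = ArizeCallbackHandler(
    model_id="llm-langchain-demo",
    model_version="1.0",
    SPACE_KEY="YOUR_SPACE_KEY",
    API_KEY="YOUR_API_KEY",
)
llm = OpenAI(temperature=0, callbacks=[arize_callback])
llm.generate(["Tell me a joke", "Tell me a poem"])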
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Do nothing.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Do nothing.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Do nothing.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
on_tool_end(output, observation_prefix=None, llm_prefix=None, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
observation_prefix (Optional[str]) β
llm_prefix (Optional[str]) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run on arbitrary text.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run on agent end.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
class langchain.callbacks.AsyncIteratorCallbackHandler[source]ο
Bases: langchain.callbacks.base.AsyncCallbackHandler
Callback handler that returns an async iterator.
Return type
None
property always_verbose: boolο
queue: asyncio.queues.Queue[str]ο
done: asyncio.locks.Eventο
async on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
async on_llm_new_token(token, **kwargs)[source]ο
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) β
kwargs (Any) β
Return type
None
async on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
async on_llm_error(error, **kwargs)[source]ο
Run when LLM errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
async aiter()[source]ο
Return type
AsyncIterator[str]
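Example
A minimal sketch of consuming tokens through aiter(); it assumes an LLM wrapper that supports streaming and async generation, and the prompt is illustrative:
import asyncio

from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.llms import OpenAI

async def main() -> None:
    handler = AsyncIteratorCallbackHandler()
    llm = OpenAI(streaming=True, temperature=0, callbacks=[handler])
    # Start generation in the background, then consume tokens as they arrive.
    task = asyncio.create_task(llm.agenerate(["Tell me a joke"]))
    async for token in handler.aiter():
        print(token, end="", flush=True)
    await task

asyncio.run(main())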
class langchain.callbacks.ClearMLCallbackHandler(task_type='inference', project_name='langchain_callback_demo', tags=None, task_name=None, visualize=False, complexity_metrics=False, stream_logs=False)[source]ο
Bases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs to ClearML.
Parameters
job_type (str) β The type of clearml task such as 'inference', 'testing' or 'qc'
project_name (str) β The clearml project name
tags (list) β Tags to add to the task
task_name (str) β Name of the clearml task
visualize (bool) β Whether to visualize the run.
complexity_metrics (bool) β Whether to log complexity metrics
stream_logs (bool) β Whether to stream callback actions to ClearML
task_type (Optional[str]) β
Return type
None
This handler will utilize the associated callback method, format
the input of each callback function with metadata regarding the state of the LLM run,
and add the response to the list of records for both the {method}_records and
action. It then logs the response to the ClearML console.
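Example
A minimal sketch; the project and task names are placeholders, and a working ClearML setup (clearml package and credentials) is assumed:
from langchain.callbacks import ClearMLCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI

clearml_callback = ClearMLCallbackHandler(
    task_type="inference",
    project_name="langchain_callback_demo",
    task_name="llm",
    complexity_metrics=True,
    stream_logs=True,
)
llm = OpenAI(temperature=0, callbacks=[clearml_callback, StdOutCallbackHandler()])
llm("Tell me a joke")
# Log the collected records plus the LLM itself, then start a fresh session.
clearml_callback.flush_tracker(langchain_asset=llm, name="simple_llm")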
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Run when LLM generates a new token.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Run when LLM errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Run when chain errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_tool_end(output, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run when agent is ending.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run when agent ends running.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
analyze_text(text)[source]ο
Analyze text using textstat and spacy.
Parameters
text (str) β The text to analyze.
Returns
A dictionary containing the complexity metrics.
Return type
(dict)
flush_tracker(name=None, langchain_asset=None, finish=False)[source]ο
Flush the tracker and setup the session.
Everything after this will be a new table.
Parameters
name (Optional[str]) β Name of the session performed so far, so it is identifiable
langchain_asset (Any) β The langchain asset to save.
finish (bool) β Whether to finish the run.
Returns β None
Return type
None
class langchain.callbacks.CometCallbackHandler(task_type='inference', workspace=None, project_name=None, tags=None, name=None, visualizations=None, complexity_metrics=False, custom_metrics=None, stream_logs=True)[source]ο
Bases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs to Comet.
Parameters
job_type (str) β The type of comet_ml task such as 'inference',
'testing' or 'qc'
project_name (str) β The comet_ml project name
tags (list) β Tags to add to the task
task_name (str) β Name of the comet_ml task
visualize (bool) β Whether to visualize the run.
complexity_metrics (bool) β Whether to log complexity metrics
stream_logs (bool) β Whether to stream callback actions to Comet
task_type (Optional[str]) β
workspace (Optional[str]) β
name (Optional[str]) β
visualizations (Optional[List[str]]) β
custom_metrics (Optional[Callable]) β
Return type
None
This handler will utilize the associated callback method, format
the input of each callback function with metadata regarding the state of the LLM run,
and add the response to the list of records for both the {method}_records and
action. It then logs the response to Comet.
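Example
A minimal sketch; it assumes the comet_ml package is installed and configured, and the project name and prompts are placeholders:
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI

comet_callback = CometCallbackHandler(
    project_name="comet-example-langchain",
    complexity_metrics=True,
    stream_logs=True,
    tags=["llm"],
)
llm = OpenAI(temperature=0.9, callbacks=[comet_callback, StdOutCallbackHandler()])
llm.generate(["Tell me a joke", "Tell me a poem"])
comet_callback.flush_tracker(llm, finish=True)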
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Run when LLM generates a new token.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Run when LLM errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Run when chain errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_tool_end(output, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run when agent is ending.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run when agent ends running.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
flush_tracker(langchain_asset=None, task_type='inference', workspace=None, project_name='comet-langchain-demo', tags=None, name=None, visualizations=None, complexity_metrics=False, custom_metrics=None, finish=False, reset=False)[source]ο
Flush the tracker and setup the session.
Everything after this will be a new table.
Parameters
name (Optional[str]) β Name of the session performed so far, so it is identifiable
langchain_asset (Any) β The langchain asset to save.
finish (bool) β Whether to finish the run.
Returns β None
task_type (Optional[str]) β
workspace (Optional[str]) β
project_name (Optional[str]) β
tags (Optional[Sequence]) β
visualizations (Optional[List[str]]) β
complexity_metrics (bool) β
custom_metrics (Optional[Callable]) β
reset (bool) β
Return type
None
class langchain.callbacks.FileCallbackHandler(filename, mode='a', color=None)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback Handler that writes to a file.
Parameters
filename (str) β
mode (str) β
color (Optional[str]) β
Return type
None
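Example
A minimal sketch; the log file name and prompt are placeholders:
from langchain.callbacks import FileCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

handler = FileCallbackHandler("output.log")  # appends, per the default mode="a"
prompt = PromptTemplate.from_template("1 + {number} = ")
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt, callbacks=[handler])
chain.run(number=2)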
on_chain_start(serialized, inputs, **kwargs)[source]ο
Print out that we are entering a chain.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Print out that we finished a chain.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_agent_action(action, color=None, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
color (Optional[str]) β
kwargs (Any) β
Return type
Any
on_tool_end(output, color=None, observation_prefix=None, llm_prefix=None, **kwargs)[source]ο
If not the final action, print out observation.
Parameters
output (str) β
color (Optional[str]) β
observation_prefix (Optional[str]) β
llm_prefix (Optional[str]) β
kwargs (Any) β
Return type
None
on_text(text, color=None, end='', **kwargs)[source]ο
Run when agent ends.
Parameters
text (str) β
color (Optional[str]) β
end (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, color=None, **kwargs)[source]ο
Run on agent end.
Parameters
finish (langchain.schema.AgentFinish) β
color (Optional[str]) β
kwargs (Any) β
Return type
None
class langchain.callbacks.FinalStreamingStdOutCallbackHandler(*, answer_prefix_tokens=None, strip_tokens=True, stream_prefix=False)[source]ο
Bases: langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler
Callback handler for streaming in agents.
Only works with agents using LLMs that support streaming.
Only the final output of the agent will be streamed.
Parameters
answer_prefix_tokens (Optional[List[str]]) β
strip_tokens (bool) β
stream_prefix (bool) β
Return type
None
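Example
A minimal sketch of streaming only an agent's final answer; the tool choice and question are illustrative:
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import FinalStreamingStdOutCallbackHandler
from langchain.llms import OpenAI

llm = OpenAI(
    streaming=True,
    callbacks=[FinalStreamingStdOutCallbackHandler()],
    temperature=0,
)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
agent.run("What is 2 raised to the 0.235 power?")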
append_to_last_tokens(token)[source]ο
Parameters
token (str) β
Return type
None
check_if_answer_reached()[source]ο
Return type
bool
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) β
kwargs (Any) β
Return type
None
class langchain.callbacks.HumanApprovalCallbackHandler(approve=<function _default_approve>, should_check=<function _default_true>)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback for manually validating values.
Parameters
approve (Callable[[Any], bool]) β
should_check (Callable[[Dict[str, Any]], bool]) β
raise_error: bool = Trueο
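Example
A minimal sketch of gating a tool behind manual approval; ShellTool is used only for illustration and the check functions are placeholders:
from langchain.callbacks import HumanApprovalCallbackHandler
from langchain.tools import ShellTool

def _should_check(serialized_obj: dict) -> bool:
    # Only gate the shell tool; everything else runs unchecked.
    return serialized_obj.get("name") == "terminal"

def _approve(tool_input: str) -> bool:
    msg = f"Do you approve of the following input? (y/N)\n{tool_input}\n"
    return input(msg).lower() in ("y", "yes")

tool = ShellTool(
    callbacks=[HumanApprovalCallbackHandler(approve=_approve, should_check=_should_check)]
)
print(tool.run("ls /tmp"))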
on_tool_start(serialized, input_str, *, run_id, parent_run_id=None, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
run_id (uuid.UUID) β
parent_run_id (Optional[uuid.UUID]) β
kwargs (Any) β
Return type
Any
class langchain.callbacks.InfinoCallbackHandler(model_id=None, model_version=None, verbose=False)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs to Infino.
Parameters
model_id (Optional[str]) β
model_version (Optional[str]) β
verbose (bool) β
Return type
None
on_llm_start(serialized, prompts, **kwargs)[source]ο
Log the prompts to Infino, and set start time and error flag.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Do nothing when a new token is generated.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Log the latency, error, token usage, and response to Infino.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Set the error flag.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Do nothing when LLM chain starts.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Do nothing when LLM chain ends.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Log the error.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Do nothing when tool starts.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Do nothing when agent takes a specific action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
on_tool_end(output, observation_prefix=None, llm_prefix=None, **kwargs)[source]ο
Do nothing when tool ends.
Parameters
output (str) β
observation_prefix (Optional[str]) β
llm_prefix (Optional[str]) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Do nothing when tool outputs an error.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Do nothing.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Do nothing.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
class langchain.callbacks.MlflowCallbackHandler(name='langchainrun-%', experiment='langchain', tags={}, tracking_uri=None)[source]ο
Bases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs metrics and artifacts to mlflow server.
Parameters
name (str) β Name of the run.
experiment (str) β Name of the experiment.
tags (dict) β Tags to be attached for the run.
tracking_uri (str) β MLflow tracking server uri.
Return type
None
This handler will utilize the associated callback method, format
the input of each callback function with metadata regarding the state of the LLM run,
and add the response to the list of records for both the {method}_records and
action. It then logs the response to the mlflow server.
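Example
A minimal sketch; by default the handler writes to the local mlruns store, and the experiment name, tags, and prompt are placeholders:
from langchain.callbacks import MlflowCallbackHandler
from langchain.llms import OpenAI

mlflow_callback = MlflowCallbackHandler(experiment="langchain", tags={"context": "demo"})
llm = OpenAI(temperature=0, callbacks=[mlflow_callback])
llm("Tell me a joke")
# Log the collected records and the LLM configuration to the MLflow run.
mlflow_callback.flush_tracker(llm)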
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Run when LLM generates a new token.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Run when LLM errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Run when chain errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_tool_end(output, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run when agent is ending.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run when agent ends running.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
flush_tracker(langchain_asset=None, finish=False)[source]ο
Parameters
langchain_asset (Any) β
finish (bool) β
Return type
None
class langchain.callbacks.OpenAICallbackHandler[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback Handler that tracks OpenAI info.
total_tokens: int = 0ο
prompt_tokens: int = 0ο
completion_tokens: int = 0ο
successful_requests: int = 0ο
total_cost: float = 0.0ο
property always_verbose: boolο
Whether to call verbose callbacks even if verbose is False.
on_llm_start(serialized, prompts, **kwargs)[source]ο
Print out the prompts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Print out the token.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Collect token usage.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
class langchain.callbacks.StdOutCallbackHandler(color=None)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback Handler that prints to std out.
Parameters
color (Optional[str]) β
Return type
None
on_llm_start(serialized, prompts, **kwargs)[source]ο
Print out the prompts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Do nothing.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Do nothing.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Print out that we are entering a chain.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Print out that we finished a chain.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Do nothing.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_agent_action(action, color=None, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
color (Optional[str]) β
kwargs (Any) β
Return type
Any
on_tool_end(output, color=None, observation_prefix=None, llm_prefix=None, **kwargs)[source]ο
If not the final action, print out observation.
Parameters
output (str) β
color (Optional[str]) β
observation_prefix (Optional[str]) β
llm_prefix (Optional[str]) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, color=None, end='', **kwargs)[source]ο
Run when agent ends.
Parameters
text (str) β
color (Optional[str]) β
end (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, color=None, **kwargs)[source]ο
Run on agent end.
Parameters
finish (langchain.schema.AgentFinish) β
color (Optional[str]) β
kwargs (Any) β
Return type
None
class langchain.callbacks.StreamingStdOutCallbackHandler[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback handler for streaming. Only works with LLMs that support streaming.
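Example
A minimal sketch; the prompt is illustrative, and any LLM wrapper that supports streaming can be substituted:
from langchain.callbacks import StreamingStdOutCallbackHandler
from langchain.llms import OpenAI

llm = OpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    temperature=0,
)
# Tokens are printed to stdout as they are generated.
llm("Write a haiku about streaming text.")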
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Run when LLM errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Run when chain errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
on_tool_end(output, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run on arbitrary text.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run on agent end.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
langchain.callbacks.StreamlitCallbackHandler(parent_container, *, max_thought_containers=4, expand_new_thoughts=True, collapse_completed_thoughts=True, thought_labeler=None)[source]ο
Construct a new StreamlitCallbackHandler. This CallbackHandler is geared towards
use with a LangChain Agent; it displays the Agent's LLM and tool-usage "thoughts"
inside a series of Streamlit expanders.
Parameters
parent_container (DeltaGenerator) β The st.container that will contain all the Streamlit elements that the
Handler creates.
max_thought_containers (int) β The max number of completed LLM thought containers to show at once. When this
threshold is reached, a new thought will cause the oldest thoughts to be
collapsed into a "History" expander. Defaults to 4.
expand_new_thoughts (bool) β Each LLM "thought" gets its own st.expander. This param controls whether that
expander is expanded by default. Defaults to True.
collapse_completed_thoughts (bool) β If True, LLM thought expanders will be collapsed when completed.
Defaults to True.
thought_labeler (Optional[LLMThoughtLabeler]) β An optional custom LLMThoughtLabeler instance. If unspecified, the handler
will use the default thought labeling logic. Defaults to None.
Returns
A new StreamlitCallbackHandler instance.
Note that this is an "auto-updating" API: if the installed version of Streamlit
has a more recent StreamlitCallbackHandler implementation, an instance of that class
will be used.
Return type
BaseCallbackHandler
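Example
A minimal sketch of rendering an agent's thoughts in a Streamlit app; the tool choice and UI layout are illustrative:
import streamlit as st

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import StreamlitCallbackHandler
from langchain.llms import OpenAI

llm = OpenAI(temperature=0, streaming=True)
tools = load_tools(["ddg-search"])
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

prompt = st.text_input("Ask a question")
if prompt:
    # The agent's intermediate thoughts are rendered into this container.
    st_callback = StreamlitCallbackHandler(st.container())
    st.write(agent.run(prompt, callbacks=[st_callback]))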
class langchain.callbacks.LLMThoughtLabeler[source]ο
Bases: object
Generates markdown labels for LLMThought containers. Pass a custom
subclass of this to StreamlitCallbackHandler to override its default
labeling logic.
get_initial_label()[source]ο
Return the markdown label for a new LLMThought that doesn't have
an associated tool yet.
Return type
str
get_tool_label(tool, is_complete)[source]ο
Return the label for an LLMThought that has an associated
tool.
Parameters
tool (langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord) β The tool's ToolRecord
is_complete (bool) β True if the thought is complete; False if the thought
is still receiving input.
Return type
The markdown label for the thoughtβs container.
get_history_label()[source]ο
Return a markdown label for the special βhistoryβ container
that contains overflow thoughts.
Return type
str
get_final_agent_thought_label()[source]ο
Return the markdown label for the agent's final thought -
the "Now I have the answer" thought, which doesn't involve
a tool.
Return type
str
class langchain.callbacks.WandbCallbackHandler(job_type=None, project='langchain_callback_demo', entity=None, tags=None, group=None, name=None, notes=None, visualize=False, complexity_metrics=False, stream_logs=False)[source]ο
Bases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs to Weights and Biases.
Parameters
job_type (str) β The type of job.
project (str) β The project to log to.
entity (str) β The entity to log to.
tags (list) β The tags to log.
group (str) β The group to log to.
name (str) β The name of the run.
notes (str) β The notes to log.
visualize (bool) β Whether to visualize the run.
complexity_metrics (bool) β Whether to log complexity metrics.
stream_logs (bool) β Whether to stream callback actions to W&B
Return type
None
This handler will utilize the associated callback method, format
the input of each callback function with metadata regarding the state of the LLM run,
and add the response to the list of records for both the {method}_records and
action. It then logs the response using the run.log() method to Weights and Biases.
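Example
A minimal sketch; it assumes the wandb package is installed and you are logged in, and the project, group, and prompts are placeholders:
from langchain.callbacks import StdOutCallbackHandler, WandbCallbackHandler
from langchain.llms import OpenAI

wandb_callback = WandbCallbackHandler(
    job_type="inference",
    project="langchain_callback_demo",
    group="minimal_example",
    name="llm",
    tags=["test"],
)
llm = OpenAI(temperature=0, callbacks=[wandb_callback, StdOutCallbackHandler()])
llm.generate(["Tell me a joke", "Tell me a poem"])
wandb_callback.flush_tracker(llm, name="simple_sequential")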
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Run when LLM generates a new token.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Run when LLM errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Run when chain errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_tool_end(output, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run when agent is ending.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run when agent ends running.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
flush_tracker(langchain_asset=None, reset=True, finish=False, job_type=None, project=None, entity=None, tags=None, group=None, name=None, notes=None, visualize=None, complexity_metrics=None)[source]ο
Flush the tracker and reset the session.
Parameters
langchain_asset (Any) β The langchain asset to save.
reset (bool) β Whether to reset the session.
finish (bool) β Whether to finish the run.
job_type (Optional[str]) β The job type.
project (Optional[str]) β The project.
entity (Optional[str]) β The entity.
tags (Optional[Sequence]) β The tags.
group (Optional[str]) β The group.
name (Optional[str]) β The name.
notes (Optional[str]) β The notes.
visualize (Optional[bool]) β Whether to visualize.
complexity_metrics (Optional[bool]) β Whether to compute complexity metrics.
Returns β None
Return type
None
class langchain.callbacks.WhyLabsCallbackHandler(logger)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
WhyLabs CallbackHandler.
Parameters
logger (Logger) β
on_llm_start(serialized, prompts, **kwargs)[source]ο
Pass the input prompts to the logger.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Pass the generated response to the logger.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Do nothing.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Do nothing.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Do nothing.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Do nothing.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_agent_action(action, color=None, **kwargs)[source]ο
Do nothing.
Parameters
action (langchain.schema.AgentAction) β
color (Optional[str]) β
kwargs (Any) β
Return type
Any
on_tool_end(output, color=None, observation_prefix=None, llm_prefix=None, **kwargs)[source]ο
Do nothing.
Parameters
output (str) β
color (Optional[str]) β
observation_prefix (Optional[str]) β
llm_prefix (Optional[str]) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Do nothing.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, color=None, **kwargs)[source]ο
Run on agent end.
Parameters
finish (langchain.schema.AgentFinish) β
color (Optional[str]) β
kwargs (Any) β
Return type
None
flush()[source]ο
Return type
None
close()[source]ο
Return type
None
classmethod from_params(*, api_key=None, org_id=None, dataset_id=None, sentiment=False, toxicity=False, themes=False)[source]ο
Instantiate whylogs Logger from params.
Parameters
api_key (Optional[str]) β WhyLabs API key. Optional because the preferred
way to specify the API key is with environment variable
WHYLABS_API_KEY.
org_id (Optional[str]) β WhyLabs organization id to write profiles to.
If not set must be specified in environment variable
WHYLABS_DEFAULT_ORG_ID.
dataset_id (Optional[str]) β The model or dataset this callback is gathering
telemetry for. If not set must be specified in environment variable
WHYLABS_DEFAULT_DATASET_ID.
sentiment (bool) β If True will initialize a model to perform
sentiment analysis compound score. Defaults to False and will not gather
this metric.
toxicity (bool) β If True will initialize a model to score
toxicity. Defaults to False and will not gather this metric.
themes (bool) β If True will initialize a model to calculate
distance to configured themes. Defaults to False and will not gather this
metric.
Return type
Logger
langchain.callbacks.get_openai_callback()[source]ο
Get the OpenAI callback handler in a context manager,
which conveniently exposes token and cost information.
Returns
The OpenAI callback handler.
Return type
OpenAICallbackHandler
Example
>>> with get_openai_callback() as cb:
... # Use the OpenAI callback handler
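A slightly fuller sketch (the prompt is illustrative; cost tracking applies to OpenAI models only):
>>> from langchain.callbacks import get_openai_callback
>>> from langchain.llms import OpenAI
>>> llm = OpenAI(temperature=0)
>>> with get_openai_callback() as cb:
...     llm("Tell me a joke")
...     # Token counts and cost accumulate for every OpenAI call inside the block.
...     print(cb.total_tokens, cb.prompt_tokens, cb.completion_tokens, cb.total_cost)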
langchain.callbacks.tracing_enabled(session_name='default')[source]ο
Get the Deprecated LangChainTracer in a context manager.
Parameters
session_name (str, optional) β The name of the session.
Defaults to 'default'.
Returns
The LangChainTracer session.
Return type
TracerSessionV1
Example
>>> with tracing_enabled() as session:
... # Use the LangChainTracer session
langchain.callbacks.wandb_tracing_enabled(session_name='default')[source]ο
Get the WandbTracer in a context manager.
Parameters
session_name (str, optional) β The name of the session.
Defaults to 'default'.
Returns
None
Return type
Generator[None, None, None]
Example
>>> with wandb_tracing_enabled() as session:
... # Use the WandbTracer session
Document Loadersο
All different types of document loaders.
class langchain.document_loaders.AcreomLoader(path, encoding='UTF-8', collect_metadata=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Parameters
path (str) β
encoding (str) β
collect_metadata (bool) β
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)ο
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AZLyricsLoader(web_path, header_template=None, verify=True)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads AZLyrics webpages.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
load()[source]ο
Load webpage.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AirbyteJSONLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads local airbyte json files.
Parameters
file_path (str) β
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AirtableLoader(api_token, table_id, base_id)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader for Airtable tables.
Parameters
api_token (str) β
table_id (str) β
base_id (str) β
lazy_load()[source]ο
Lazy load records from table.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load Table.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ApifyDatasetLoader(dataset_id, dataset_mapping_function)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Logic for loading documents from Apify datasets.
Parameters
dataset_id (str) β
dataset_mapping_function (Callable[[Dict], langchain.schema.Document]) β
Return type
None
attribute apify_client: Any = Noneο
attribute dataset_id: str [Required]ο
The ID of the dataset on the Apify platform.
attribute dataset_mapping_function: Callable[[Dict], langchain.schema.Document] [Required]ο
A custom function that takes a single dictionary (an Apify dataset item)
and converts it to an instance of the Document class.
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ArxivLoader(query, load_max_docs=100, load_all_available_meta=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from arxiv.org into a list of Documents.
Each document represents one Document.
The loader converts the original PDF format into the text.
Parameters
query (str) β
load_max_docs (Optional[int]) β
load_all_available_meta (Optional[bool]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AzureBlobStorageContainerLoader(conn_str, container, prefix='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from Azure Blob Storage.
Parameters
conn_str (str) β
container (str) β
prefix (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AzureBlobStorageFileLoader(conn_str, container, blob_name)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from Azure Blob Storage.
Parameters
conn_str (str) β
container (str) β
blob_name (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BSHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses beautiful soup to parse HTML files.
Parameters
file_path (str) β
open_encoding (Optional[str]) β
bs_kwargs (Optional[dict]) β
get_text_separator (str) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BibtexLoader(file_path, *, parser=None, max_docs=None, max_content_chars=4000, load_extra_metadata=False, file_pattern='[^:]+\\.pdf')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a bibtex file into a list of Documents.
Each document represents one entry from the bibtex file.
If a PDF file is present in the file bibtex field, the original PDF
is loaded into the document text. If no such file entry is present,
the abstract field is used instead.
Parameters
file_path (str) β
parser (Optional[langchain.utilities.bibtex.BibtexparserWrapper]) β
max_docs (Optional[int]) β
max_content_chars (Optional[int]) β
load_extra_metadata (bool) β
file_pattern (str) β
lazy_load()[source]ο
Load bibtex file using bibtexparser and get the article texts plus the
article metadata.
See https://bibtexparser.readthedocs.io/en/master/
Returns
a list of documents with the document.page_content in text format
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load bibtex file documents from the given bibtex file path.
See https://bibtexparser.readthedocs.io/en/master/
Parameters
file_path β the path to the bibtex file
Returns
a list of documents with the document.page_content in text format
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BigQueryLoader(query, project=None, page_content_columns=None, metadata_columns=None, credentials=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from BigQuery into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Parameters
query (str) β
project (Optional[str]) β
page_content_columns (Optional[List[str]]) β
metadata_columns (Optional[List[str]]) β
credentials (Optional[Credentials]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BiliBiliLoader(video_urls)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads bilibili transcripts.
Parameters
video_urls (List[str]) β
load()[source]ο
Load from bilibili url.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BlackboardLoader(blackboard_course_url, bbrouter, load_all_recursively=True, basic_auth=None, cookies=None)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads all documents from a Blackboard course.
This loader is not compatible with all Blackboard courses. It is only
compatible with courses that use the new Blackboard interface.
To use this loader, you must have the BbRouter cookie. You can get this
cookie by logging into the course and then copying the value of the
BbRouter cookie from the browserβs developer tools.
Example
from langchain.document_loaders import BlackboardLoader
loader = BlackboardLoader(
blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
bbrouter="expires:12345...",
)
documents = loader.load()
Parameters
blackboard_course_url (str) β
bbrouter (str) β
load_all_recursively (bool) β
basic_auth (Optional[Tuple[str, str]]) β
cookies (Optional[dict]) β
folder_path: strο
base_url: strο
load_all_recursively: boolο
check_bs4()[source]ο
Check if BeautifulSoup4 is installed.
Raises
ImportError β If BeautifulSoup4 is not installed.
Return type
None
load()[source]ο
Load data into document objects.
Returns
List of documents.
Return type
List[langchain.schema.Document]
download(path)[source]ο
Download a file from a url.
Parameters
path (str) β Path to the file.
Return type
None
parse_filename(url)[source]ο
Parse the filename from a url.
Parameters
url (str) β Url to parse the filename from.
Returns
The filename.
Return type
str
class langchain.document_loaders.Blob(*, data=None, mimetype=None, encoding='utf-8', path=None)[source]ο
Bases: pydantic.main.BaseModel
A blob is used to represent raw data by either reference or value.
Provides an interface to materialize the blob in different representations, and
help to decouple the development of data loaders from the downstream parsing of
the raw data.
Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob
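A minimal usage sketch (the file path is illustrative):
from langchain.document_loaders import Blob
blob = Blob.from_path("example.txt")  # illustrative path; mimetype is guessed from the extension
text = blob.as_string()               # decoded with blob.encoding (utf-8 by default)
raw = blob.as_bytes()
in_memory = Blob.from_data("hello world", mime_type="text/plain")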
Parameters
data (Optional[Union[bytes, str]]) β
mimetype (Optional[str]) β
encoding (str) β
path (Optional[Union[str, pathlib.PurePath]]) β
Return type
None
attribute data: Optional[Union[bytes, str]] = Noneο
attribute encoding: str = 'utf-8'ο
attribute mimetype: Optional[str] = Noneο
attribute path: Optional[Union[str, pathlib.PurePath]] = Noneο
as_bytes()[source]ο
Read data as bytes.
Return type
bytes
as_bytes_io()[source]ο
Read data as a byte stream.
Return type
Generator[Union[_io.BytesIO, _io.BufferedReader], None, None]
as_string()[source]ο
Read data as a string.
Return type
str
classmethod from_data(data, *, encoding='utf-8', mime_type=None, path=None)[source]ο
Initialize the blob from in-memory data.
Parameters
data (Union[str, bytes]) β the in-memory data associated with the blob
encoding (str) β Encoding to use if decoding the bytes into a string
mime_type (Optional[str]) β if provided, will be set as the mime-type of the data
path (Optional[str]) β if provided, will be set as the source from which the data came
Returns
Blob instance
Return type
langchain.document_loaders.blob_loaders.schema.Blob
classmethod from_path(path, *, encoding='utf-8', mime_type=None, guess_type=True)[source]ο
Load the blob from a path like object.
Parameters
path (Union[str, pathlib.PurePath]) β path like object to file to be read
encoding (str) β Encoding to use if decoding the bytes into a string
mime_type (Optional[str]) β if provided, will be set as the mime-type of the data
guess_type (bool) β If True, the mimetype will be guessed from the file extension,
if a mime-type was not provided
Returns
Blob instance
Return type
langchain.document_loaders.blob_loaders.schema.Blob
property source: Optional[str]ο
The source location of the blob as string if known otherwise none.
class langchain.document_loaders.BlobLoader[source]ο
Bases: abc.ABC
Abstract interface for blob loaders implementation.
Implementer should be able to load raw content from a storage system according
to some criteria and return the raw content lazily as a stream of blobs.
abstract yield_blobs()[source]ο
A lazy loader for raw data represented by LangChainβs Blob object.
Returns
A generator over blobs
Return type
Iterable[langchain.document_loaders.blob_loaders.schema.Blob]
class langchain.document_loaders.BlockchainDocumentLoader(contract_address, blockchainType=BlockchainType.ETH_MAINNET, api_key='docs-demo', startToken='', get_all_tokens=False, max_execution_time=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads elements from a blockchain smart contract into Langchain documents.
The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,
Polygon mainnet, and Polygon Mumbai testnet.
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain.
ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the
startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens
on the contract. Note that for contracts with a large number of tokens,
this may take a long time (e.g. 10k tokens is 100 requests).
Default value is false for this reason.
The max_execution_time (sec) can be set to limit the execution time
of the loader.
Future versions of this loader can:
Support additional Alchemy APIs (e.g. getTransactions, etc.)
Support additional blockchain APIs (e.g. Infura, Opensea, etc.)
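A minimal usage sketch (the contract address and API key are illustrative placeholders):
from langchain.document_loaders import BlockchainDocumentLoader
from langchain.document_loaders.blockchain import BlockchainType
loader = BlockchainDocumentLoader(
    contract_address="0x1234567890abcdef1234567890abcdef12345678",  # illustrative NFT contract
    blockchainType=BlockchainType.ETH_MAINNET,
    api_key="your-alchemy-api-key",  # or rely on the ALCHEMY_API_KEY environment variable
)
docs = loader.load()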
Parameters
contract_address (str) β
blockchainType (langchain.document_loaders.blockchain.BlockchainType) β
api_key (str) β
startToken (str) β
get_all_tokens (bool) β
max_execution_time (Optional[int]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.CSVLoader(file_path, source_column=None, csv_args=None, encoding=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a CSV file into a list of documents.
Each document represents one row of the CSV file. Every row is converted into a
key/value pair and outputted to a new line in the documentβs page_content.
The source for each document loaded from csv is set to the value of the
file_path argument for all documents by default.
You can override this by setting the source_column argument to the
name of a column in the CSV file.
The source of each document will then be set to the value of the column
with the name specified in source_column.
Output Example:
    column1: value1
    column2: value2
    column3: value3
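A minimal usage sketch (the file path and column name are illustrative):
from langchain.document_loaders import CSVLoader
loader = CSVLoader(
    file_path="data.csv",         # illustrative CSV path
    source_column="url",          # use this column as each document's source
    csv_args={"delimiter": ";"},  # optional arguments (assumed to be forwarded to the csv reader)
)
docs = loader.load()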
Parameters
file_path (str) β
source_column (Optional[str]) β
csv_args (Optional[Dict]) β
encoding (Optional[str]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ChatGPTLoader(log_file, num_logs=- 1)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads conversations from exported ChatGPT data.
Parameters
log_file (str) β
num_logs (int) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.CoNLLULoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load CoNLL-U files.
Parameters
file_path (str) β
load()[source]ο
Load from file path.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.CollegeConfidentialLoader(web_path, header_template=None, verify=True)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads College Confidential webpages.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
load()[source]ο
Load webpage.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ConfluenceLoader(url, api_key=None, username=None, oauth2=None, token=None, cloud=True, number_of_retries=3, min_retry_seconds=2, max_retry_seconds=10, confluence_kwargs=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load Confluence pages. Port of https://llamahub.ai/l/confluence
This currently supports username/api_key, Oauth2 login or personal access token
authentication.
Specify a list page_ids and/or space_key to load in the corresponding pages into
Document objects, if both are specified the union of both sets will be returned.
You can also specify a boolean include_attachments to include attachments, this
is set to False by default, if set to True all attachments will be downloaded and
ConfluenceReader will extract the text from the attachments and add it to the
Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,
SVG, Word and Excel.
The Confluence API supports different formats of page content. The storage format is the
raw XML representation used for storage. The view format is the HTML representation for
viewing, with macros rendered as they appear to users. You can pass
an enum content_format argument to load() to specify the content format; this is
set to ContentFormat.STORAGE by default.
Hint: space_key and page_id can both be found in the URL of a page in Confluence
- https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_key="12345"
)
documents = loader.load(space_key="SPACE",limit=50)
Parameters
url (str) β base URL of the Confluence instance, e.g. https://yoursite.atlassian.com/wiki
api_key (str, optional) β API key used together with username, defaults to None
username (str, optional) β username used together with api_key, defaults to None
oauth2 (dict, optional) β OAuth2 credentials, defaults to {}
token (str, optional) β personal access token, defaults to None
cloud (bool, optional) β whether the instance is Confluence Cloud, defaults to True
number_of_retries (Optional[int], optional) β How many times to retry, defaults to 3
min_retry_seconds (Optional[int], optional) β defaults to 2
max_retry_seconds (Optional[int], optional) β defaults to 10
confluence_kwargs (dict, optional) β additional kwargs to initialize confluence with
Raises
ValueError β Errors while validating input
ImportError β Required dependencies not installed.
static validate_init_args(url=None, api_key=None, username=None, oauth2=None, token=None)[source]ο
Validates proper combinations of init arguments
Parameters
url (Optional[str]) β
api_key (Optional[str]) β
username (Optional[str]) β
oauth2 (Optional[dict]) β
token (Optional[str]) β
Return type
Optional[List]
load(space_key=None, page_ids=None, label=None, cql=None, include_restricted_content=False, include_archived_content=False, include_attachments=False, include_comments=False, content_format=ContentFormat.STORAGE, limit=50, max_pages=1000, ocr_languages=None)[source]ο
Parameters
space_key (Optional[str], optional) β Space key retrieved from a confluence URL, defaults to None
page_ids (Optional[List[str]], optional) β List of specific page IDs to load, defaults to None
label (Optional[str], optional) β Get all pages with this label, defaults to None
cql (Optional[str], optional) β CQL Expression, defaults to None
include_restricted_content (bool, optional) β defaults to False
include_archived_content (bool, optional) β Whether to include archived content,
defaults to False
include_attachments (bool, optional) β defaults to False
include_comments (bool, optional) β defaults to False
content_format (ContentFormat) β Specify content format, defaults to ContentFormat.STORAGE
limit (int, optional) β Maximum number of pages to retrieve per request, defaults to 50
max_pages (int, optional) β Maximum number of pages to retrieve in total, defaults 1000
ocr_languages (str, optional) β The languages to use for the Tesseract agent. To use a
language, youβll first need to install the appropriate
Tesseract language pack.
Raises
ValueError β if an invalid combination of arguments is provided
ImportError β if required dependencies are not installed
Returns
A list of Document objects
Return type
List[Document]
paginate_request(retrieval_method, **kwargs)[source]ο
Paginate the various methods to retrieve groups of pages.
Unfortunately, due to page size, sometimes the Confluence API
doesnβt match the limit value. If limit is >100 confluence
seems to cap the response to 100. Also, due to the Atlassian Python
package, we donβt get the βnextβ values from the β_linksβ key because
they only return the value from the results key. So here, the pagination
starts from 0 and goes until the max_pages, getting the limit number
of pages with each request. We have to manually check if there
are more docs based on the length of the returned list of pages, rather than
just checking for the presence of a next key in the response like this page
would have you do:
https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/
Parameters
retrieval_method (callable) β Function used to retrieve docs
kwargs (Any) β
Returns
List of documents
Return type
List
is_public_page(page)[source]ο
Check if a page is publicly accessible.
Parameters
page (dict) β
Return type
bool
process_pages(pages, include_restricted_content, include_attachments, include_comments, content_format, ocr_languages=None)[source]ο
Process a list of pages into a list of documents.
Parameters
pages (List[dict]) β
include_restricted_content (bool) β
include_attachments (bool) β
include_comments (bool) β
content_format (langchain.document_loaders.confluence.ContentFormat) β
ocr_languages (Optional[str]) β
Return type
List[langchain.schema.Document]
process_page(page, include_attachments, include_comments, content_format, ocr_languages=None)[source]ο
Parameters
page (dict) β
include_attachments (bool) β
include_comments (bool) β
content_format (langchain.document_loaders.confluence.ContentFormat) β
ocr_languages (Optional[str]) β
Return type
langchain.schema.Document
process_attachment(page_id, ocr_languages=None)[source]ο
Parameters
page_id (str) β
ocr_languages (Optional[str]) β
Return type
List[str]
process_pdf(link, ocr_languages=None)[source]ο
Parameters
link (str) β
ocr_languages (Optional[str]) β
Return type
str
process_image(link, ocr_languages=None)[source]ο
Parameters
link (str) β
ocr_languages (Optional[str]) β
Return type
str
process_doc(link)[source]ο
Parameters
link (str) β
Return type
str
process_xls(link)[source]ο
Parameters
link (str) β
Return type
str
process_svg(link, ocr_languages=None)[source]ο
Parameters
link (str) β
ocr_languages (Optional[str]) β
Return type
str
class langchain.document_loaders.DataFrameLoader(data_frame, page_content_column='text')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load Pandas DataFrames.
Parameters
data_frame (Any) β
page_content_column (str) β
lazy_load()[source]ο
Lazy load records from dataframe.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load full dataframe.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DiffbotLoader(api_token, urls, continue_on_failure=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Diffbot file json.
Parameters
api_token (str) β
urls (List[str]) β
continue_on_failure (bool) β
load()[source]ο
Extract text from Diffbot on all the URLs and return Document instances
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DirectoryLoader(path, glob='**/[!.]*', silent_errors=False, load_hidden=False, loader_cls=<class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs=None, recursive=False, show_progress=False, use_multithreading=False, max_concurrency=4)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from a directory.
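A minimal usage sketch (the directory path and glob are illustrative; TextLoader is one of the supported per-file loaders):
from langchain.document_loaders import DirectoryLoader, TextLoader
loader = DirectoryLoader(
    "docs/",                # illustrative directory
    glob="**/*.md",
    loader_cls=TextLoader,  # per-file loader; defaults to UnstructuredFileLoader
    show_progress=True,
)
docs = loader.load()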
Parameters
path (str) β
glob (str) β
silent_errors (bool) β
load_hidden (bool) β
loader_cls (Union[Type[langchain.document_loaders.unstructured.UnstructuredFileLoader], Type[langchain.document_loaders.text.TextLoader], Type[langchain.document_loaders.html_bs.BSHTMLLoader]]) β
loader_kwargs (Optional[dict]) β
recursive (bool) β
show_progress (bool) β
use_multithreading (bool) β
max_concurrency (int) β
load_file(item, path, docs, pbar)[source]ο
Parameters
item (pathlib.Path) β
path (pathlib.Path) β
docs (List[langchain.schema.Document]) β
pbar (Optional[Any]) β
Return type
None
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DiscordChatLoader(chat_log, user_id_col='ID')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load Discord chat logs.
Parameters
chat_log (pd.DataFrame) β
user_id_col (str) β
load()[source]ο
Load all chat messages.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DocugamiLoader(*, api='https://api.docugami.com/v1preview1', access_token=None, docset_id=None, document_ids=None, file_paths=None, min_chunk_size=32)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Loader that loads processed docs from Docugami.
To use, you should have the lxml python package installed.
Parameters
api (str) β
access_token (Optional[str]) β
docset_id (Optional[str]) β
document_ids (Optional[Sequence[str]]) β
file_paths (Optional[Sequence[Union[pathlib.Path, str]]]) β
min_chunk_size (int) β
Return type
None
attribute access_token: Optional[str] = Noneο
attribute api: str = 'https://api.docugami.com/v1preview1'ο
attribute docset_id: Optional[str] = Noneο
attribute document_ids: Optional[Sequence[str]] = Noneο
attribute file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = Noneο
attribute min_chunk_size: int = 32ο
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.Docx2txtLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, abc.ABC
Loads a DOCX with docx2txt and chunks at character level.
Defaults to checking for a local file, but if the file is a web path, it will download it
to a temporary file, use that, and then clean up the temporary file after completion.
Parameters
file_path (str) β
load()[source]ο
Load given path as single page.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DuckDBLoader(query, database=':memory:', read_only=False, config=None, page_content_columns=None, metadata_columns=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from DuckDB into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
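A minimal usage sketch (the query and column names are illustrative):
from langchain.document_loaders import DuckDBLoader
loader = DuckDBLoader(
    "SELECT title, body FROM read_csv_auto('articles.csv')",  # illustrative query
    page_content_columns=["body"],
    metadata_columns=["title"],
)
docs = loader.load()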
Parameters
query (str) β
database (str) β
read_only (bool) β
config (Optional[Dict[str, str]]) β
page_content_columns (Optional[List[str]]) β
metadata_columns (Optional[List[str]]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.EmbaasBlobLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={})[source]ο
Bases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseBlobParser
Wrapper around embaasβs document byte loader service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader()
blob = Blob.from_path(path="example.mp3")
documents = loader.parse(blob=blob)
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
)
blob = Blob.from_path(path="example.pdf")
documents = loader.parse(blob=blob)
Parameters
embaas_api_key (Optional[str]) β
api_url (str) β
params (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) β
Return type
None
lazy_parse(blob)[source]ο
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob (langchain.document_loaders.blob_loaders.schema.Blob) β Blob instance
Returns
Generator of documents
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.EmbaasLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={}, file_path, blob_loader=None)[source]ο
Bases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseLoader
Wrapper around embaasβs document loader service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(file_path="example.mp3")
documents = loader.load()
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(
file_path="example.pdf",
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
)
documents = loader.load()
Parameters
embaas_api_key (Optional[str]) β
api_url (str) β
params (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) β
file_path (str) β
blob_loader (Optional[langchain.document_loaders.embaas.EmbaasBlobLoader]) β
Return type
None
attribute blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = Noneο
The blob loader to use. If not provided, a default one will be created.
attribute file_path: str [Required]ο
The path to the file to load.
lazy_load()[source]ο
Load the documents from the file path lazily.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
load_and_split(text_splitter=None)[source]ο
Load documents and split into chunks.
Parameters
text_splitter (Optional[langchain.text_splitter.TextSplitter]) β
Return type
List[langchain.schema.Document]
class langchain.document_loaders.EverNoteLoader(file_path, load_single_document=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
EverNote Loader.
Loads an EverNote notebook export file e.g. my_notebook.enex into Documents.
Instructions on producing this file can be found at
https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML
Currently only the plain text in the note is extracted and stored as the contents
of the Document. Any non-content metadata tags on the note (e.g. 'author', 'created',
'updated', but not 'content-raw' or 'resource') will be extracted and stored
as metadata on the Document.
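A minimal usage sketch (the export path is illustrative):
from langchain.document_loaders import EverNoteLoader
loader = EverNoteLoader("my_notebook.enex", load_single_document=False)  # one Document per note
docs = loader.load()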
Parameters
file_path (str) β The path to the notebook export with a .enex extension
load_single_document (bool) β Whether or not to concatenate the content of all
notes into a single long Document. If this is set to True, the only metadata on
the document will be the βsourceβ, which contains the file name of the export.
load()[source]ο
Load documents from EverNote export file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FacebookChatLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Facebook messages json directory dump.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FaunaLoader(query, page_content_field, secret, metadata_fields=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
FaunaDB Loader.
Parameters
query (str) β
page_content_field (str) β
secret (str) β
metadata_fields (Optional[Sequence[str]]) β
queryο
The FQL query string to execute.
Type
str
page_content_fieldο
The field that contains the content of each page.
Type
str
secretο
The secret key for authenticating to FaunaDB.
Type
str
metadata_fieldsο
Optional list of field names to include in metadata.
Type
Optional[Sequence[str]]
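A minimal usage sketch (the FQL query, field name and secret are illustrative placeholders):
from langchain.document_loaders import FaunaLoader
loader = FaunaLoader(
    query="Item.all()",         # illustrative FQL query
    page_content_field="text",  # illustrative field holding the page content
    secret="<your-fauna-secret>",
    metadata_fields=["title"],
)
docs = loader.load()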
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.FigmaFileLoader(access_token, ids, key)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Figma file json.
Parameters
access_token (str) β
ids (str) β
key (str) β
load()[source]ο
Load file
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FileSystemBlobLoader(path, *, glob='**/[!.]*', suffixes=None, show_progress=False)[source]ο
Bases: langchain.document_loaders.blob_loaders.schema.BlobLoader
Blob loader for the local file system.
Example:
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = FileSystemBlobLoader("/path/to/directory")
for blob in loader.yield_blobs():
print(blob)
Parameters
path (Union[str, pathlib.Path]) β
glob (str) β
suffixes (Optional[Sequence[str]]) β
show_progress (bool) β
Return type
None
yield_blobs()[source]ο
Yield blobs that match the requested pattern.
Return type
Iterable[langchain.document_loaders.blob_loaders.schema.Blob]
count_matching_files()[source]ο
Count files that match the pattern without loading them.
Return type
int
class langchain.document_loaders.GCSDirectoryLoader(project_name, bucket, prefix='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from GCS.
Parameters
project_name (str) β
bucket (str) β
prefix (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GCSFileLoader(project_name, bucket, blob)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from GCS.
Parameters
project_name (str) β
bucket (str) β
blob (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GitHubIssuesLoader(*, repo, access_token, include_prs=True, milestone=None, state=None, assignee=None, creator=None, mentioned=None, labels=None, sort=None, direction=None, since=None)[source]ο
Bases: langchain.document_loaders.github.BaseGitHubLoader
Parameters
repo (str) β
access_token (str) β
include_prs (bool) β
milestone (Optional[Union[int, Literal['*', 'none']]]) β
state (Optional[Literal['open', 'closed', 'all']]) β
assignee (Optional[str]) β
creator (Optional[str]) β
mentioned (Optional[str]) β
labels (Optional[List[str]]) β
sort (Optional[Literal['created', 'updated', 'comments']]) β
direction (Optional[Literal['asc', 'desc']]) β
since (Optional[str]) β
Return type
None
attribute assignee: Optional[str] = Noneο
Filter on assigned user. Pass βnoneβ for no user and β*β for any user.
attribute creator: Optional[str] = Noneο
Filter on the user that created the issue.
attribute direction: Optional[Literal['asc', 'desc']] = Noneο
The direction to sort the results by. Can be one of: βascβ, βdescβ.
attribute include_prs: bool = Trueο
If True include Pull Requests in results, otherwise ignore them.
attribute labels: Optional[List[str]] = Noneο
Label names to filter on. Example: bug,ui,@high.
attribute mentioned: Optional[str] = Noneο
Filter on a user thatβs mentioned in the issue.
attribute milestone: Optional[Union[int, Literal['*', 'none']]] = Noneο
If integer is passed, it should be a milestoneβs number field.
If the string β*β is passed, issues with any milestone are accepted.
If the string βnoneβ is passed, issues without milestones are returned.
attribute since: Optional[str] = Noneο
Only show notifications updated after the given time.
This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.
attribute sort: Optional[Literal['created', 'updated', 'comments']] = Noneο
What to sort results by. Can be one of: βcreatedβ, βupdatedβ, βcommentsβ.
Default is βcreatedβ.
attribute state: Optional[Literal['open', 'closed', 'all']] = Noneο
Filter on issue state. Can be one of: βopenβ, βclosedβ, βallβ.
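A minimal usage sketch (the repository and token are illustrative placeholders):
from langchain.document_loaders import GitHubIssuesLoader
loader = GitHubIssuesLoader(
    repo="hwchase17/langchain",          # illustrative "owner/repo" value
    access_token="<your-github-token>",  # illustrative personal access token
    include_prs=False,
    state="open",
    labels=["bug"],
)
docs = loader.load()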
lazy_load()[source]ο
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
load()[source]ο
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
parse_issue(issue)[source]ο
Create Document objects from a list of GitHub issues.
Parameters
issue (dict) β
Return type
langchain.schema.Document
property query_params: strο
property url: strο
class langchain.document_loaders.GitLoader(repo_path, clone_url=None, branch='main', file_filter=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads files from a Git repository into a list of documents.
Repository can be local on disk available at repo_path,
or remote at clone_url that will be cloned to repo_path.
Currently supports only text files.
Each document represents one file in the repository. The path points to
the local Git repository, and the branch specifies the branch to load
files from. By default, it loads from the main branch.
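A minimal usage sketch (the repository path and filter are illustrative):
from langchain.document_loaders import GitLoader
loader = GitLoader(
    repo_path="./my_repo",                    # illustrative local repository
    branch="main",
    file_filter=lambda p: p.endswith(".py"),  # only load Python files
)
docs = loader.load()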
Parameters
repo_path (str) β
clone_url (Optional[str]) β
branch (Optional[str]) β
file_filter (Optional[Callable[[str], bool]]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GitbookLoader(web_page, load_all_paths=False, base_url=None, content_selector='main')[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Load GitBook data.
load from either a single page, or
load all (relative) paths in the navbar.
Parameters
web_page (str) β
load_all_paths (bool) β
base_url (Optional[str]) β
content_selector (str) β
load()[source]ο
Fetch text from one single GitBook page.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GoogleApiClient(credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), service_account_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'))[source]ο
Bases: object
A Generic Google Api Client.
To use, you should have the google_auth_oauthlib, youtube_transcript_api and google
python packages installed.
As the google api expects credentials you need to set up a google account and
register your Service. βhttps://developers.google.com/docs/api/quickstart/pythonβ
Example
from langchain.document_loaders import GoogleApiClient
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
Parameters
credentials_path (pathlib.Path) β
service_account_path (pathlib.Path) β
token_path (pathlib.Path) β
Return type
None
credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')ο
service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')ο
token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')ο
classmethod validate_channel_or_videoIds_is_set(values)[source]ο
Validate that either folder_id or document_ids is set, but not both.
Parameters
values (Dict[str, Any]) β
Return type
Dict[str, Any]
class langchain.document_loaders.GoogleApiYoutubeLoader(google_api_client, channel_name=None, video_ids=None, add_video_info=True, captions_language='en', continue_on_failure=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads all videos from a channel.
To use, you should have the googleapiclient and youtube_transcript_api
python packages installed.
As the service needs a google_api_client, you first have to initialize
the GoogleApiClient.
Additionally, you have to either provide a channel name or a list of video ids.
βhttps://developers.google.com/docs/api/quickstart/pythonβ
Example
from langchain.document_loaders import GoogleApiClient
from langchain.document_loaders import GoogleApiYoutubeLoader
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
loader = GoogleApiYoutubeLoader(
google_api_client=google_api_client,
channel_name = "CodeAesthetic"
)
loader.load()
Parameters
google_api_client (langchain.document_loaders.youtube.GoogleApiClient) β
channel_name (Optional[str]) β
video_ids (Optional[List[str]]) β
add_video_info (bool) β
captions_language (str) β
continue_on_failure (bool) β
Return type
None
google_api_client: langchain.document_loaders.youtube.GoogleApiClientο
channel_name: Optional[str] = Noneο
video_ids: Optional[List[str]] = Noneο
add_video_info: bool = Trueο
captions_language: str = 'en'ο
continue_on_failure: bool = Falseο
classmethod validate_channel_or_videoIds_is_set(values)[source]ο
Validate that either channel_name or video_ids is set, but not both.
Parameters
values (Dict[str, Any]) β
Return type
Dict[str, Any]
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GoogleDriveLoader(*, service_account_key=PosixPath('/home/docs/.credentials/keys.json'), credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'), folder_id=None, document_ids=None, file_ids=None, recursive=False, file_types=None, load_trashed_files=False, file_loader_cls=None, file_loader_kwargs={})[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Loader that loads Google Docs from Google Drive.
Parameters
service_account_key (pathlib.Path) β
credentials_path (pathlib.Path) β
token_path (pathlib.Path) β
folder_id (Optional[str]) β
document_ids (Optional[List[str]]) β
file_ids (Optional[List[str]]) β
recursive (bool) β
file_types (Optional[Sequence[str]]) β
load_trashed_files (bool) β
file_loader_cls (Any) β
file_loader_kwargs (Dict[str, Any]) β
Return type
None
attribute credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')ο
attribute document_ids: Optional[List[str]] = Noneο
attribute file_ids: Optional[List[str]] = Noneο
attribute file_loader_cls: Any = Noneο
attribute file_loader_kwargs: Dict[str, Any] = {}ο
attribute file_types: Optional[Sequence[str]] = Noneο
attribute folder_id: Optional[str] = Noneο
attribute load_trashed_files: bool = Falseο
attribute recursive: bool = Falseο
attribute service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')ο
attribute token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')ο
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GutenbergLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses urllib to load .txt web files.
Parameters
file_path (str) β
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.HNLoader(web_path, header_template=None, verify=True)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Load Hacker News data from either main page results or the comments page.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
load()[source]ο
Get important HN webpage information.
Components are:
title
content
source url,
time of post
author of the post
number of comments
rank of the post
Return type
List[langchain.schema.Document]
load_comments(soup_info)[source]ο
Load comments from a HN post.
Parameters
soup_info (Any) β
Return type
List[langchain.schema.Document]
load_results(soup)[source]ο
Load items from an HN page.
Parameters
soup (Any) β
Return type
List[langchain.schema.Document]
class langchain.document_loaders.HuggingFaceDatasetLoader(path, page_content_column='text', name=None, data_dir=None, data_files=None, cache_dir=None, keep_in_memory=None, save_infos=False, use_auth_token=None, num_proc=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from the Hugging Face Hub.
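A minimal usage sketch (the dataset name and column are illustrative):
from langchain.document_loaders import HuggingFaceDatasetLoader
loader = HuggingFaceDatasetLoader("imdb", page_content_column="text")  # illustrative public dataset
docs = loader.load()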
Parameters
path (str) β
page_content_column (str) β
name (Optional[str]) β
data_dir (Optional[str]) β
data_files (Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]]) β
cache_dir (Optional[str]) β
keep_in_memory (Optional[bool]) β
save_infos (bool) β
use_auth_token (Optional[Union[bool, str]]) β
num_proc (Optional[int]) β
lazy_load()[source]ο
Load documents lazily.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.IFixitLoader(web_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load iFixit repair guides, device wikis and answers.
iFixit is the largest, open repair community on the web. The site contains nearly
100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is
licensed under CC-BY.
This loader will allow you to download the text of a repair guide, text of Q&Aβs
and wikis from devices on iFixit using their open APIs and web scraping.
Parameters
web_path (str) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
static load_suggestions(query='', doc_type='all')[source]ο
Parameters
query (str) β
doc_type (str) β
Return type
List[langchain.schema.Document]
load_questions_and_answers(url_override=None)[source]ο
Parameters
url_override (Optional[str]) β
Return type
List[langchain.schema.Document]
load_device(url_override=None, include_guides=True)[source]ο
Parameters
url_override (Optional[str]) β
include_guides (bool) β
Return type
List[langchain.schema.Document]
load_guide(url_override=None)[source]ο
Parameters
url_override (Optional[str]) β
Return type
List[langchain.schema.Document]
class langchain.document_loaders.IMSDbLoader(web_path, header_template=None, verify=True)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads IMSDb webpages.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
load()[source]ο
Load webpage.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ImageCaptionLoader(path_images, blip_processor='Salesforce/blip-image-captioning-base', blip_model='Salesforce/blip-image-captioning-base')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads the captions of an image
Parameters
path_images (Union[str, List[str]]) β
blip_processor (str) β
blip_model (str) β
load()[source]ο
Load from a list of image files
Return type
List[langchain.schema.Document]
class langchain.document_loaders.IuguLoader(resource, api_token=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches data from IUGU.
Parameters
resource (str) β
api_token (Optional[str]) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.JSONLoader(file_path, jq_schema, content_key=None, metadata_func=None, text_content=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a JSON file, using a provided jq schema to extract the text into
documents.
Example
[{βtextβ: β¦}, {βtextβ: β¦}, {βtextβ: β¦}] -> schema = .[].text
{βkeyβ: [{βtextβ: β¦}, {βtextβ: β¦}, {βtextβ: β¦}]} -> schema = .key[].text
[ββ, ββ, ββ] -> schema = .[]
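A minimal usage sketch (the file path and jq schema are illustrative):
from langchain.document_loaders import JSONLoader
loader = JSONLoader(
    file_path="chat.json",            # illustrative path
    jq_schema=".messages[].content",  # jq expression selecting the text to load
)
docs = loader.load()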
Parameters
file_path (Union[str, pathlib.Path]) β
jq_schema (str) β
content_key (Optional[str]) β
metadata_func (Optional[Callable[[Dict, Dict], Dict]]) β
text_content (bool) β
load()[source]ο
Load and return documents from the JSON file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.JoplinLoader(access_token=None, port=41184, host='localhost')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for βWeb Clipperβ in the app settings).
To get the access token, you need to go to the Web Clipper options and
under βAdvanced Optionsβ you will find the access token.
You can find more information about the Web Clipper service here:
https://joplinapp.org/clipper/
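A minimal usage sketch (the token is an illustrative placeholder; Joplin must be running with the Web Clipper service enabled):
from langchain.document_loaders import JoplinLoader
loader = JoplinLoader(access_token="<your-web-clipper-token>")
docs = loader.load()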
Parameters
access_token (Optional[str]) β
port (int) β
host (str) β
Return type
None
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MWDumpLoader(file_path, encoding='utf8')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load MediaWiki dump from XML file
Example
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader(
file_path="myWiki.xml",
encoding="utf8"
)
docs = loader.load()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, chunk_overlap=0
)
texts = text_splitter.split_documents(docs)
Parameters
file_path (str) β XML local file path
encoding (str, optional) β Charset encoding, defaults to βutf8β
load()[source]ο
Load from file path.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MastodonTootsLoader(mastodon_accounts, number_toots=100, exclude_replies=False, access_token=None, api_base_url='https://mastodon.social')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Mastodon toots loader.
Parameters
mastodon_accounts (Sequence[str]) β
number_toots (Optional[int]) β
exclude_replies (bool) β
access_token (Optional[str]) β
api_base_url (str) β
load()[source]ο
Load toots into documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MathpixPDFLoader(file_path, processed_file_format='mmd', max_wait_time_seconds=500, should_clean_pdf=False, **kwargs)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Parameters
file_path (str) β
processed_file_format (str) β
max_wait_time_seconds (int) β
should_clean_pdf (bool) β
kwargs (Any) β
Return type
None
property headers: dictο
property url: strο
property data: dictο
send_pdf()[source]ο
Return type
str
wait_for_processing(pdf_id)[source]ο
Parameters
pdf_id (str) β
Return type
None
get_processed_pdf(pdf_id)[source]ο
Parameters
pdf_id (str) β
Return type
str
clean_pdf(contents)[source]ο
Parameters
contents (str) β
Return type
str
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MaxComputeLoader(query, api_wrapper, *, page_content_columns=None, metadata_columns=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from Alibaba Cloud MaxCompute table into documents.
Parameters
query (str) β
api_wrapper (MaxComputeAPIWrapper) β
page_content_columns (Optional[Sequence[str]]) β
metadata_columns (Optional[Sequence[str]]) β
classmethod from_params(query, endpoint, project, *, access_id=None, secret_access_key=None, **kwargs)[source]ο
Convenience constructor that builds the MaxCompute API wrapper from given parameters.
Parameters
query (str) β SQL query to execute.
endpoint (str) β MaxCompute endpoint.
project (str) β A project is a basic organizational unit of MaxCompute, which is
similar to a database.
access_id (Optional[str]) β MaxCompute access ID. Should be passed in directly or set as the
environment variable MAX_COMPUTE_ACCESS_ID.
secret_access_key (Optional[str]) β MaxCompute secret access key. Should be passed in
directly or set as the environment variable
MAX_COMPUTE_SECRET_ACCESS_KEY.
kwargs (Any) β
Return type
langchain.document_loaders.max_compute.MaxComputeLoader
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MergedDataLoader(loaders)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Merge documents from a list of loaders
Parameters
loaders (List) β
lazy_load()[source]ο
Lazy load docs from each individual loader.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load docs.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses beautiful soup to parse MHTML files.
Parameters
file_path (str) β
open_encoding (Optional[str]) β
bs_kwargs (Optional[dict]) β
get_text_separator (str) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ModernTreasuryLoader(resource, organization_id=None, api_key=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches data from Modern Treasury.
Parameters
resource (str) β
organization_id (Optional[str]) β
api_key (Optional[str]) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.NotebookLoader(path, include_outputs=False, max_output_length=10, remove_newline=False, traceback=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads .ipynb notebook files.
Parameters
path (str) β
include_outputs (bool) β
max_output_length (int) β
remove_newline (bool) β
traceback (bool) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.NotionDBLoader(integration_token, database_id, request_timeout_sec=10)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Notion DB Loader.
Reads content from pages within a Notion Database.
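A minimal usage sketch (the token and database id are illustrative placeholders):
from langchain.document_loaders import NotionDBLoader
loader = NotionDBLoader(
    integration_token="<your-notion-integration-token>",
    database_id="<your-database-id>",
)
docs = loader.load()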
Parameters
integration_token (str) β Notion integration token.
database_id (str) β Notion database id.
request_timeout_sec (Optional[int]) β Timeout for Notion requests in seconds.
Return type
None
load()[source]ο
Load documents from the Notion database.
Returns
List of documents.
Return type
List[langchain.schema.Document]
load_page(page_summary)[source]ο
Read a page.
Parameters
page_summary (Dict[str, Any]) β
Return type
langchain.schema.Document
class langchain.document_loaders.NotionDirectoryLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Notion directory dump.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ObsidianLoader(path, encoding='UTF-8', collect_metadata=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Obsidian files from disk.
Parameters
path (str) β
encoding (str) β
collect_metadata (bool) β
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)ο
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.OneDriveFileLoader(*, file)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Parameters
file (File) β
Return type
None
attribute file: File [Required]ο
load()[source]ο
Load Documents
Return type
List[langchain.schema.Document]
class langchain.document_loaders.OneDriveLoader(*, settings=None, drive_id, folder_path=None, object_ids=None, auth_with_token=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Parameters
settings (langchain.document_loaders.onedrive._OneDriveSettings) β
drive_id (str) β
folder_path (Optional[str]) β
object_ids (Optional[List[str]]) β
auth_with_token (bool) β
Return type
None
attribute auth_with_token: bool = Falseο
attribute drive_id: str [Required]ο
attribute folder_path: Optional[str] = Noneο
attribute object_ids: Optional[List[str]] = Noneο
attribute settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional]ο
load()[source]ο
Loads all supported document files from the specified OneDrive drive and
returns a list of Document objects.
Returns
A list of Document objects
representing the loaded documents.
Return type
List[Document]
Raises
ValueError β If the specified drive ID does not correspond to a drive in the
OneDrive storage.
class langchain.document_loaders.OnlinePDFLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loader that loads online PDFs.
Parameters
file_path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.OutlookMessageLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Outlook Message files using extract_msg.
https://github.com/TeamMsgExtractor/msg-extractor
Parameters
file_path (str) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.OpenCityDataLoader(city_id, dataset_id, limit)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Open city data.
Parameters
city_id (str) β
dataset_id (str) β
limit (int) β
lazy_load()[source]ο
Lazy load records.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load records.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PDFMinerLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loader that uses PDFMiner to load PDF files.
Parameters
file_path (str) β
Return type
None
load()[source]ο
Eagerly load the content.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
Lazily load documents.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.PDFMinerPDFasHTMLLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loader that uses PDFMiner to load PDF files as HTML content.
Parameters
file_path (str) β
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PDFPlumberLoader(file_path, text_kwargs=None)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loader that uses pdfplumber to load PDF files.
Parameters
file_path (str) β
text_kwargs (Optional[Mapping[str, Any]]) β
Return type
None
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
langchain.document_loaders.PagedPDFSplitterο
alias of langchain.document_loaders.pdf.PyPDFLoader
class langchain.document_loaders.PlaywrightURLLoader(urls, continue_on_failure=True, headless=True, remove_selectors=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses Playwright to load a page and unstructured to load the html.
This is useful for loading pages that require javascript to render.
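A minimal usage sketch (the URL and selectors are illustrative):
from langchain.document_loaders import PlaywrightURLLoader
loader = PlaywrightURLLoader(
    urls=["https://example.com"],           # illustrative URL
    remove_selectors=["header", "footer"],  # strip page chrome before extraction
)
docs = loader.load()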
Parameters
urls (List[str]) β
continue_on_failure (bool) β
headless (bool) β
remove_selectors (Optional[List[str]]) β
urlsο
List of URLs to load.
Type
List[str]
continue_on_failureο
If True, continue loading other URLs on failure.
Type
bool
headlessο
If True, the browser will run in headless mode.
Type
bool
load()[source]ο
Load the specified URLs using Playwright and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
class langchain.document_loaders.PsychicLoader(api_key, connector_id, connection_id)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads documents from Psychic.dev.
Parameters
api_key (str) β
connector_id (str) β
connection_id (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PyMuPDFLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loader that uses PyMuPDF to load PDF files.
Parameters
file_path (str) β
Return type
None
load(**kwargs)[source]ο
Load file.
Parameters
kwargs (Optional[Any]) β
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PyPDFDirectoryLoader(path, glob='**/[!.]*.pdf', silent_errors=False, load_hidden=False, recursive=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a directory of PDF files with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
Parameters
path (str) β
glob (str) β
silent_errors (bool) β
load_hidden (bool) β
recursive (bool) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PyPDFLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loads a PDF with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
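A minimal usage sketch (the file path is illustrative):
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("example.pdf")  # illustrative local PDF
pages = loader.load()                # one Document per page, with page numbers in metadata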
Parameters
file_path (str) β
Return type
None
load()[source]ο
Load given path as pages.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
Lazy load given path as pages.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.PyPDFium2Loader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loads a PDF with pypdfium2 and chunks at character level.
Parameters
file_path (str) β
load()[source]ο
Load given path as pages.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
Lazy load given path as pages.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.PySparkDataFrameLoader(spark_session=None, df=None, page_content_column='text', fraction_of_memory=0.1)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load PySpark DataFrames
Parameters
spark_session (Optional[SparkSession]) β
df (Optional[Any]) β
page_content_column (str) β
fraction_of_memory (float) β
get_num_rows()[source]ο
Gets the number of βfeasibleβ rows for the DataFrame
Return type
Tuple[int, int]
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load from the dataframe.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PythonLoader(file_path)[source]ο
Bases: langchain.document_loaders.text.TextLoader
Load Python files, respecting any non-default encoding if specified.
Parameters
file_path (str) β
class langchain.document_loaders.ReadTheDocsLoader(path, encoding=None, errors=None, custom_html_tag=None, **kwargs)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads ReadTheDocs documentation directory dump.
Parameters
path (Union[str, pathlib.Path]) β
encoding (Optional[str]) β
errors (Optional[str]) β
custom_html_tag (Optional[Tuple[str, dict]]) β
kwargs (Optional[Any]) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.RecursiveUrlLoader(url, exclude_dirs=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads all child links from a given url.
Parameters
url (str) β
exclude_dirs (Optional[str]) β
Return type
None
get_child_links_recursive(url, visited=None)[source]ο
Recursively get all child links starting with the path of the input URL.
Parameters
url (str) β
visited (Optional[Set[str]]) β
Return type
Set[str]
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load web pages.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.RedditPostsLoader(client_id, client_secret, user_agent, search_queries, mode, categories=['new'], number_posts=10)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Reddit posts loader.
Read posts on a subreddit.
First you need to go to
https://www.reddit.com/prefs/apps/
and create your application
Parameters
client_id (str) β
client_secret (str) β
user_agent (str) β
search_queries (Sequence[str]) β
mode (str) β
categories (Sequence[str]) β
number_posts (Optional[int]) β
load()[source]ο
Load reddits.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.RoamLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Roam files from disk.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.S3DirectoryLoader(bucket, prefix='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from s3.
Parameters
bucket (str) β
prefix (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.S3FileLoader(bucket, key)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from s3.
Parameters
bucket (str) β
key (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.SRTLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader for .srt (subtitle) files.
Parameters
file_path (str) β
load()[source]ο
Load using pysrt file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.SeleniumURLLoader(urls, continue_on_failure=True, browser='chrome', binary_location=None, executable_path=None, headless=True, arguments=[])[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses Selenium to load a page and unstructured to load the html.
This is useful for loading pages that require javascript to render.
Parameters
urls (List[str]) β
continue_on_failure (bool) β
browser (Literal['chrome', 'firefox']) β
binary_location (Optional[str]) β
executable_path (Optional[str]) β
headless (bool) β
arguments (List[str]) β
urlsο
List of URLs to load.
Type
List[str]
continue_on_failureο
If True, continue loading other URLs on failure.
Type
bool
browserο
The browser to use, either βchromeβ or βfirefoxβ.
Type
str
binary_locationο
The location of the browser binary.
Type
Optional[str]
executable_pathο
The path to the browser executable.
Type
Optional[str]
headlessο
If True, the browser will run in headless mode.
Type
bool
argumentsο
List of arguments to pass to the browser.
Type
List[str]
load()[source]ο
Load the specified URLs using Selenium and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
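A minimal usage sketch (not from the upstream docstring), assuming selenium, a matching browser driver, and unstructured are installed; the URL is illustrative.
Example
from langchain.document_loaders import SeleniumURLLoader
urls = ["https://www.example.com/js-rendered-page"]  # illustrative URL that needs JavaScript to render
loader = SeleniumURLLoader(urls=urls, browser="chrome", headless=True)
docs = loader.load()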
class langchain.document_loaders.SitemapLoader(web_path, filter_urls=None, parsing_function=None, blocksize=None, blocknum=0, meta_function=None, is_local=False)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that fetches a sitemap and loads those URLs.
Parameters
web_path (str) β
filter_urls (Optional[List[str]]) β
parsing_function (Optional[Callable]) β
blocksize (Optional[int]) β
blocknum (int) β
meta_function (Optional[Callable]) β
is_local (bool) β
parse_sitemap(soup)[source]ο
Parse sitemap xml and load into a list of dicts.
Parameters
soup (Any) β
Return type
List[dict]
load()[source]ο
Load sitemap.
Return type
List[langchain.schema.Document]
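A minimal usage sketch (not from the upstream docstring); the sitemap URL and the filter pattern are illustrative placeholders.
Example
from langchain.document_loaders import SitemapLoader
loader = SitemapLoader("https://python.langchain.com/sitemap.xml", filter_urls=["https://python.langchain.com/en/latest/"])  # illustrative sitemap and filter
docs = loader.load()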
class langchain.document_loaders.SlackDirectoryLoader(zip_path, workspace_url=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader for loading documents from a Slack directory dump.
Parameters
zip_path (str) β
workspace_url (Optional[str]) β
load()[source]ο
Load and return documents from the Slack directory dump.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.SnowflakeLoader(query, user, password, account, warehouse, role, database, schema, parameters=None, page_content_columns=None, metadata_columns=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from Snowflake into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Parameters
query (str) β
user (str) β
password (str) β
account (str) β
warehouse (str) β
role (str) β
database (str) β
schema (str) β
parameters (Optional[Dict[str, Any]]) β
page_content_columns (Optional[List[str]]) β
metadata_columns (Optional[List[str]]) β
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.SpreedlyLoader(access_token, resource)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches data from Spreedly API.
Parameters
access_token (str) β
resource (str) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.StripeLoader(resource, access_token=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches data from Stripe.
Parameters
resource (str) β
access_token (Optional[str]) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.TelegramChatApiLoader(chat_entity=None, api_id=None, api_hash=None, username=None, file_path='telegram_data.json')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Telegram chat json directory dump.
Parameters
chat_entity (Optional[EntityLike]) β
api_id (Optional[int]) β
api_hash (Optional[str]) β
username (Optional[str]) β
file_path (str) β
async fetch_data_from_telegram()[source]ο
Fetch data from Telegram API and save it as a JSON file.
Return type
None
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.TelegramChatFileLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Telegram chat json directory dump.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
langchain.document_loaders.TelegramChatLoaderο
alias of langchain.document_loaders.telegram.TelegramChatFileLoader
class langchain.document_loaders.TextLoader(file_path, encoding=None, autodetect_encoding=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load text files.
Parameters
file_path (str) β Path to the file to load.
encoding (Optional[str]) β File encoding to use. If None, the file will be loaded
with the default system encoding.
autodetect_encoding (bool) β Whether to try to autodetect the file encoding
if the specified encoding fails.
load()[source]ο
Load from file path.
Return type
List[langchain.schema.Document]
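A minimal usage sketch (not from the upstream docstring); the file path is a hypothetical placeholder.
Example
from langchain.document_loaders import TextLoader
loader = TextLoader("notes.txt", autodetect_encoding=True)  # hypothetical path
docs = loader.load()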
class langchain.document_loaders.ToMarkdownLoader(url, api_key)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads HTML to markdown using 2markdown.
Parameters
url (str) β
api_key (str) β
lazy_load()[source]ο
Lazily load the file.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.TomlLoader(source)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
A TOML document loader that inherits from the BaseLoader class.
This class can be initialized with either a single source file or a source
directory containing TOML files.
Parameters
source (Union[str, pathlib.Path]) β
load()[source]ο
Load and return all documents.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
Lazily load the TOML documents from the source file or directory.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.TrelloLoader(client, board_name, *, include_card_name=True, include_comments=True, include_checklist=True, card_filter='all', extra_metadata=('due_date', 'labels', 'list', 'closed'))[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Trello loader. Reads all cards from a Trello board.
Parameters
client (TrelloClient) β
board_name (str) β
include_card_name (bool) β
include_comments (bool) β
include_checklist (bool) β
card_filter (Literal['closed', 'open', 'all']) β
extra_metadata (Tuple[str, ...]) β
classmethod from_credentials(board_name, *, api_key=None, token=None, **kwargs)[source]ο
Convenience constructor that builds TrelloClient init param for you.
Parameters
board_name (str) β The name of the Trello board.
api_key (Optional[str]) β Trello API key. Can also be specified as environment variable
TRELLO_API_KEY.
token (Optional[str]) β Trello token. Can also be specified as environment variable
TRELLO_TOKEN.
include_card_name β Whether to include the name of the card in the document.
include_comments β Whether to include the comments on the card in the
document.
include_checklist β Whether to include the checklist on the card in the
document.
card_filter β Filter on card status. Valid values are βclosedβ, βopenβ,
βallβ.
extra_metadata β List of additional metadata fields to include as document
metadata. Valid values are βdue_dateβ, βlabelsβ, βlistβ, βclosedβ.
kwargs (Any) β
Return type
langchain.document_loaders.trello.TrelloLoader
load()[source]ο
Loads all cards from the specified Trello board.
You can filter the cards, metadata and text included by using the optional
parameters.
Returns
A list of documents, one for each card in the board.
Return type
List[langchain.schema.Document]
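A minimal usage sketch (not from the upstream docstring), assuming a Trello client package such as py-trello is installed; the board name and credentials are placeholders.
Example
from langchain.document_loaders import TrelloLoader
loader = TrelloLoader.from_credentials("My Board", api_key="TRELLO_API_KEY", token="TRELLO_TOKEN", card_filter="open")  # placeholder credentials
docs = loader.load()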
class langchain.document_loaders.TwitterTweetLoader(auth_handler, twitter_users, number_tweets=100)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Twitter tweets loader.
Read tweets from the given Twitter user handles.
First you need to go to
https://developer.twitter.com/en/docs/twitter-api/getting-started/getting-access-to-the-twitter-api
to get your token, and create a v2 version of the app.
Parameters
auth_handler (Union[OAuthHandler, OAuth2BearerHandler]) β
twitter_users (Sequence[str]) β
number_tweets (Optional[int]) β
load()[source]ο
Load tweets.
Return type
List[langchain.schema.Document]
classmethod from_bearer_token(oauth2_bearer_token, twitter_users, number_tweets=100)[source]ο
Create a TwitterTweetLoader from OAuth2 bearer token.
Parameters
oauth2_bearer_token (str) β
twitter_users (Sequence[str]) β
number_tweets (Optional[int]) β
Return type
langchain.document_loaders.twitter.TwitterTweetLoader
classmethod from_secrets(access_token, access_token_secret, consumer_key, consumer_secret, twitter_users, number_tweets=100)[source]ο
Create a TwitterTweetLoader from access tokens and secrets.
Parameters
access_token (str) β
access_token_secret (str) β
consumer_key (str) β
consumer_secret (str) β
twitter_users (Sequence[str]) β
number_tweets (Optional[int]) β
Return type
langchain.document_loaders.twitter.TwitterTweetLoader
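A minimal usage sketch (not from the upstream docstring), assuming a Twitter API v2 bearer token and the tweepy package, which this loader is assumed to use under the hood.
Example
from langchain.document_loaders import TwitterTweetLoader
loader = TwitterTweetLoader.from_bearer_token(oauth2_bearer_token="YOUR_BEARER_TOKEN", twitter_users=["hwchase17"], number_tweets=50)  # placeholder token
docs = loader.load()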
class langchain.document_loaders.UnstructuredAPIFileIOLoader(file, mode='single', url='https://api.unstructured.io/general/v0/general', api_key='', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileIOLoader
Loader that uses the unstructured web API to load file IO objects.
Parameters
file (Union[IO, Sequence[IO]]) β
mode (str) β
url (str) β
api_key (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredAPIFileLoader(file_path='', mode='single', url='https://api.unstructured.io/general/v0/general', api_key='', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses the unstructured web API to load files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
url (str) β
api_key (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredCSVLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load CSV files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredEPubLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load epub files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredEmailLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load email files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredExcelLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load Microsoft Excel files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredFileIOLoader(file, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredBaseLoader
Loader that uses unstructured to load file IO objects.
Parameters
file (Union[IO, Sequence[IO]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredFileLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredBaseLoader
Loader that uses unstructured to load files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredHTMLLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load HTML files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredImageLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load image files, such as PNGs and JPGs.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredMarkdownLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load markdown files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredODTLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load open office ODT files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredPDFLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load PDF files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredPowerPointLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load powerpoint files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredRSTLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load RST files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredRTFLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load rtf files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredURLLoader(urls, continue_on_failure=True, mode='single', show_progress_bar=False, **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses unstructured to load the HTML content of URLs.
Parameters
urls (List[str]) β
continue_on_failure (bool) β
mode (str) β
show_progress_bar (bool) β
unstructured_kwargs (Any) β
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.UnstructuredWordDocumentLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load word documents.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredXMLLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load XML files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.WeatherDataLoader(client, places)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Weather Reader.
Reads the forecast & current weather of any location using OpenWeatherMapβs free
API. Check out https://openweathermap.org/appid for more on how to generate a free
OpenWeatherMap API key.
Parameters
client (OpenWeatherMapAPIWrapper) β
places (Sequence[str]) β
Return type
None
classmethod from_params(places, *, openweathermap_api_key=None)[source]ο
Parameters
places (Sequence[str]) β
openweathermap_api_key (Optional[str]) β
Return type
langchain.document_loaders.weather.WeatherDataLoader
lazy_load()[source]ο
Lazily load weather data for the given locations.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load weather data for the given locations.
Return type
List[langchain.schema.Document]
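A minimal usage sketch (not from the upstream docstring), assuming the pyowm package and an OpenWeatherMap API key; the place names and key are illustrative placeholders.
Example
from langchain.document_loaders import WeatherDataLoader
loader = WeatherDataLoader.from_params(places=["Pune", "London"], openweathermap_api_key="YOUR_OPENWEATHERMAP_API_KEY")  # placeholder key
docs = loader.load()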
class langchain.document_loaders.WebBaseLoader(web_path, header_template=None, verify=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses urllib and beautiful soup to load webpages.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
requests_per_second: int = 2ο
Max number of concurrent requests to make.
default_parser: str = 'html.parser'ο
Default parser to use for BeautifulSoup.
requests_kwargs: Dict[str, Any] = {}ο
kwargs for requests
bs_get_text_kwargs: Dict[str, Any] = {}ο
kwargs for beautifulsoup4 get_text
web_paths: List[str]ο
property web_path: strο
async fetch_all(urls)[source]ο
Fetch all urls concurrently with rate limiting.
Parameters
urls (List[str]) β
Return type
Any
scrape_all(urls, parser=None)[source]ο
Fetch all urls, then return soups for all results.
Parameters
urls (List[str]) β
parser (Optional[str]) β
Return type
List[Any]
scrape(parser=None)[source]ο
Scrape data from webpage and return it in BeautifulSoup format.
Parameters
parser (Optional[str]) β
Return type
Any
lazy_load()[source]ο
Lazy load text from the url(s) in web_path.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load text from the url(s) in web_path.
Return type
List[langchain.schema.Document]
aload()[source]ο
Load text from the urls in web_path async into Documents.
Return type
List[langchain.schema.Document]
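A minimal usage sketch (not from the upstream docstring), assuming beautifulsoup4 is installed; the URL is illustrative.
Example
from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://www.example.com/index.html")  # illustrative URL
loader.requests_per_second = 1  # throttle concurrent fetches
docs = loader.load()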
class langchain.document_loaders.WhatsAppChatLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads WhatsApp messages text file.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.WikipediaLoader(query, lang='en', load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from www.wikipedia.org into a list of Documents.
The hard limit on the number of downloaded Documents is 300 for now.
Each wiki page represents one Document.
Parameters
query (str) β
lang (str) β
load_max_docs (Optional[int]) β
load_all_available_meta (Optional[bool]) β
doc_content_chars_max (Optional[int]) β
load()[source]ο
Loads the query result from Wikipedia into a list of Documents.
Returns
A list of Document objects representing the loaded Wikipedia pages.
Return type
List[Document]
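A minimal usage sketch (not from the upstream docstring), assuming the wikipedia package is installed; the query is illustrative.
Example
from langchain.document_loaders import WikipediaLoader
docs = WikipediaLoader(query="LangChain", load_max_docs=2).load()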
class langchain.document_loaders.YoutubeAudioLoader(urls, save_dir)[source]ο
Bases: langchain.document_loaders.blob_loaders.schema.BlobLoader
Load YouTube urls as audio file(s).
Parameters
urls (List[str]) β
save_dir (str) β
yield_blobs()[source]ο
Yield audio blobs for each url.
Return type
Iterable[langchain.document_loaders.blob_loaders.schema.Blob]
class langchain.document_loaders.YoutubeLoader(video_id, add_video_info=False, language='en', translation='en', continue_on_failure=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Youtube transcripts.
Parameters
video_id (str) β
add_video_info (bool) β
language (Union[str, Sequence[str]]) β
translation (str) β
continue_on_failure (bool) β
static extract_video_id(youtube_url)[source]ο
Extract video id from common YT urls.
Parameters
youtube_url (str) β
Return type
str
classmethod from_youtube_url(youtube_url, **kwargs)[source]ο
Given youtube URL, load video.
Parameters
youtube_url (str) β
kwargs (Any) β
Return type
langchain.document_loaders.youtube.YoutubeLoader
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
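A minimal usage sketch (not from the upstream docstring), assuming the youtube-transcript-api package (and pytube when add_video_info=True); the video id is a placeholder.
Example
from langchain.document_loaders import YoutubeLoader
loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=<VIDEO_ID>", add_video_info=True)  # placeholder video id
docs = loader.load()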
Experimentalο
This module contains experimental modules and reproductions of existing work using LangChain primitives.
Autonomous agentsο
Here, we document the BabyAGI and AutoGPT classes from the langchain.experimental module.
class langchain.experimental.BabyAGI(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, task_list=None, task_creation_chain, task_prioritization_chain, execution_chain, task_id_counter=1, vectorstore, max_iterations=None)[source]ο
Bases: langchain.chains.base.Chain, pydantic.main.BaseModel
Controller model for the BabyAGI agent.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
task_list (collections.deque) β
task_creation_chain (langchain.chains.base.Chain) β
task_prioritization_chain (langchain.chains.base.Chain) β
execution_chain (langchain.chains.base.Chain) β
task_id_counter (int) β
vectorstore (langchain.vectorstores.base.VectorStore) β
max_iterations (Optional[int]) β
Return type
None
model Config[source]ο
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = Trueο
property input_keys: List[str]ο
Input keys this chain expects.
property output_keys: List[str]ο
Output keys this chain expects.
get_next_task(result, task_description, objective)[source]ο
Get the next task.
Parameters
result (str) β
task_description (str) β
objective (str) β
Return type
List[Dict]
prioritize_tasks(this_task_id, objective)[source]ο
Prioritize tasks.
Parameters
this_task_id (int) β
objective (str) β
Return type
List[Dict]
execute_task(objective, task, k=5)[source]ο
Execute a task.
Parameters
objective (str) β
task (str) β
k (int) β
Return type
str
classmethod from_llm(llm, vectorstore, verbose=False, task_execution_chain=None, **kwargs)[source]ο
Initialize the BabyAGI Controller.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
vectorstore (langchain.vectorstores.base.VectorStore) β
verbose (bool) β
task_execution_chain (Optional[langchain.chains.base.Chain]) β
kwargs (Dict[str, Any]) β
Return type
langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI
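A minimal construction sketch in the spirit of the LangChain documentation, not a verbatim excerpt from it; it assumes the openai and faiss-cpu packages are installed, OPENAI_API_KEY is set, and that 1536 is the dimensionality of the OpenAI embeddings used here.
Example
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.docstore import InMemoryDocstore
from langchain.vectorstores import FAISS
from langchain.experimental import BabyAGI
import faiss
embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(1536)  # assumed embedding dimension for OpenAI embeddings
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})
baby_agi = BabyAGI.from_llm(llm=OpenAI(temperature=0), vectorstore=vectorstore, max_iterations=3)
baby_agi({"objective": "Write a short weather report for San Francisco today"})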
class langchain.experimental.AutoGPT(ai_name, memory, chain, output_parser, tools, feedback_tool=None, chat_history_memory=None)[source]ο
Bases: object
Agent class for interacting with Auto-GPT.
Parameters
ai_name (str) β
memory (VectorStoreRetriever) β
chain (LLMChain) β
output_parser (BaseAutoGPTOutputParser) β
tools (List[BaseTool]) β
feedback_tool (Optional[HumanInputRun]) β
chat_history_memory (Optional[BaseChatMessageHistory]) β
Generative agentsο
Here, we document the GenerativeAgent and GenerativeAgentMemory classes from the langchain.experimental module.
class langchain.experimental.GenerativeAgent(*, name, age=None, traits='N/A', status, memory, llm, verbose=False, summary='', summary_refresh_seconds=3600, last_refreshed=None, daily_summaries=None)[source]ο
Bases: pydantic.main.BaseModel
A character with memory and innate characteristics.
Parameters
name (str) β
age (Optional[int]) β
traits (str) β
status (str) β
memory (langchain.experimental.generative_agents.memory.GenerativeAgentMemory) β
llm (langchain.base_language.BaseLanguageModel) β
verbose (bool) β
summary (str) β
summary_refresh_seconds (int) β
last_refreshed (datetime.datetime) β
daily_summaries (List[str]) β
Return type
None
attribute name: str [Required]ο
The characterβs name.
attribute age: Optional[int] = Noneο
The optional age of the character.
attribute traits: str = 'N/A'ο
Permanent traits to ascribe to the character.
attribute status: str [Required]ο
The traits of the character you wish not to change.
attribute memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory [Required]ο
The memory object that combines relevance, recency, and βimportanceβ.
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
The underlying language model.
attribute summary: str = ''ο
Stateful self-summary generated via reflection on the characterβs memory.
attribute summary_refresh_seconds: int = 3600ο
How frequently to re-generate the summary.
attribute last_refreshed: datetime.datetime [Optional]ο
The last time the characterβs summary was regenerated.
attribute daily_summaries: List[str] [Optional]ο
Summary of the events in the plan that the agent took.
model Config[source]ο
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = Trueο
summarize_related_memories(observation)[source]ο
Summarize memories that are most relevant to an observation.
Parameters
observation (str) β
Return type
str
generate_reaction(observation, now=None)[source]ο
React to a given observation.
Parameters
observation (str) β
now (Optional[datetime.datetime]) β
Return type
Tuple[bool, str]
generate_dialogue_response(observation, now=None)[source]ο
React to a given observation.
Parameters
observation (str) β
now (Optional[datetime.datetime]) β
Return type
Tuple[bool, str]
get_summary(force_refresh=False, now=None)[source]ο
Return a descriptive summary of the agent.
Parameters
force_refresh (bool) β
now (Optional[datetime.datetime]) β
Return type
str
get_full_header(force_refresh=False, now=None)[source]ο
Return a full header of the agentβs status, summary, and current time.
Parameters
force_refresh (bool) β
now (Optional[datetime.datetime]) β
Return type
str
class langchain.experimental.GenerativeAgentMemory(*, llm, memory_retriever, verbose=False, reflection_threshold=None, current_plan=[], importance_weight=0.15, aggregate_importance=0.0, max_tokens_limit=1200, queries_key='queries', most_recent_memories_token_key='recent_memories_token', add_memory_key='add_memory', relevant_memories_key='relevant_memories', relevant_memories_simple_key='relevant_memories_simple', most_recent_memories_key='most_recent_memories', now_key='now', reflecting=False)[source]ο
Bases: langchain.schema.BaseMemory
Parameters
llm (langchain.base_language.BaseLanguageModel) β
memory_retriever (langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever) β
verbose (bool) β
reflection_threshold (Optional[float]) β
current_plan (List[str]) β
importance_weight (float) β
aggregate_importance (float) β
max_tokens_limit (int) β
queries_key (str) β
most_recent_memories_token_key (str) β
add_memory_key (str) β
relevant_memories_key (str) β
relevant_memories_simple_key (str) β
most_recent_memories_key (str) β
now_key (str) β
reflecting (bool) β
Return type
None
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
The core language model.
attribute memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]ο
The retriever to fetch related memories.
attribute reflection_threshold: Optional[float] = Noneο
When aggregate_importance exceeds reflection_threshold, stop to reflect.
attribute current_plan: List[str] = []ο
The current plan of the agent.
attribute importance_weight: float = 0.15ο
How much weight to assign the memory importance.
attribute aggregate_importance: float = 0.0ο
Track the sum of the βimportanceβ of recent memories.
Triggers reflection when it reaches reflection_threshold.
pause_to_reflect(now=None)[source]ο
Reflect on recent observations and generate βinsightsβ.
Parameters
now (Optional[datetime.datetime]) β
Return type
List[str]
add_memories(memory_content, now=None)[source]ο
Add observations or memories to the agentβs memory.
Parameters
memory_content (str) β
now (Optional[datetime.datetime]) β
Return type
List[str]
add_memory(memory_content, now=None)[source]ο
Add an observation or memory to the agentβs memory.
Parameters
memory_content (str) β
now (Optional[datetime.datetime]) β
Return type
List[str]
fetch_memories(observation, now=None)[source]ο
Fetch related memories.
Parameters
observation (str) β
now (Optional[datetime.datetime]) β
Return type
List[langchain.schema.Document]
property memory_variables: List[str]ο
Input keys this memory class will load dynamically.
load_memory_variables(inputs)[source]ο
Return key-value pairs given the text input to the chain.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, str]
save_context(inputs, outputs)[source]ο
Save the context of this model run to memory.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, Any]) β
Return type
None
clear()[source]ο
Clear memory contents.
Return type
None
Utilitiesο
General utilities.
class langchain.utilities.ApifyWrapper(*, apify_client=None, apify_client_async=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around Apify.
To use, you should have the apify-client python package installed,
and the environment variable APIFY_API_TOKEN set with your API key, or pass
apify_api_token as a named parameter to the constructor.
Parameters
apify_client (Any) β
apify_client_async (Any) β
Return type
None
attribute apify_client: Any = Noneο
attribute apify_client_async: Any = Noneο
async acall_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]ο
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) β The ID or name of the Actor on the Apify platform.
run_input (Dict) β The input object of the Actor that youβre trying to run.
dataset_mapping_function (Callable) β A function that takes a single
dictionary (an Apify dataset item) and converts it to
an instance of the Document class.
build (str, optional) β Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) β Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) β Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor runβs default dataset.
Return type
ApifyDatasetLoader
call_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]ο
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) β The ID or name of the Actor on the Apify platform.
run_input (Dict) β The input object of the Actor that youβre trying to run.
dataset_mapping_function (Callable) β A function that takes a single
dictionary (an Apify dataset item) and converts it to an
instance of the Document class.
build (str, optional) β Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) β Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) β Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor runβs default dataset.
Return type
ApifyDatasetLoader
class langchain.utilities.ArxivAPIWrapper(*, arxiv_search=None, arxiv_exceptions=None, top_k_results=3, load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000, ARXIV_MAX_QUERY_LENGTH=300)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around ArxivAPI.
To use, you should have the arxiv python package installed.
https://lukasschwab.me/arxiv.py/index.html
This wrapper will use the Arxiv API to conduct searches and
fetch document summaries. By default, it will return the document summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
Set doc_content_chars_max=None if you donβt want to limit the content size.
Parameters
top_k_results (int) β number of the top-scored document used for the arxiv tool
ARXIV_MAX_QUERY_LENGTH (int) β the cut limit on the query used for the arxiv tool.
load_max_docs (int) β a limit to the number of loaded documents
load_all_available_meta (bool) β
if True: the metadata of the loaded Documents gets all available meta info (see https://lukasschwab.me/arxiv.py/index.html#Result),
if False: the metadata gets only the most informative fields.
arxiv_search (Any) β
arxiv_exceptions (Any) β
doc_content_chars_max (Optional[int]) β
Return type
None
attribute arxiv_exceptions: Any = Noneο
attribute doc_content_chars_max: Optional[int] = 4000ο
attribute load_all_available_meta: bool = Falseο
attribute load_max_docs: int = 100ο
attribute top_k_results: int = 3ο
load(query)[source]ο
Run Arxiv search and get the article texts plus the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
Returns: a list of documents with the document.page_content in text format
Parameters
query (str) β
Return type
List[langchain.schema.Document]
run(query)[source]ο
Run Arxiv search and get the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
See https://lukasschwab.me/arxiv.py/index.html#Result
It uses only the most informative fields of article meta information.
Parameters
query (str) β
Return type
str
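A minimal usage sketch (not from the upstream docstring), assuming the arxiv package is installed; the query string and arXiv identifier are illustrative.
Example
from langchain.utilities import ArxivAPIWrapper
arxiv = ArxivAPIWrapper(top_k_results=2, doc_content_chars_max=1000)
summary = arxiv.run("quantum error correction")  # returns concatenated meta information as a string
docs = arxiv.load("1605.08386")  # illustrative arXiv id used as the query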
class langchain.utilities.BashProcess(strip_newlines=False, return_err_output=False, persistent=False)[source]ο
Bases: object
Executes bash commands and returns the output.
Parameters
strip_newlines (bool) β
return_err_output (bool) β
persistent (bool) β
run(commands)[source]ο
Run commands and return final output.
Parameters
commands (Union[str, List[str]]) β
Return type
str
process_output(output, command)[source]ο
Parameters
output (str) β
command (str) β
Return type
str
class langchain.utilities.BibtexparserWrapper[source]ο
Bases: pydantic.main.BaseModel
Wrapper around bibtexparser.
To use, you should have the bibtexparser python package installed.
https://bibtexparser.readthedocs.io/en/master/
This wrapper will use bibtexparser to load a collection of references from
a bibtex file and fetch document summaries.
Return type
None
get_metadata(entry, load_extra=False)[source]ο
Get metadata for the given entry.
Parameters
entry (Mapping[str, Any]) β
load_extra (bool) β
Return type
Dict[str, Any]
load_bibtex_entries(path)[source]ο
Load bibtex entries from the bibtex file at the given path.
Parameters
path (str) β
Return type
List[Dict[str, Any]]
class langchain.utilities.BingSearchAPIWrapper(*, bing_subscription_key, bing_search_url, k=10)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Bing Search API.
In order to set this up, follow instructions at:
https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e
Parameters
bing_subscription_key (str) β
bing_search_url (str) β
k (int) β
Return type
None
attribute bing_search_url: str [Required]ο
attribute bing_subscription_key: str [Required]ο
attribute k: int = 10ο
results(query, num_results)[source]ο
Run query through BingSearch and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query)[source]ο
Run query through BingSearch and parse result.
Parameters
query (str) β
Return type
str
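A minimal usage sketch (not from the upstream docstring); the subscription key is a placeholder, and the endpoint shown is the typical Bing Web Search v7 URL, which you should verify against your own Azure resource.
Example
from langchain.utilities import BingSearchAPIWrapper
search = BingSearchAPIWrapper(bing_subscription_key="YOUR_KEY", bing_search_url="https://api.bing.microsoft.com/v7.0/search")  # placeholder key, typical endpoint
print(search.run("python weather api"))
print(search.results("python weather api", num_results=3))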
class langchain.utilities.BraveSearchWrapper(*, api_key, search_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel
Parameters
api_key (str) β
search_kwargs (dict) β
Return type
None
attribute api_key: str [Required]ο
attribute search_kwargs: dict [Optional]ο
run(query)[source]ο
Parameters
query (str) β
Return type
str
class langchain.utilities.DuckDuckGoSearchAPIWrapper(*, k=10, region='wt-wt', safesearch='moderate', time='y', max_results=5)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for DuckDuckGo Search API.
Free and does not require any setup
Parameters
k (int) β
region (Optional[str]) β
safesearch (str) β
time (Optional[str]) β
max_results (int) β
Return type
None
attribute k: int = 10ο
attribute max_results: int = 5ο
attribute region: Optional[str] = 'wt-wt'ο
attribute safesearch: str = 'moderate'ο
attribute time: Optional[str] = 'y'ο
get_snippets(query)[source]ο
Run query through DuckDuckGo and return concatenated results.
Parameters
query (str) β
Return type
List[str]
results(query, num_results)[source]ο
Run query through DuckDuckGo and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query)[source]ο
Parameters
query (str) β
Return type
str
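A minimal usage sketch (not from the upstream docstring), assuming the duckduckgo-search package is installed; the query is illustrative.
Example
from langchain.utilities import DuckDuckGoSearchAPIWrapper
search = DuckDuckGoSearchAPIWrapper(region="us-en", max_results=3)
print(search.run("LangChain document loaders"))
for r in search.results("LangChain document loaders", num_results=3):
    print(r["title"], r["link"])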
class langchain.utilities.GooglePlacesAPIWrapper(*, gplaces_api_key=None, google_map_client=None, top_k_results=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around Google Places API.
To use, you should have the googlemaps python package installed, an API key for the Google Maps platform,
and the environment variable GPLACES_API_KEY
set with your API key, or pass gplaces_api_key
as a named parameter to the constructor.
By default, this will return all the results on the input query. You can use the top_k_results argument to limit the number of results.
Example
from langchain import GooglePlacesAPIWrapper
gplaceapi = GooglePlacesAPIWrapper()
Parameters
gplaces_api_key (Optional[str]) β
google_map_client (Any) β
top_k_results (Optional[int]) β
Return type
None
attribute gplaces_api_key: Optional[str] = Noneο
attribute top_k_results: Optional[int] = Noneο
fetch_place_details(place_id)[source]ο
Parameters
place_id (str) β
Return type
Optional[str]
format_place_details(place_details)[source]ο
Parameters
place_details (Dict[str, Any]) β
Return type
Optional[str]
run(query)[source]ο
Run Places search and get k number of places that exists that match.
Parameters
query (str) β
Return type
str
class langchain.utilities.GoogleSearchAPIWrapper(*, search_engine=None, google_api_key=None, google_cse_id=None, k=10, siterestrict=False)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Google Search API.
Instructions adapted from
https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search
TODO: DOCS for using it
1. Install google-api-python-client
- If you donβt already have a Google account, sign up.
- If you have never created a Google APIs Console project,
read the Managing Projects page and create a project in the Google API Console.
- Install the library using pip install google-api-python-client
The current version of the library is 2.70.0 at this time
2. To create an API key:
- Navigate to the APIs & ServicesβCredentials panel in Cloud Console.
- Select Create credentials, then select API key from the drop-down menu.
- The API key created dialog box displays your newly created key.
- You now have an API_KEY
3. Setup Custom Search Engine so you can search the entire web
- Create a custom search engine in this link.
- In Sites to search, add any valid URL (i.e. www.stackoverflow.com).
- Thatβs all you have to fill up, the rest doesnβt matter.
In the left-side menu, click Edit search engine β {your search engine name}
β Setup. Set Search the entire web to ON. Remove the URL you added from
the list of Sites to search.
- Under Search engine ID youβll find the search-engine-ID.
4. Enable the Custom Search API
- Navigate to the APIs & ServicesβDashboard panel in Cloud Console.
- Click Enable APIs and Services.
- Search for Custom Search API and click on it.
- Click Enable.
URL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis.com
Parameters
search_engine (Any) β
google_api_key (Optional[str]) β
google_cse_id (Optional[str]) β
k (int) β
siterestrict (bool) β
Return type
None
attribute google_api_key: Optional[str] = Noneο
attribute google_cse_id: Optional[str] = Noneο
attribute k: int = 10ο
attribute siterestrict: bool = Falseο
results(query, num_results)[source]ο
Run query through GoogleSearch and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query)[source]ο
Run query through GoogleSearch and parse result.
Parameters
query (str) β
Return type
str
class langchain.utilities.GoogleSerperAPIWrapper(*, k=10, gl='us', hl='en', type='search', tbs=None, serper_api_key=None, aiosession=None, result_key_for_type={'images': 'images', 'news': 'news', 'places': 'places', 'search': 'organic'})[source]ο
Bases: pydantic.main.BaseModel
Wrapper around the Serper.dev Google Search API.
You can create a free API key at https://serper.dev.
To use, you should have the environment variable SERPER_API_KEY
set with your API key, or pass serper_api_key as a named parameter
to the constructor.
Example
from langchain import GoogleSerperAPIWrapper
google_serper = GoogleSerperAPIWrapper()
Parameters
k (int) β
gl (str) β
hl (str) β
type (Literal['news', 'search', 'places', 'images']) β
tbs (Optional[str]) β
serper_api_key (Optional[str]) β
aiosession (Optional[aiohttp.client.ClientSession]) β
result_key_for_type (dict) β
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = Noneο
attribute gl: str = 'us'ο
attribute hl: str = 'en'ο
attribute k: int = 10ο
attribute serper_api_key: Optional[str] = Noneο
attribute tbs: Optional[str] = Noneο
attribute type: Literal['news', 'search', 'places', 'images'] = 'search'ο
async aresults(query, **kwargs)[source]ο
Run query through GoogleSearch.
Parameters
query (str) β
kwargs (Any) β
Return type
Dict
async arun(query, **kwargs)[source]ο
Run query through GoogleSearch and parse result async.
Parameters
query (str) β
kwargs (Any) β
Return type
str
results(query, **kwargs)[source]ο
Run query through GoogleSearch.
Parameters
query (str) β
kwargs (Any) β
Return type
Dict
run(query, **kwargs)[source]ο
Run query through GoogleSearch and parse result.
Parameters
query (str) β
kwargs (Any) β
Return type
str
class langchain.utilities.GraphQLAPIWrapper(*, custom_headers=None, graphql_endpoint, gql_client=None, gql_function)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around GraphQL API.
To use, you should have the gql python package installed.
This wrapper will use the GraphQL API to conduct queries.
Parameters
custom_headers (Optional[Dict[str, str]]) β
graphql_endpoint (str) β
gql_client (Any) β
gql_function (Callable[[str], Any]) β
Return type
None
attribute custom_headers: Optional[Dict[str, str]] = Noneο
attribute graphql_endpoint: str [Required]ο
run(query)[source]ο
Run a GraphQL query and get the results.
Parameters
query (str) β
Return type
str
class langchain.utilities.JiraAPIWrapper(*, jira=None, confluence=None, jira_username=None, jira_api_token=None, jira_instance_url=None, operations=[{'mode': 'jql', 'name': 'JQL Query', 'description': '\nΒ Β Β This tool is a wrapper around atlassian-python-api\'s Jira jql API, useful when you need to search for Jira issues.\nΒ Β Β The input to this tool is a JQL query string, and will be passed into atlassian-python-api\'s Jira `jql` function,\nΒ Β Β For example, to find all the issues in project "Test" assigned to the me, you would pass in the following string:\nΒ Β Β project = Test AND assignee = currentUser()\nΒ Β Β or to find issues with summaries that contain the word "test", you would pass in the following string:\nΒ Β Β summary ~ \'test\'\nΒ Β Β '}, {'mode': 'get_projects', 'name': 'Get Projects', 'description': "\nΒ Β Β This tool is a wrapper around atlassian-python-api's Jira project API, \nΒ Β Β useful when you need to fetch all the projects the user has access to, find out how many projects there are, or as an intermediary step that involv searching by projects. \nΒ Β Β there is no input to this tool.\nΒ Β Β "}, {'mode': 'create_issue', 'name': 'Create Issue', 'description': '\nΒ Β Β This tool is a wrapper around atlassian-python-api\'s Jira issue_create API, useful when you need to create a Jira issue. \nΒ Β Β The input to this tool is a dictionary specifying the fields of the Jira issue, and will be passed into atlassian-python-api\'s Jira `issue_create` function.\nΒ Β Β For example, to create a low priority task called "test issue" with description "test description", you would pass in the following dictionary: \nΒ Β Β {{"summary": "test issue", "description": "test description", "issuetype": {{"name": "Task"}}, "priority": {{"name": "Low"}}}}\nΒ Β Β '}, {'mode': 'other', 'name': 'Catch all Jira API call', 'description': '\nΒ Β Β This tool is a wrapper around atlassian-python-api\'s Jira API.\nΒ Β Β There are other dedicated tools for fetching all projects, and creating and searching for issues, \nΒ Β Β use this tool if you need to perform any other actions allowed by the atlassian-python-api Jira API.\nΒ Β Β The input to this tool is line of python code that calls a function from atlassian-python-api\'s Jira API\nΒ Β Β For example, to update the summary field of an issue, you would pass in the following string:\nΒ Β Β self.jira.update_issue_field(key, {{"summary": "New summary"}})\nΒ Β Β or to find out how many projects are in the Jira instance, you would pass in the following string:\nΒ Β Β self.jira.projects()\nΒ Β Β For more information on the Jira API, refer to https://atlassian-python-api.readthedocs.io/jira.html\nΒ Β Β '}, {'mode': 'create_page', 'name': 'Create confluence page', 'description': 'This tool is a wrapper around atlassian-python-api\'s Confluence \natlassian-python-api API, useful when you need to create a Confluence page. The input to this tool is a dictionary \nspecifying the fields of the Confluence page, and will be passed into atlassian-python-api\'s Confluence `create_page` \nfunction. For example, to create a page in the DEMO space titled "This is the title" with body "This is the body. You can use \n<strong>HTML tags</strong>!", you would pass in the following dictionary: {{"space": "DEMO", "title":"This is the \ntitle","body":"This is the body. You can use <strong>HTML tags</strong>!"}} '}])[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Jira API.
Parameters
jira (Any) β
confluence (Any) β
jira_username (Optional[str]) β
jira_api_token (Optional[str]) β
jira_instance_url (Optional[str]) β
operations (List[Dict]) β
Return type
None
attribute confluence: Any = Noneο
attribute jira_api_token: Optional[str] = Noneο
attribute jira_instance_url: Optional[str] = Noneο
attribute jira_username: Optional[str] = Noneο
attribute operations: List[Dict] = [{'mode': 'jql', 'name': 'JQL Query', 'description': '\nΒ Β Β This tool is a wrapper around atlassian-python-api\'s Jira jql API, useful when you need to search for Jira issues.\nΒ Β Β The input to this tool is a JQL query string, and will be passed into atlassian-python-api\'s Jira `jql` function,\nΒ Β Β For example, to find all the issues in project "Test" assigned to the me, you would pass in the following string:\nΒ Β Β project = Test AND assignee = currentUser()\nΒ Β Β or to find issues with summaries that contain the word "test", you would pass in the following string:\nΒ Β Β summary ~ \'test\'\nΒ Β Β '}, {'mode': 'get_projects', 'name': 'Get Projects', 'description': "\nΒ Β Β This tool is a wrapper around atlassian-python-api's Jira project API, \nΒ Β Β useful when you need to fetch all the projects the user has access to, find out how many projects there are, or as an intermediary step that involv searching by projects. \nΒ Β Β there is no input to this tool.\nΒ Β Β "}, {'mode': 'create_issue', 'name': 'Create Issue', 'description': '\nΒ Β Β This tool is a wrapper around atlassian-python-api\'s Jira issue_create API, useful when you need to create a Jira issue. \nΒ Β Β The input to this tool is a dictionary specifying the fields of the Jira issue, and will be passed into atlassian-python-api\'s Jira `issue_create` function.\nΒ Β Β For example, to create a low priority task called "test issue" with description "test description", you would pass in the following dictionary: \nΒ Β Β {{"summary": "test issue", "description": "test description", "issuetype": {{"name": "Task"}}, "priority": {{"name": "Low"}}}}\nΒ Β Β '}, {'mode': 'other', 'name': 'Catch all Jira API call', 'description': '\nΒ Β Β This tool is a wrapper around atlassian-python-api\'s Jira API.\nΒ Β Β There are other dedicated tools for fetching all projects, and creating and searching for issues, \nΒ Β Β use this tool if you need to perform any other actions allowed by the atlassian-python-api Jira API.\nΒ Β Β The input to this tool is line of python code that calls a function from atlassian-python-api\'s Jira API\nΒ Β Β For example, to update the summary field of an issue, you would pass in the following string:\nΒ Β Β self.jira.update_issue_field(key, {{"summary": "New summary"}})\nΒ Β Β or to find out how many projects are in the Jira instance, you would pass in the following string:\nΒ Β Β self.jira.projects()\nΒ Β Β For more information on the Jira API, refer to https://atlassian-python-api.readthedocs.io/jira.html\nΒ Β Β '}, {'mode': 'create_page', 'name': 'Create confluence page', 'description': 'This tool is a wrapper around atlassian-python-api\'s Confluence \natlassian-python-api API, useful when you need to create a Confluence page. The input to this tool is a dictionary \nspecifying the fields of the Confluence page, and will be passed into atlassian-python-api\'s Confluence `create_page` \nfunction. For example, to create a page in the DEMO space titled "This is the title" with body "This is the body. You can use \n<strong>HTML tags</strong>!", you would pass in the following dictionary: {{"space": "DEMO", "title":"This is the \ntitle","body":"This is the body. You can use <strong>HTML tags</strong>!"}} '}]ο
issue_create(query)[source]ο
Parameters
query (str) β
Return type
str
list()[source]ο
Return type
List[Dict]
other(query)[source]ο
Parameters
query (str) β
Return type
str
page_create(query)[source]ο
Parameters
query (str) β
Return type
str
parse_issues(issues)[source]ο
Parameters
issues (Dict) β
Return type
List[dict]
parse_projects(projects)[source]ο
Parameters
projects (List[dict]) β
Return type
List[dict]
project()[source]ο
Return type
str
run(mode, query)[source]ο
Parameters
mode (str) β
query (str) β
Return type
str
search(query)[source]ο
Parameters
query (str) β
Return type
str
class langchain.utilities.LambdaWrapper(*, lambda_client=None, function_name=None, awslambda_tool_name=None, awslambda_tool_description=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for AWS Lambda SDK.
Docs for using:
pip install boto3
Create a lambda function using the AWS Console or CLI
Run aws configure and enter your AWS credentials
Parameters
lambda_client (Any) β
function_name (Optional[str]) β
awslambda_tool_name (Optional[str]) β
awslambda_tool_description (Optional[str]) β
Return type
None
attribute awslambda_tool_description: Optional[str] = Noneο
attribute awslambda_tool_name: Optional[str] = Noneο
attribute function_name: Optional[str] = Noneο
run(query)[source]ο
Invoke Lambda function and parse result.
Parameters
query (str) β
Return type
str
class langchain.utilities.MaxComputeAPIWrapper(client)[source]ο
Bases: object
Interface for querying Alibaba Cloud MaxCompute tables.
Parameters
client (ODPS) β
classmethod from_params(endpoint, project, *, access_id=None, secret_access_key=None)[source]ο
Convenience constructor that builds the odps.ODPS MaxCompute client from given parameters.
Parameters
endpoint (str) β MaxCompute endpoint.
project (str) β A project is a basic organizational unit of MaxCompute, which is
similar to a database.
access_id (Optional[str]) β MaxCompute access ID. Should be passed in directly or set as the
environment variable MAX_COMPUTE_ACCESS_ID.
secret_access_key (Optional[str]) β MaxCompute secret access key. Should be passed in
directly or set as the environment variable
MAX_COMPUTE_SECRET_ACCESS_KEY.
Return type
langchain.utilities.max_compute.MaxComputeAPIWrapper
lazy_query(query)[source]ο
Parameters
query (str) β
Return type
Iterator[dict]
query(query)[source]ο
Parameters
query (str) β
Return type
List[dict]
class langchain.utilities.MetaphorSearchAPIWrapper(*, metaphor_api_key, k=10)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Metaphor Search API.
Parameters
metaphor_api_key (str) β
k (int) β
Return type
None
attribute k: int = 10ο
attribute metaphor_api_key: str [Required]ο
results(query, num_results, include_domains=None, exclude_domains=None, start_crawl_date=None, end_crawl_date=None, start_published_date=None, end_published_date=None)[source]ο
Run query through Metaphor Search and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
include_domains (Optional[List[str]]) β
exclude_domains (Optional[List[str]]) β
start_crawl_date (Optional[str]) β
end_crawl_date (Optional[str]) β
start_published_date (Optional[str]) β
end_published_date (Optional[str]) β
Returns
title - The title of the result.
url - The url of the result.
author - Author of the content, if applicable. Otherwise, None.
published_date - Estimated date published
in YYYY-MM-DD format. Otherwise, None.
Return type
A list of dictionaries with the following keys
async results_async(query, num_results, include_domains=None, exclude_domains=None, start_crawl_date=None, end_crawl_date=None, start_published_date=None, end_published_date=None)[source]ο
Get results from the Metaphor Search API asynchronously.
Parameters
query (str) β
num_results (int) β
include_domains (Optional[List[str]]) β
exclude_domains (Optional[List[str]]) β
start_crawl_date (Optional[str]) β
end_crawl_date (Optional[str]) β
start_published_date (Optional[str]) β
end_published_date (Optional[str]) β
Return type
List[Dict]
class langchain.utilities.OpenWeatherMapAPIWrapper(*, owm=None, openweathermap_api_key=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for OpenWeatherMap API using PyOWM.
Docs for using:
Go to OpenWeatherMap and sign up for an API key
Save your API KEY into OPENWEATHERMAP_API_KEY env variable
pip install pyowm
Parameters
owm (Any) β
openweathermap_api_key (Optional[str]) β
Return type
None
attribute openweathermap_api_key: Optional[str] = Noneο
attribute owm: Any = Noneο
run(location)[source]ο
Get the current weather information for a specified location.
Parameters
location (str) β
Return type
str
class langchain.utilities.PowerBIDataset(*, dataset_id, table_names, group_id=None, credential=None, token=None, impersonated_user_name=None, sample_rows_in_table_info=1, schemas=None, aiosession=None)[source]ο
Bases: pydantic.main.BaseModel
Create PowerBI engine from dataset ID and credential or token.
Use either the credential or a supplied token to authenticate.
If both are supplied the credential is used to generate a token.
The impersonated_user_name is the UPN of a user to be impersonated.
If the model is not RLS enabled, this will be ignored.
Parameters
dataset_id (str) β
table_names (List[str]) β
group_id (Optional[str]) β
credential (Optional[TokenCredential]) β
token (Optional[str]) β
impersonated_user_name (Optional[str]) β
sample_rows_in_table_info (langchain.utilities.powerbi.ConstrainedIntValue) β
schemas (Dict[str, str]) β
aiosession (Optional[aiohttp.client.ClientSession]) β
Return type
None
attribute aiosession: Optional[aiohttp.ClientSession] = Noneο
attribute credential: Optional[TokenCredential] = Noneο
attribute dataset_id: str [Required]ο
attribute group_id: Optional[str] = Noneο
attribute impersonated_user_name: Optional[str] = Noneο
attribute sample_rows_in_table_info: int = 1ο
Constraints
exclusiveMinimum = 0
maximum = 10
attribute schemas: Dict[str, str] [Optional]ο
attribute table_names: List[str] [Required]ο
attribute token: Optional[str] = Noneο
async aget_table_info(table_names=None)[source]ο
Get information about specified tables.
Parameters
table_names (Optional[Union[List[str], str]]) β
Return type
str
async arun(command)[source]ο
Execute a DAX command and return the result asynchronously.
Parameters
command (str) β
Return type
Any
get_schemas()[source]ο
Get the available schemas.
Return type
str
get_table_info(table_names=None)[source]ο
Get information about specified tables.
Parameters
table_names (Optional[Union[List[str], str]]) β
Return type
str
get_table_names()[source]ο
Get names of tables available.
Return type
Iterable[str]
run(command)[source]ο
Execute a DAX command and return a json representing the results.
Parameters
command (str) β
Return type
Any
property headers: Dict[str, str]ο
Get the token.
property request_url: strο
Get the request url.
property table_info: strο
Information about all tables in the database.
class langchain.utilities.PubMedAPIWrapper(*, top_k_results=3, load_max_docs=25, doc_content_chars_max=2000, load_all_available_meta=False, email='your_email@example.com', base_url_esearch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?', base_url_efetch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?', max_retry=5, sleep_time=0.2, ARXIV_MAX_QUERY_LENGTH=300)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around PubMed API.
This wrapper will use the PubMed API to conduct searches and fetch
document summaries. By default, it will return the document summaries
of the top-k results of an input search.
Parameters
top_k_results (int) β number of the top-scored document used for the PubMed tool
load_max_docs (int) β a limit to the number of loaded documents
load_all_available_meta (bool) β
if True: the metadata of the loaded Documents gets all available meta info (see https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch)

if False: the metadata gets only the most informative fields.
doc_content_chars_max (int) β
email (str) β
base_url_esearch (str) β
base_url_efetch (str) β
max_retry (int) β
sleep_time (float) β
ARXIV_MAX_QUERY_LENGTH (int) β
Return type
None
attribute doc_content_chars_max: int = 2000ο
attribute email: str = 'your_email@example.com'ο
attribute load_all_available_meta: bool = Falseο
attribute load_max_docs: int = 25ο
attribute top_k_results: int = 3ο
load(query)[source]ο
Search PubMed for documents matching the query.
Return a list of dictionaries containing the document metadata.
Parameters
query (str) β
Return type
List[dict]
load_docs(query)[source]ο
Parameters
query (str) β
Return type
List[langchain.schema.Document]
retrieve_article(uid, webenv)[source]ο
Parameters
uid (str) β
webenv (str) β
Return type
dict
run(query)[source]ο
Run PubMed search and get the article meta information.
See https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch
It uses only the most informative fields of article meta information.
Parameters
query (str) β
Return type
str
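Example (a minimal sketch; the query string is illustrative):
from langchain.utilities import PubMedAPIWrapper

pubmed = PubMedAPIWrapper(top_k_results=3)
# run() returns the meta information of the top articles as one string
print(pubmed.run("messenger RNA vaccines"))
# load() returns a list of metadata dictionaries instead
records = pubmed.load("messenger RNA vaccines")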
class langchain.utilities.PythonREPL(*, _globals=None, _locals=None)[source]ο
Bases: pydantic.main.BaseModel
Simulates a standalone Python REPL.
Parameters
_globals (Optional[Dict]) β
_locals (Optional[Dict]) β
Return type
None
attribute globals: Optional[Dict] [Optional] (alias '_globals')ο
attribute locals: Optional[Dict] [Optional] (alias '_locals')ο
run(command)[source]ο
Run command with own globals/locals and returns anything printed.
Parameters
command (str) β
Return type
str
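Example (a minimal sketch; only printed output is captured and returned):
from langchain.utilities import PythonREPL

repl = PythonREPL()
# The command must print its result; the printed text is returned as a string
output = repl.run("print(2 ** 10)")
print(output)  # "1024"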
pydantic settings langchain.utilities.SceneXplainAPIWrapper[source]ο
Bases: pydantic.env_settings.BaseSettings, pydantic.main.BaseModel
Wrapper for SceneXplain API.
In order to set this up, you need an API key for the SceneXplain API.
You can obtain a key by following the steps below.
- Sign up for a free account at https://scenex.jina.ai/.
- Navigate to the API Access page (https://scenex.jina.ai/api)
and create a new API key.
Show JSON schema{
"title": "SceneXplainAPIWrapper",
"description": "Wrapper for SceneXplain API.\n\nIn order to set this up, you need API key for the SceneXplain API.\nYou can obtain a key by following the steps below.\n- Sign up for a free account at https://scenex.jina.ai/.\n- Navigate to the API Access page (https://scenex.jina.ai/api)\n and create a new API key.",
"type": "object",
"properties": {
"scenex_api_key": {
"title": "Scenex Api Key",
"env": "SCENEX_API_KEY",
"env_names": "{'scenex_api_key'}",
"type": "string"
},
"scenex_api_url": {
"title": "Scenex Api Url",
"default": "https://us-central1-causal-diffusion.cloudfunctions.net/describe",
"env_names": "{'scenex_api_url'}",
"type": "string"
}
},
"required": [
"scenex_api_key"
],
"additionalProperties": false
}
Fields
scenex_api_key (str)
scenex_api_url (str)
attribute scenex_api_key: str [Required]ο
attribute scenex_api_url: str = 'https://us-central1-causal-diffusion.cloudfunctions.net/describe'ο
run(image)[source]ο
Run SceneXplain image explainer.
Parameters
image (str) β
Return type
str
validator validate_environment » all fields[source]ο
Validate that api key exists in environment.
Parameters
values (Dict) β
Return type
Dict
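Example (a minimal sketch; assumes SCENEX_API_KEY is set in the environment, and the image URL is a hypothetical placeholder):
from langchain.utilities import SceneXplainAPIWrapper

scenex = SceneXplainAPIWrapper()  # key is read from the SCENEX_API_KEY environment variable
# run() takes an image (URL) and returns a textual description
description = scenex.run("https://example.com/picture-of-a-dog.jpg")  # placeholder URL
print(description)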
class langchain.utilities.SearxSearchWrapper(*, searx_host='', unsecure=False, params=None, headers=None, engines=[], categories=[], query_suffix='', k=10, aiosession=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Searx API.
To use you need to provide the searx host by passing the named parameter
searx_host or exporting the environment variable SEARX_HOST.
In some situations you might want to disable SSL verification, for example
if you are running searx locally. You can do this by passing the named parameter
unsecure. You can also pass the host url scheme as http to disable SSL.
Example
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")
Example with SSL disabled:
from langchain.utilities import SearxSearchWrapper
# note the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://localhost:8888",
unsecure=True)
Parameters
searx_host (str) β
unsecure (bool) β
params (dict) β
headers (Optional[dict]) β
engines (Optional[List[str]]) β
categories (Optional[List[str]]) β
query_suffix (Optional[str]) β
k (int) β
aiosession (Optional[Any]) β
Return type
None
attribute aiosession: Optional[Any] = Noneο
attribute categories: Optional[List[str]] = []ο
attribute engines: Optional[List[str]] = []ο
attribute headers: Optional[dict] = Noneο
attribute k: int = 10ο
attribute params: dict [Optional]ο
attribute query_suffix: Optional[str] = ''ο
attribute searx_host: str = ''ο
attribute unsecure: bool = Falseο
async aresults(query, num_results, engines=None, query_suffix='', **kwargs)[source]ο
Asynchronously query with json results.
Uses aiohttp. See results for more info.
Parameters
query (str) β
num_results (int) β
engines (Optional[List[str]]) β
query_suffix (Optional[str]) β
kwargs (Any) β
Return type
List[Dict]
async arun(query, engines=None, query_suffix='', **kwargs)[source]ο
Asynchronous version of run.
Parameters
query (str) β
engines (Optional[List[str]]) β
query_suffix (Optional[str]) β
kwargs (Any) β
Return type
str
results(query, num_results, engines=None, categories=None, query_suffix='', **kwargs)[source]ο
Run query through Searx API and returns the results with metadata.
Parameters
query (str) β The query to search for.
query_suffix (Optional[str]) β Extra suffix appended to the query.
num_results (int) β Limit the number of results to return.
engines (Optional[List[str]]) β List of engines to use for the query.
categories (Optional[List[str]]) β List of categories to use for the query.
**kwargs β extra parameters to pass to the searx API.
kwargs (Any) β
Returns
{snippet: The description of the result.
title: The title of the result.
link: The link to the result.
engines: The engines used for the result.
category: Searx category of the result.
}
Return type
Dict with the following keys
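Example for results() (a minimal sketch; the host and query are illustrative):
from langchain.utilities import SearxSearchWrapper

searx = SearxSearchWrapper(searx_host="http://localhost:8888")
hits = searx.results("large language models", num_results=5, categories=["science"])
for hit in hits:
    # each hit carries the snippet, title, link, engines and category keys
    print(hit["title"], hit["link"])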
run(query, engines=None, categories=None, query_suffix='', **kwargs)[source]ο
Run query through Searx API and parse results.
You can pass any other params to the searx query API.
Parameters
query (str) β The query to search for.
query_suffix (Optional[str]) β Extra suffix appended to the query.
engines (Optional[List[str]]) β List of engines to use for the query.
categories (Optional[List[str]]) β List of categories to use for the query.
**kwargs β extra parameters to pass to the searx API.
kwargs (Any) β
Returns
The result of the query.
Return type
str
Raises
ValueError β If an error occurred with the query.
Example
This will make a query to the qwant engine:
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France ?", engine="qwant")
# the same result can be achieved using the `!` syntax of searx
# to select the engine using `query_suffix`
searx.run("what is the weather in France ?", query_suffix="!qwant")
class langchain.utilities.SerpAPIWrapper(*, search_engine=None, params={'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}, serpapi_api_key=None, aiosession=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around SerpAPI.
To use, you should have the google-search-results python package installed,
and the environment variable SERPAPI_API_KEY set with your API key, or pass
serpapi_api_key as a named parameter to the constructor.
Example
from langchain import SerpAPIWrapper
serpapi = SerpAPIWrapper()
Parameters
search_engine (Any) β
params (dict) β
serpapi_api_key (Optional[str]) β
aiosession (Optional[aiohttp.client.ClientSession]) β
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = Noneο
attribute params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}ο
attribute serpapi_api_key: Optional[str] = Noneο
async aresults(query)[source]ο
Use aiohttp to run query through SerpAPI and return the results async.
Parameters
query (str) β
Return type
dict
async arun(query, **kwargs)[source]ο
Run query through SerpAPI and parse result async.
Parameters
query (str) β
kwargs (Any) β
Return type
str
get_params(query)[source]ο
Get parameters for SerpAPI.
Parameters
query (str) β
Return type
Dict[str, str]
results(query)[source]ο
Run query through SerpAPI and return the raw result.
Parameters
query (str) β
Return type
dict
run(query, **kwargs)[source]ο
Run query through SerpAPI and parse result.
Parameters
query (str) β
kwargs (Any) β
Return type
str
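Example showing run() and results() (a minimal sketch; assumes SERPAPI_API_KEY is set and the query is illustrative):
from langchain import SerpAPIWrapper

serpapi = SerpAPIWrapper()
print(serpapi.run("capital of France"))      # parsed answer as a string
raw = serpapi.results("capital of France")   # raw SerpAPI response as a dict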
class langchain.utilities.SparkSQL(spark_session=None, catalog=None, schema=None, ignore_tables=None, include_tables=None, sample_rows_in_table_info=3)[source]ο
Bases: object
Parameters
spark_session (Optional[SparkSession]) β
catalog (Optional[str]) β
schema (Optional[str]) β
ignore_tables (Optional[List[str]]) β
include_tables (Optional[List[str]]) β
sample_rows_in_table_info (int) β
classmethod from_uri(database_uri, engine_args=None, **kwargs)[source]ο
Create a remote Spark Session via Spark Connect.
For example: SparkSQL.from_uri(βsc://localhost:15002β)
Parameters
database_uri (str) β
engine_args (Optional[dict]) β
kwargs (Any) β
Return type
langchain.utilities.spark_sql.SparkSQL
get_usable_table_names()[source]ο
Get names of tables available.
Return type
Iterable[str]
get_table_info(table_names=None)[source]ο
Parameters
table_names (Optional[List[str]]) β
Return type
str
run(command, fetch='all')[source]ο
Parameters
command (str) β
fetch (str) β
Return type
str
get_table_info_no_throw(table_names=None)[source]ο
Get information about specified tables.
Follows best practices as specified in: Rajkumar et al, 2022
(https://arxiv.org/abs/2204.00498)
If sample_rows_in_table_info, the specified number of sample rows will be
appended to each table description. This can increase performance as
demonstrated in the paper.
Parameters
table_names (Optional[List[str]]) β
Return type
str
run_no_throw(command, fetch='all')[source]ο
Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
If the statement throws an error, the error message is returned.
Parameters
command (str) β
fetch (str) β
Return type
str
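Example (a minimal sketch; assumes a Spark Connect server is reachable at the given URI, and the query is illustrative):
from langchain.utilities import SparkSQL

spark_sql = SparkSQL.from_uri("sc://localhost:15002")    # remote session via Spark Connect
print(spark_sql.get_usable_table_names())                # tables visible in the schema
print(spark_sql.run("SELECT 1 AS probe", fetch="all"))   # results rendered as a string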
class langchain.utilities.TextRequestsWrapper(*, headers=None, aiosession=None)[source]ο
Bases: pydantic.main.BaseModel
Lightweight wrapper around requests library.
The main purpose of this wrapper is to always return a text output.
Parameters
headers (Optional[Dict[str, str]]) β
aiosession (Optional[aiohttp.client.ClientSession]) β
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = Noneο
attribute headers: Optional[Dict[str, str]] = Noneο
async adelete(url, **kwargs)[source]ο
DELETE the URL and return the text asynchronously.
Parameters
url (str) β
kwargs (Any) β
Return type
str
async aget(url, **kwargs)[source]ο
GET the URL and return the text asynchronously.
Parameters
url (str) β
kwargs (Any) β
Return type
str
async apatch(url, data, **kwargs)[source]ο
PATCH the URL and return the text asynchronously.
Parameters
url (str) β
data (Dict[str, Any]) β
kwargs (Any) β
Return type
str
async apost(url, data, **kwargs)[source]ο
POST to the URL and return the text asynchronously.
Parameters
url (str) β
data (Dict[str, Any]) β
kwargs (Any) β
Return type
str
async aput(url, data, **kwargs)[source]ο
PUT the URL and return the text asynchronously.
Parameters
url (str) β
data (Dict[str, Any]) β
kwargs (Any) β
Return type
str
delete(url, **kwargs)[source]ο
DELETE the URL and return the text.
Parameters
url (str) β
kwargs (Any) β
Return type
str
get(url, **kwargs)[source]ο
GET the URL and return the text.
Parameters
url (str) β
kwargs (Any) β
Return type
str
patch(url, data, **kwargs)[source]ο
PATCH the URL and return the text.
Parameters
url (str) β
data (Dict[str, Any]) β
kwargs (Any) β
Return type
str
post(url, data, **kwargs)[source]ο
POST to the URL and return the text.
Parameters
url (str) β
data (Dict[str, Any]) β
kwargs (Any) β
Return type
str
put(url, data, **kwargs)[source]ο
PUT the URL and return the text.
Parameters
url (str) β
data (Dict[str, Any]) β
kwargs (Any) β
Return type
str
property requests: langchain.requests.Requestsο
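Example (a minimal sketch; the URLs, headers, and payload are placeholders):
from langchain.utilities import TextRequestsWrapper

requests_wrapper = TextRequestsWrapper(headers={"User-Agent": "langchain-example"})
page_text = requests_wrapper.get("https://example.com")                     # body as text
created = requests_wrapper.post("https://example.com/api", data={"key": "value"})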
class langchain.utilities.TwilioAPIWrapper(*, client=None, account_sid=None, auth_token=None, from_number=None)[source]ο
Bases: pydantic.main.BaseModel
Messaging Client using Twilio.
To use, you should have the twilio python package installed,
and the environment variables TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, and
TWILIO_FROM_NUMBER, or pass account_sid, auth_token, and from_number as
named parameters to the constructor.
Example
from langchain.utilities.twilio import TwilioAPIWrapper
twilio = TwilioAPIWrapper(
account_sid="ACxxx",
auth_token="xxx",
from_number="+10123456789"
)
twilio.run('test', '+12484345508')
Parameters
client (Any) β
account_sid (Optional[str]) β
auth_token (Optional[str]) β
from_number (Optional[str]) β
Return type
None
attribute account_sid: Optional[str] = Noneο
Twilio account string identifier.
attribute auth_token: Optional[str] = Noneο
Twilio auth token.
attribute from_number: Optional[str] = Noneο
A Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164)
format, an
[alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id),
or a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses)
that is enabled for the type of message you want to send. Phone numbers or
[short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from
Twilio also work here. You cannot, for example, spoof messages from a private
cell phone number. If you are using messaging_service_sid, this parameter
must be empty.
run(body, to)[source]ο
Run body through Twilio and respond with message sid.
Parameters
body (str) β The text of the message you want to send. Can be up to 1,600
characters in length.
to (str) β The destination phone number in
[E.164](https://www.twilio.com/docs/glossary/what-e164) format for
SMS/MMS or
[Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses)
for other 3rd-party channels.
Return type
str
class langchain.utilities.WikipediaAPIWrapper(*, wiki_client=None, top_k_results=3, lang='en', load_all_available_meta=False, doc_content_chars_max=4000)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around WikipediaAPI.
To use, you should have the wikipedia python package installed.
This wrapper will use the Wikipedia API to conduct searches and
fetch page summaries. By default, it will return the page summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
Parameters
wiki_client (Any) β
top_k_results (int) β
lang (str) β
load_all_available_meta (bool) β
doc_content_chars_max (int) β
Return type
None
attribute doc_content_chars_max: int = 4000ο
attribute lang: str = 'en'ο
attribute load_all_available_meta: bool = Falseο
attribute top_k_results: int = 3ο
load(query)[source]ο
Run Wikipedia search and get the article text plus the meta information.
See
Returns: a list of documents.
Parameters
query (str) β
Return type
List[langchain.schema.Document]
run(query)[source]ο
Run Wikipedia search and get page summaries.
Parameters
query (str) β
Return type
str
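Example (a minimal sketch; the query is illustrative):
from langchain.utilities import WikipediaAPIWrapper

wikipedia = WikipediaAPIWrapper(top_k_results=2, doc_content_chars_max=1000)
print(wikipedia.run("Alan Turing"))    # page summaries joined into one string
docs = wikipedia.load("Alan Turing")   # Documents with article text and metadata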
class langchain.utilities.WolframAlphaAPIWrapper(*, wolfram_client=None, wolfram_alpha_appid=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Wolfram Alpha.
Docs for using:
Go to wolfram alpha and sign up for a developer account
Create an app and get your APP ID
Save your APP ID into WOLFRAM_ALPHA_APPID env variable
pip install wolframalpha
Parameters
wolfram_client (Any) β
wolfram_alpha_appid (Optional[str]) β
Return type
None
attribute wolfram_alpha_appid: Optional[str] = Noneο
run(query)[source]ο
Run query through WolframAlpha and parse result.
Parameters
query (str) β
Return type
str
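Example (a minimal sketch; assumes wolframalpha is installed and WOLFRAM_ALPHA_APPID is set in the environment):
from langchain.utilities import WolframAlphaAPIWrapper

wolfram = WolframAlphaAPIWrapper()
print(wolfram.run("integrate x^2 from 0 to 3"))  # parsed answer as a string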
class langchain.utilities.ZapierNLAWrapper(*, zapier_nla_api_key, zapier_nla_oauth_access_token, zapier_nla_api_base='https://nla.zapier.com/api/v1/')[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Zapier NLA.
Full docs here: https://nla.zapier.com/api/v1/docs
Note: this wrapper currently only implements the api_key auth method for
testing and server-side production use cases (using the developer's connected
accounts on Zapier.com)
For use-cases where LangChain + Zapier NLA is powering a user-facing application,
and LangChain needs access to the end-userβs connected accounts on Zapier.com,
youβll need to use oauth. Review the full docs above and reach out to
nla@zapier.com for developer support.
Parameters
zapier_nla_api_key (str) β
zapier_nla_oauth_access_token (str) β
zapier_nla_api_base (str) β
Return type
None
attribute zapier_nla_api_base: str = 'https://nla.zapier.com/api/v1/'ο
attribute zapier_nla_api_key: str [Required]ο
attribute zapier_nla_oauth_access_token: str [Required]ο
list()[source]ο
Returns a list of all exposed (enabled) actions associated with
the current user (associated with the set api_key). Change your exposed
actions here: https://nla.zapier.com/demo/start/
The return list can be empty if no actions exposed. Else will contain
a list of action objects:
[{βidβ: str,
βdescriptionβ: str,
βparamsβ: Dict[str, str]
}]
params will always contain an instructions key, the only required
param. All others optional and if provided will override any AI guesses
(see βunderstanding the AI guessing flowβ here:
https://nla.zapier.com/api/v1/docs)
Return type
List[Dict]
list_as_str()[source]ο
Same as list, but returns a stringified version of the JSON for
inserting back into an LLM.
Return type
str
preview(action_id, instructions, params=None)[source]ο
Same as run, but instead of actually executing the action, will
instead return a preview of params that have been guessed by the AI in
case you need to explicitly review before executing.
Parameters
action_id (str) β
instructions (str) β
params (Optional[Dict]) β
Return type
Dict
preview_as_str(*args, **kwargs)[source]ο
Same as preview, but returns a stringified version of the JSON for
inserting back into an LLM.
Return type
str
run(action_id, instructions, params=None)[source]ο
Executes an action that is identified by action_id, must be exposed
(enabled) by the current user (associated with the set api_key). Change
your exposed actions here: https://nla.zapier.com/demo/start/
The return JSON is guaranteed to be less than ~500 words (350
tokens) making it safe to inject into the prompt of another LLM
call.
Parameters
action_id (str) β
instructions (str) β
params (Optional[Dict]) β
Return type
Dict
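Example combining list(), preview() and run() (a minimal sketch; the API key is a placeholder, the instruction text is illustrative, and whether the key may be passed alone or must also be accompanied by an OAuth token depends on the validators, so treat this purely as a sketch of the server-side api_key flow described above):
from langchain.utilities import ZapierNLAWrapper

zapier = ZapierNLAWrapper(zapier_nla_api_key="your-nla-api-key")  # placeholder key
actions = zapier.list()                     # actions exposed for the connected account
if actions:
    action_id = actions[0]["id"]
    # preview() only returns the AI-guessed params; run() actually executes the action
    guessed = zapier.preview(action_id, "Send a short test message")
    result = zapier.run(action_id, "Send a short test message")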
run_as_str(*args, **kwargs)[source]ο
Same as run, but returns a stringified version of the JSON for
inserting back into an LLM.
Return type
str
Vector Storesο
Wrappers on top of vector stores.
class langchain.vectorstores.AlibabaCloudOpenSearch(embedding, config, **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Alibaba Cloud OpenSearch Vector Store
Parameters
embedding (langchain.embeddings.base.Embeddings) β
config (langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearchSettings) β
kwargs (Any) β
Return type
None
add_texts(texts, metadatas=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search(query, k=4, search_filter=None, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β
k (int) β
search_filter (Optional[Dict[str, Any]]) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
similarity_search_with_relevance_scores(query, k=4, search_filter=None, **kwargs)[source]ο
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query (str) β input text
k (int) β Number of Documents to return. Defaults to 4.
**kwargs β kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
search_filter (Optional[dict]) β
kwargs (Any) β
Returns
List of Tuples of (doc, similarity_score)
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_by_vector(embedding, k=4, search_filter=None, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
search_filter (Optional[dict]) β
kwargs (Any) β
Returns
List of Documents most similar to the query vector.
Return type
List[langchain.schema.Document]
inner_embedding_query(embedding, search_filter=None, k=4)[source]ο
Parameters
embedding (List[float]) β
search_filter (Optional[Dict[str, Any]]) β
k (int) β
Return type
Dict[str, Any]
create_results(json_result)[source]ο
Parameters
json_result (Dict[str, Any]) β
Return type
List[langchain.schema.Document]
create_results_with_score(json_result)[source]ο
Parameters
json_result (Dict[str, Any]) β
Return type
List[Tuple[langchain.schema.Document, float]]
classmethod from_texts(texts, embedding, metadatas=None, config=None, **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
config (Optional[langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearchSettings]) β
kwargs (Any) β
Return type
langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch
classmethod from_documents(documents, embedding, ids=None, config=None, **kwargs)[source]ο
Return VectorStore initialized from documents and embeddings.
Parameters
documents (List[langchain.schema.Document]) β
embedding (langchain.embeddings.base.Embeddings) β
ids (Optional[List[str]]) β
config (Optional[langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearchSettings]) β
kwargs (Any) β
Return type
langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch
class langchain.vectorstores.AlibabaCloudOpenSearchSettings(endpoint, instance_id, username, password, datasource_name, embedding_index_name, field_name_mapping)[source]ο
Bases: object
Opensearch Client Configuration
Attribute:
endpoint (str) : The endpoint of opensearch instance, You can find it
from the console of Alibaba Cloud OpenSearch.
instance_id (str) : The identify of opensearch instance, You can find
it from the console of Alibaba Cloud OpenSearch.
datasource_name (str): The name of the data source specified when creating it.
username (str) : The username specified when purchasing the instance.
password (str) : The password specified when purchasing the instance.
embedding_index_name (str) : The name of the vector attribute specified
when configuring the instance attributes.
field_name_mapping (Dict) : Using field name mapping between opensearch
vector store and opensearch instance configuration table field names:
{
βidβ: βThe id field name map of index document.β,
βdocumentβ: βThe text field name map of index document.β,
βembeddingβ: βIn the embedding field of the opensearch instance,
the values must be in float16 multivalue type and separated by commas.β,
βmetadata_field_xβ: βMetadata field mapping includes the mapped
field name and operator in the mapping value, separated by a comma
between the mapped field name and the operator.β,
}
Parameters
endpoint (str) β
instance_id (str) β
username (str) β
password (str) β
datasource_name (str) β
embedding_index_name (str) β
field_name_mapping (Dict[str, str]) β
Return type
None
endpoint: strο
instance_id: strο
username: strο
password: strο
datasource_name: strο
embedding_index_name: strο
field_name_mapping: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata_field_x': 'metadata_field_x,operator'}ο
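Example wiring the settings into the vector store (a minimal sketch; every credential, name, and mapping value below is a placeholder for values taken from your OpenSearch console):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import AlibabaCloudOpenSearch, AlibabaCloudOpenSearchSettings

settings = AlibabaCloudOpenSearchSettings(
    endpoint="your-endpoint",
    instance_id="your-instance-id",
    username="your-username",
    password="your-password",
    datasource_name="your-datasource",
    embedding_index_name="embedding",
    field_name_mapping={"id": "id", "document": "document", "embedding": "embedding"},
)
store = AlibabaCloudOpenSearch.from_texts(
    ["hello opensearch"], OpenAIEmbeddings(), config=settings
)
docs = store.similarity_search("hello", k=2)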
class langchain.vectorstores.AnalyticDB(connection_string, embedding_function, embedding_dimension=1536, collection_name='langchain_document', pre_delete_collection=False, logger=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
VectorStore implementation using AnalyticDB.
AnalyticDB is a distributed, cloud-native database with full PostgreSQL syntax support.
- connection_string is a postgres connection string.
- embedding_function any embedding function implementing
langchain.embeddings.base.Embeddings interface.
collection_name is the name of the collection to use. (default: langchain)
NOTE: This is not the name of the table, but the name of the collection. The tables will be created when initializing the store (if they do not exist).
So, make sure the user has the right permissions to create tables.
pre_delete_collection if True, will delete the collection if it exists.(default: False)
- Useful for testing.
Parameters
connection_string (str) β
embedding_function (Embeddings) β
embedding_dimension (int) β
collection_name (str) β
pre_delete_collection (bool) β
logger (Optional[logging.Logger]) β
Return type
None
create_table_if_not_exists()[source]ο
Return type
None
create_collection()[source]ο
Return type
None
delete_collection()[source]ο
Return type
None
add_texts(texts, metadatas=None, ids=None, batch_size=500, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β vectorstore specific parameters
ids (Optional[List[str]]) β
batch_size (int) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search(query, k=4, filter=None, **kwargs)[source]ο
Run similarity search with AnalyticDB with distance.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
similarity_search_with_score(query, k=4, filter=None)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_with_score_by_vector(embedding, k=4, filter=None)[source]ο
Parameters
embedding (List[float]) β
k (int) β
filter (Optional[dict]) β
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_by_vector(embedding, k=4, filter=None, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
kwargs (Any) β
Returns
List of Documents most similar to the query vector.
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, embedding_dimension=1536, collection_name='langchain_document', ids=None, pre_delete_collection=False, **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Postgres Connection string is required
Either pass it as a parameter
or set the PG_CONNECTION_STRING environment variable.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
embedding_dimension (int) β
collection_name (str) β
ids (Optional[List[str]]) β
pre_delete_collection (bool) β
kwargs (Any) β
Return type
langchain.vectorstores.analyticdb.AnalyticDB
classmethod get_connection_string(kwargs)[source]ο
Parameters
kwargs (Dict[str, Any]) β
Return type
str
classmethod from_documents(documents, embedding, embedding_dimension=1536, collection_name='langchain_document', ids=None, pre_delete_collection=False, **kwargs)[source]ο
Return VectorStore initialized from documents and embeddings.
Postgres Connection string is required
Either pass it as a parameter
or set the PG_CONNECTION_STRING environment variable.
Parameters
documents (List[langchain.schema.Document]) β
embedding (langchain.embeddings.base.Embeddings) β
embedding_dimension (int) β
collection_name (str) β
ids (Optional[List[str]]) β
pre_delete_collection (bool) β
kwargs (Any) β
Return type
langchain.vectorstores.analyticdb.AnalyticDB
classmethod connection_string_from_db_params(driver, host, port, database, user, password)[source]ο
Return connection string from database parameters.
Parameters
driver (str) β
host (str) β
port (int) β
database (str) β
user (str) β
password (str) β
Return type
str
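Example (a minimal sketch; the connection parameters and driver name are placeholders, and PG_CONNECTION_STRING can be set instead of passing the string explicitly):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import AnalyticDB

connection_string = AnalyticDB.connection_string_from_db_params(
    driver="psycopg2cffi",   # placeholder driver name
    host="localhost",
    port=5432,
    database="postgres",
    user="postgres",
    password="postgres",
)
store = AnalyticDB.from_texts(
    ["hello analyticdb"],
    OpenAIEmbeddings(),
    connection_string=connection_string,
)
docs = store.similarity_search("hello", k=2)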
class langchain.vectorstores.Annoy(embedding_function, index, metric, docstore, index_to_docstore_id)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Annoy vector database.
To use, you should have the annoy python package installed.
Example
from langchain import Annoy
db = Annoy(embedding_function, index, docstore, index_to_docstore_id)
Parameters
embedding_function (Callable) β
index (Any) β
metric (str) β
docstore (Docstore) β
index_to_docstore_id (Dict[int, str]) β
add_texts(texts, metadatas=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
process_index_results(idxs, dists)[source]ο
Turns annoy results into a list of documents and scores.
Parameters
idxs (List[int]) β List of indices of the documents in the index.
dists (List[float]) β List of distances of the documents in the index.
Returns
List of Documents and scores.
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_with_score_by_vector(embedding, k=4, search_k=-1)[source]ο
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
search_k (int) β inspect up to search_k nodes which defaults
to n_trees * n if not provided
embedding (List[float]) β
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_with_score_by_index(docstore_index, k=4, search_k=-1)[source]ο
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
search_k (int) β inspect up to search_k nodes which defaults
to n_trees * n if not provided
docstore_index (int) β
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_with_score(query, k=4, search_k=-1)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
search_k (int) β inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_by_vector(embedding, k=4, search_k=-1, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
search_k (int) β inspect up to search_k nodes which defaults
to n_trees * n if not provided
kwargs (Any) β
Returns
List of Documents most similar to the embedding.
Return type
List[langchain.schema.Document]
similarity_search_by_index(docstore_index, k=4, search_k=-1, **kwargs)[source]ο
Return docs most similar to docstore_index.
Parameters
docstore_index (int) β Index of document in docstore
k (int) β Number of Documents to return. Defaults to 4.
search_k (int) β inspect up to search_k nodes which defaults
to n_trees * n if not provided
kwargs (Any) β
Returns
List of Documents most similar to the embedding.
Return type
List[langchain.schema.Document]
similarity_search(query, k=4, search_k=-1, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
search_k (int) β inspect up to search_k nodes which defaults
to n_trees * n if not provided
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
k (int) β Number of Documents to return. Defaults to 4.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, metric='angular', trees=100, n_jobs=-1, **kwargs)[source]ο
Construct Annoy wrapper from raw documents.
Parameters
texts (List[str]) β List of documents to index.
embedding (langchain.embeddings.base.Embeddings) β Embedding function to use.
metadatas (Optional[List[dict]]) β List of metadata dictionaries to associate with documents.
metric (str) β Metric to use for indexing. Defaults to βangularβ.
trees (int) β Number of trees to use for indexing. Defaults to 100.
n_jobs (int) β Number of jobs to use for indexing. Defaults to -1.
kwargs (Any) β
Return type
langchain.vectorstores.annoy.Annoy
This is a user-friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the Annoy database
This is intended to be a quick way to get started.
Example
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
index = Annoy.from_texts(texts, embeddings)
classmethod from_embeddings(text_embeddings, embedding, metadatas=None, metric='angular', trees=100, n_jobs=-1, **kwargs)[source]ο
Construct Annoy wrapper from embeddings.
Parameters
text_embeddings (List[Tuple[str, List[float]]]) β List of tuples of (text, embedding)
embedding (langchain.embeddings.base.Embeddings) β Embedding function to use.
metadatas (Optional[List[dict]]) β List of metadata dictionaries to associate with documents.
metric (str) β Metric to use for indexing. Defaults to βangularβ.
trees (int) β Number of trees to use for indexing. Defaults to 100.
n_jobs (int) β Number of jobs to use for indexing. Defaults to -1
kwargs (Any) β
Return type
langchain.vectorstores.annoy.Annoy
This is a user-friendly interface that:
Creates an in memory docstore with provided embeddings
Initializes the Annoy database
This is intended to be a quick way to get started.
Example
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
db = Annoy.from_embeddings(text_embedding_pairs, embeddings)
save_local(folder_path, prefault=False)[source]ο
Save Annoy index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path (str) β folder path to save index, docstore,
and index_to_docstore_id to.
prefault (bool) β Whether to pre-load the index into memory.
Return type
None
classmethod load_local(folder_path, embeddings)[source]ο
Load Annoy index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path (str) β folder path to load index, docstore,
and index_to_docstore_id from.
embeddings (langchain.embeddings.base.Embeddings) β Embeddings to use when generating queries.
Return type
langchain.vectorstores.annoy.Annoy
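Example of persisting and reloading an index (a minimal sketch; the folder path and texts are illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Annoy

embeddings = OpenAIEmbeddings()
db = Annoy.from_texts(["hello world", "goodbye world"], embeddings)
db.save_local("annoy_index")                        # writes index, docstore and id mapping
reloaded = Annoy.load_local("annoy_index", embeddings)
docs = reloaded.similarity_search("hello", k=1)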
class langchain.vectorstores.AtlasDB(name, embedding_function=None, api_key=None, description='A description for your project', is_public=True, reset_project_if_exists=False)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Atlas: Nomicβs neural database and rhizomatic instrument.
To use, you should have the nomic python package installed.
Example
from langchain.vectorstores import AtlasDB
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = AtlasDB("my_project", embeddings.embed_query)
Parameters
name (str) β
embedding_function (Optional[Embeddings]) β
api_key (Optional[str]) β
description (str) β
is_public (bool) β
reset_project_if_exists (bool) β
Return type
None
add_texts(texts, metadatas=None, ids=None, refresh=True, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
ids (Optional[List[str]]) β An optional list of ids.
refresh (bool) β Whether or not to refresh indices with the updated data.
Default True.
kwargs (Any) β
Returns
List of IDs of the added texts.
Return type
List[str]
create_index(**kwargs)[source]ο
Creates an index in your project.
See
https://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index
for full detail.
Parameters
kwargs (Any) β
Return type
Any
similarity_search(query, k=4, **kwargs)[source]ο
Run similarity search with AtlasDB
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
kwargs (Any) β
Returns
List of documents most similar to the query text.
Return type
List[Document]
classmethod from_texts(texts, embedding=None, metadatas=None, ids=None, name=None, api_key=None, description='A description for your project', is_public=True, reset_project_if_exists=False, index_kwargs=None, **kwargs)[source]ο
Create an AtlasDB vectorstore from a raw documents.
Parameters
texts (List[str]) β The list of texts to ingest.
name (str) β Name of the project to create.
api_key (str) β Your nomic API key,
embedding (Optional[Embeddings]) β Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) β List of metadatas. Defaults to None.
ids (Optional[List[str]]) β Optional list of document IDs. If None,
ids will be auto created
description (str) β A description for your project.
is_public (bool) β Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) β Whether to reset this project if it
already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) β Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
kwargs (Any) β
Returns
Nomicβs neural database and finest rhizomatic instrument
Return type
AtlasDB
classmethod from_documents(documents, embedding=None, ids=None, name=None, api_key=None, persist_directory=None, description='A description for your project', is_public=True, reset_project_if_exists=False, index_kwargs=None, **kwargs)[source]ο
Create an AtlasDB vectorstore from a list of documents.
Parameters
name (str) β Name of the collection to create.
api_key (str) β Your nomic API key,
documents (List[Document]) β List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) β Embedding function. Defaults to None.
ids (Optional[List[str]]) β Optional list of document IDs. If None,
ids will be auto created
description (str) β A description for your project.
is_public (bool) β Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) β Whether to reset this project if
it already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) β Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
persist_directory (Optional[str]) β
kwargs (Any) β
Returns
Nomicβs neural database and finest rhizomatic instrument
Return type
AtlasDB
class langchain.vectorstores.AwaDB(table_name='langchain_awadb', embedding_model=None, log_and_data_dir=None, client=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Interface implemented by AwaDB vector stores.
Parameters
table_name (str) β
embedding_model (Optional[Embeddings]) β
log_and_data_dir (Optional[str]) β
client (Optional[awadb.Client]) β
Return type
None
add_texts(texts, metadatas=None, is_duplicate_texts=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
:param texts: Iterable of strings to add to the vectorstore.
:param metadatas: Optional list of metadatas associated with the texts.
:param is_duplicate_texts: Optional whether to duplicate texts.
:param kwargs: vectorstore specific parameters.
Returns
List of ids from adding the texts into the vectorstore.
Parameters
texts (Iterable[str]) β
metadatas (Optional[List[dict]]) β
is_duplicate_texts (Optional[bool]) β
kwargs (Any) β
Return type
List[str]
load_local(table_name, **kwargs)[source]ο
Parameters
table_name (str) β
kwargs (Any) β
Return type
bool
similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
similarity_search_with_score(query, k=4, **kwargs)[source]ο
Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_with_relevance_scores(query, k=4, **kwargs)[source]ο
Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_by_vector(embedding=None, k=4, scores=None, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (Optional[List[float]]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
scores (Optional[list]) β
kwargs (Any) β
Returns
List of Documents most similar to the query vector.
Return type
List[langchain.schema.Document]
create_table(table_name, **kwargs)[source]ο
Create a new table.
Parameters
table_name (str) β
kwargs (Any) β
Return type
bool
use(table_name, **kwargs)[source]ο
Use the specified table. If you do not know the table names, invoke list_tables first.
Parameters
table_name (str) β
kwargs (Any) β
Return type
bool
list_tables(**kwargs)[source]ο
List all the tables created by the client.
Parameters
kwargs (Any) β
Return type
List[str]
get_current_table(**kwargs)[source]ο
Get the current table.
Parameters
kwargs (Any) β
Return type
str
classmethod from_texts(texts, embedding=None, metadatas=None, table_name='langchain_awadb', logging_and_data_dir=None, client=None, **kwargs)[source]ο
Create an AwaDB vectorstore from a raw documents.
Parameters
texts (List[str]) β List of texts to add to the table.
embedding (Optional[Embeddings]) β Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) β List of metadatas. Defaults to None.
table_name (str) β Name of the table to create.
logging_and_data_dir (Optional[str]) β Directory of logging and persistence.
client (Optional[awadb.Client]) β AwaDB client
kwargs (Any) β
Returns
AwaDB vectorstore.
Return type
AwaDB
classmethod from_documents(documents, embedding=None, table_name='langchain_awadb', logging_and_data_dir=None, client=None, **kwargs)[source]ο
Create an AwaDB vectorstore from a list of documents.
If a logging_and_data_dir specified, the table will be persisted there.
Parameters
documents (List[Document]) β List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) β Embedding function. Defaults to None.
table_name (str) β Name of the table to create.
logging_and_data_dir (Optional[str]) β Directory to persist the table.
client (Optional[awadb.Client]) β AwaDB client
kwargs (Any) β
Returns
AwaDB vectorstore.
Return type
AwaDB
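Example (a minimal sketch; the texts and table name are illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import AwaDB

store = AwaDB.from_texts(
    ["hello awadb", "vector stores keep embeddings"],
    OpenAIEmbeddings(),
    table_name="langchain_awadb",
)
docs = store.similarity_search("hello", k=1)
scored = store.similarity_search_with_score("hello", k=1)  # (Document, score) pairs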
class langchain.vectorstores.AzureSearch(azure_search_endpoint, azure_search_key, index_name, embedding_function, search_type='hybrid', semantic_configuration_name=None, semantic_query_language='en-us', **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Parameters
azure_search_endpoint (str) β
azure_search_key (str) β
index_name (str) β
embedding_function (Callable) β
search_type (str) β
semantic_configuration_name (Optional[str]) β
semantic_query_language (str) β
kwargs (Any) β
add_texts(texts, metadatas=None, **kwargs)[source]ο
Add texts data to an existing index.
Parameters
texts (Iterable[str]) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
List[str]
similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
vector_search(query, k=4, **kwargs)[source]ο
Returns the most similar indexed documents to the query text.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
kwargs (Any) β
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
vector_search_with_score(query, k=4, filters=None)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filters (Optional[str]) β
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
hybrid_search(query, k=4, **kwargs)[source]ο
Returns the most similar indexed documents to the query text.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
kwargs (Any) β
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
hybrid_search_with_score(query, k=4, filters=None)[source]ο
Return docs most similar to query with a hybrid query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filters (Optional[str]) β
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
semantic_hybrid_search(query, k=4, **kwargs)[source]ο
Returns the most similar indexed documents to the query text.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
kwargs (Any) β
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
semantic_hybrid_search_with_score(query, k=4, filters=None)[source]ο
Return docs most similar to query with a hybrid query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filters (Optional[str]) β
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
classmethod from_texts(texts, embedding, metadatas=None, azure_search_endpoint='', azure_search_key='', index_name='langchain-index', **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
azure_search_endpoint (str) β
azure_search_key (str) β
index_name (str) β
kwargs (Any) β
Return type
langchain.vectorstores.azuresearch.AzureSearch
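Example (a minimal sketch; the endpoint, admin key, and index name are placeholders):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import AzureSearch

store = AzureSearch.from_texts(
    ["hello azure search"],
    OpenAIEmbeddings(),
    azure_search_endpoint="https://your-service.search.windows.net",  # placeholder
    azure_search_key="your-admin-key",                                # placeholder
    index_name="langchain-index",
)
docs = store.similarity_search("hello", k=2)       # uses the configured search_type
hybrid = store.hybrid_search("hello", k=2)         # explicit hybrid query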
class langchain.vectorstores.Cassandra(embedding, session, keyspace, table_name, ttl_seconds=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Cassandra embeddings platform.
There is no notion of a default table name, since each embedding
function implies its own vector dimension, which is part of the schema.
Example
from langchain.vectorstores import Cassandra
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
session = ...
keyspace = 'my_keyspace'
vectorstore = Cassandra(embeddings, session, keyspace, 'my_doc_archive')
Parameters
embedding (Embeddings) β
session (Session) β
keyspace (str) β
table_name (str) β
ttl_seconds (int | None) β
Return type
None
delete_collection()[source]ο
Just an alias for clear
(to better align with other VectorStore implementations).
Return type
None
clear()[source]ο
Empty the collection.
Return type
None
delete_by_document_id(document_id)[source]ο
Parameters
document_id (str) β
Return type
None
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
ids (Optional[List[str]], optional) β Optional list of IDs.
kwargs (Any) β
Returns
List of IDs of the added texts.
Return type
List[str]
similarity_search_with_score_id_by_vector(embedding, k=4)[source]ο
Return docs most similar to embedding vector.
No support for filter query (on metadata) along with vector search.
Parameters
embedding (str) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
Returns
List of (Document, score, id), the most similar to the query vector.
Return type
List[Tuple[langchain.schema.Document, float, str]]
similarity_search_with_score_id(query, k=4, **kwargs)[source]ο
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[Tuple[langchain.schema.Document, float, str]]
similarity_search_with_score_by_vector(embedding, k=4)[source]ο
Return docs most similar to embedding vector.
No support for filter query (on metadata) along with vector search.
Parameters
embedding (str) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
Returns
List of (Document, score), the most similar to the query vector.
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
similarity_search_by_vector(embedding, k=4, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
kwargs (Any) β
Returns
List of Documents most similar to the query vector.
Return type
List[langchain.schema.Document]
similarity_search_with_score(query, k=4, **kwargs)[source]ο
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[Tuple[langchain.schema.Document, float]]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param embedding: Embedding to look up documents similar to.
:param k: Number of Documents to return.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Returns
List of Documents selected by maximal marginal relevance.
Parameters
embedding (List[float]) β
k (int) β
fetch_k (int) β
lambda_mult (float) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param query: Text to look up documents similar to.
:param k: Number of Documents to return.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Optional.
Returns
List of Documents selected by maximal marginal relevance.
Parameters
query (str) β
k (int) β
fetch_k (int) β
lambda_mult (float) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]ο
Create a Cassandra vectorstore from raw texts.
No support for specifying text IDs
Returns
a Cassandra vectorstore.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
langchain.vectorstores.cassandra.CVST
classmethod from_documents(documents, embedding, **kwargs)[source]ο
Create a Cassandra vectorstore from a document list.
No support for specifying text IDs
Returns
a Cassandra vectorstore.
Parameters
documents (List[langchain.schema.Document]) β
embedding (langchain.embeddings.base.Embeddings) β
kwargs (Any) β
Return type
langchain.vectorstores.cassandra.CVST
class langchain.vectorstores.Chroma(collection_name='langchain', embedding_function=None, persist_directory=None, client_settings=None, collection_metadata=None, client=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around ChromaDB embeddings platform.
To use, you should have the chromadb python package installed.
Example
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Chroma("langchain_store", embeddings)
Parameters
collection_name (str) β
embedding_function (Optional[Embeddings]) β
persist_directory (Optional[str]) β
client_settings (Optional[chromadb.config.Settings]) β
collection_metadata (Optional[Dict]) β
client (Optional[chromadb.Client]) β
Return type
None
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
ids (Optional[List[str]], optional) β Optional list of IDs.
kwargs (Any) β
Returns
List of IDs of the added texts.
Return type
List[str]
similarity_search(query, k=4, filter=None, **kwargs)[source]ο
Run similarity search with Chroma.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
kwargs (Any) β
Returns
List of documents most similar to the query text.
Return type
List[Document]
similarity_search_by_vector(embedding, k=4, filter=None, **kwargs)[source]ο
Return docs most similar to embedding vector.
:param embedding: Embedding to look up documents similar to.
:type embedding: str
:param k: Number of Documents to return. Defaults to 4.
:type k: int
:param filter: Filter by metadata. Defaults to None.
:type filter: Optional[Dict[str, str]]
Returns
List of Documents most similar to the query vector.
Parameters
embedding (List[float]) β
k (int) β
filter (Optional[Dict[str, str]]) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
similarity_search_with_score(query, k=4, filter=None, **kwargs)[source]ο
Run similarity search with Chroma with distance.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
kwargs (Any) β
Returns
List of documents most similar to
the query text and cosine distance in float for each.
Lower score represents more similarity.
Return type
List[Tuple[Document, float]]
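A minimal usage sketch for this method (the texts, metadata keys and filter values below are illustrative, and an OpenAI API key is assumed to be configured):
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
# build a small in-memory collection to query against
db = Chroma.from_texts(
    ["apples are red", "bananas are yellow"],
    embeddings,
    metadatas=[{"fruit": "apple"}, {"fruit": "banana"}],
)
results = db.similarity_search_with_score("a red fruit", k=1, filter={"fruit": "apple"})
for doc, score in results:
    print(doc.page_content, score)  # lower distance means a closer match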
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
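A sketch of calling the MMR variant on an existing Chroma store (the store db and the query text are assumed to exist); lowering lambda_mult pushes results toward diversity:
# assumes `db` is an already-populated Chroma instance
docs = db.max_marginal_relevance_search(
    "fruit colors",
    k=2,               # number of documents to return
    fetch_k=10,        # candidates fetched before MMR re-ranking
    lambda_mult=0.25,  # closer to 0 favors diversity, closer to 1 favors similarity
)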
delete_collection()[source]ο
Delete the collection.
Return type
None
get(ids=None, where=None, limit=None, offset=None, where_document=None, include=None)[source]ο
Gets the collection.
Parameters
ids (Optional[OneOrMany[ID]]) β The ids of the embeddings to get. Optional.
where (Optional[Where]) β A Where type dict used to filter results by.
E.g. {βcolorβ : βredβ, βpriceβ: 4.20}. Optional.
limit (Optional[int]) β The number of documents to return. Optional.
offset (Optional[int]) β The offset to start returning results from.
Useful for paging results with limit. Optional.
where_document (Optional[WhereDocument]) β A WhereDocument type dict used to filter by the documents.
E.g. {$contains: {βtextβ: βhelloβ}}. Optional.
include (Optional[List[str]]) β A list of what to include in the results.
Can contain βembeddingsβ, βmetadatasβ, βdocumentsβ.
Ids are always included.
Defaults to [βmetadatasβ, βdocumentsβ]. Optional.
Return type
Dict[str, Any]
persist()[source]ο
Persist the collection.
This can be used to explicitly persist the data to disk.
It will also be called automatically when the object is destroyed.
Return type
None
update_document(document_id, document)[source]ο
Update a document in the collection.
Parameters
document_id (str) β ID of the document to update.
document (Document) β Document to update.
Return type
None
classmethod from_texts(texts, embedding=None, metadatas=None, ids=None, collection_name='langchain', persist_directory=None, client_settings=None, client=None, **kwargs)[source]ο
Create a Chroma vectorstore from a list of raw texts.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
texts (List[str]) β List of texts to add to the collection.
collection_name (str) β Name of the collection to create.
persist_directory (Optional[str]) β Directory to persist the collection.
embedding (Optional[Embeddings]) β Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) β List of metadatas. Defaults to None.
ids (Optional[List[str]]) β List of document IDs. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) β Chroma client settings
client (Optional[chromadb.Client]) β
kwargs (Any) β
Returns
Chroma vectorstore.
Return type
Chroma
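A sketch of building a persisted collection and re-opening it later (the directory path is illustrative):
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
db = Chroma.from_texts(
    ["doc one", "doc two"],
    embeddings,
    persist_directory="./chroma_store",
)
db.persist()  # explicitly flush the collection to disk
# later: re-open the persisted collection with the same embedding function
db = Chroma(persist_directory="./chroma_store", embedding_function=embeddings)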
classmethod from_documents(documents, embedding=None, ids=None, collection_name='langchain', persist_directory=None, client_settings=None, client=None, **kwargs)[source]ο
Create a Chroma vectorstore from a list of documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
collection_name (str) β Name of the collection to create.
persist_directory (Optional[str]) β Directory to persist the collection.
ids (Optional[List[str]]) β List of document IDs. Defaults to None.
documents (List[Document]) β List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) β Embedding function. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) β Chroma client settings
client (Optional[chromadb.Client]) β
kwargs (Any) β
Returns
Chroma vectorstore.
Return type
Chroma
delete(ids)[source]ο
Delete by vector IDs.
Parameters
ids (List[str]) β List of ids to delete.
Return type
None
class langchain.vectorstores.Clickhouse(embedding, config=None, **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around the ClickHouse vector database.
You need the clickhouse-connect python package and a valid account
to connect to ClickHouse.
ClickHouse not only supports search with simple vector indexes;
it also supports complex queries with multiple conditions,
constraints and even sub-queries.
For more information, please visit the [ClickHouse official site](https://clickhouse.com/clickhouse).
Parameters
embedding (Embeddings) β
config (Optional[ClickhouseSettings]) β
kwargs (Any) β
Return type
None
escape_str(value)[source]ο
Parameters
value (str) β
Return type
str
add_texts(texts, metadatas=None, batch_size=32, ids=None, **kwargs)[source]ο
Insert more texts through the embeddings and add to the VectorStore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the VectorStore.
ids (Optional[Iterable[str]]) β Optional list of ids to associate with the texts.
batch_size (int) β Batch size of insertion
metadata β Optional column data to be inserted
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Returns
List of ids from adding the texts into the VectorStore.
Return type
List[str]
classmethod from_texts(texts, embedding, metadatas=None, config=None, text_ids=None, batch_size=32, **kwargs)[source]ο
Create ClickHouse wrapper with existing texts
Parameters
embedding_function (Embeddings) β Function to extract text embedding
texts (Iterable[str]) β List or tuple of strings to be added
config (ClickHouseSettings, Optional) β ClickHouse configuration
text_ids (Optional[Iterable], optional) β IDs for the texts.
Defaults to None.
batch_size (int, optional) β Batchsize when transmitting data to ClickHouse.
Defaults to 32.
metadatas (List[dict], optional) β Metadata attached to each text. Defaults to None.
Other keyword arguments are passed through to [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api).
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[Dict[Any, Any]]]) β
kwargs (Any) β
Returns
ClickHouse Index
Return type
langchain.vectorstores.clickhouse.Clickhouse
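A sketch of building a ClickHouse-backed store from raw texts; it assumes a ClickHouse server reachable with the default settings (localhost:8123) and uses an illustrative table name:
from langchain.vectorstores import Clickhouse, ClickhouseSettings
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
settings = ClickhouseSettings(table="langchain_demo")  # host/port fall back to the defaults
docsearch = Clickhouse.from_texts(
    ["first document", "second document"],
    embeddings,
    config=settings,
)
docs = docsearch.similarity_search("first", k=1)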
similarity_search(query, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with ClickHouse
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE β Never let end users fill this in directly, and always be aware
of SQL injection. When filtering on metadata, remember to
use {self.metadata_column}.attribute instead of the attribute
name alone. The default metadata column name is metadata.
kwargs (Any) β
Returns
List of Documents
Return type
List[Document]
similarity_search_by_vector(embedding, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with ClickHouse by vector.
Parameters
embedding (List[float]) β embedding vector to search with
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE β Never let end users fill this in directly, and always be aware
of SQL injection. When filtering on metadata, remember to
use {self.metadata_column}.attribute instead of the attribute
name alone. The default metadata column name is metadata.
embedding (List[float]) β
kwargs (Any) β
Returns
List of (Document, similarity)
Return type
List[Document]
similarity_search_with_relevance_scores(query, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with ClickHouse
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE β Never let end users fill this in directly, and always be aware
of SQL injection. When filtering on metadata, remember to
use {self.metadata_column}.attribute instead of the attribute
name alone. The default metadata column name is metadata.
kwargs (Any) β
Returns
List of documents
Return type
List[Document]
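A sketch of the where_str filter described above; the metadata key source and its value are illustrative, and the condition must never be built from untrusted user input:
# assumes `docsearch` is an existing Clickhouse store whose rows carry a metadata field
docs = docsearch.similarity_search(
    "first",
    k=4,
    where_str=f"{docsearch.metadata_column}.source = 'wiki'",  # 'source' is an illustrative key
)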
drop()[source]ο
Helper function: Drop data
Return type
None
property metadata_column: strο
pydantic settings langchain.vectorstores.ClickhouseSettings[source]ο
Bases: pydantic.env_settings.BaseSettings
ClickHouse Client Configuration
Attribute:
clickhouse_host (str) : An URL to connect to the ClickHouse backend. Defaults to 'localhost'.
clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8123.
username (str) : Username to login. Defaults to None.
password (str) : Password to login. Defaults to None.
index_type (str) : index type string. Defaults to 'annoy'.
index_param (list) : index build parameter.
index_query_params (dict) : index query parameters.
database (str) : Database name to find the table. Defaults to 'default'.
table (str) : Table name to operate on. Defaults to 'langchain'.
metric (str) : Metric to compute distance; supported values are ('angular', 'euclidean', 'manhattan', 'hamming',
'dot'). Defaults to 'angular'.
https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169
column_map (Dict) : Column type map to project column names onto langchain semantics. Must have the keys text, id and vector,
and must be the same size as the number of columns. For example:
.. code-block:: python
{'id': 'text_id',
'uuid': 'global_unique_id',
'embedding': 'text_embedding',
'document': 'text_plain',
'metadata': 'metadata_dictionary_in_json',
}
Defaults to the identity map.
Show JSON schema{
"title": "ClickhouseSettings",
"description": "ClickHouse Client Configuration\n\nAttribute:\n clickhouse_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (list): index build parameter.\n index_query_params(dict): index query parameters.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('angular', 'euclidean', 'manhattan', 'hamming',\n 'dot'). Defaults to 'angular'.\n https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169\n\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n\n {\n 'id': 'text_id',\n 'uuid': 'global_unique_id'\n 'embedding': 'text_embedding',\n 'document': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n\n Defaults to identity map.",
"type": "object",
"properties": {
"host": {
"title": "Host",
"default": "localhost",
"env_names": "{'clickhouse_host'}",
"type": "string"
},
"port": {
"title": "Port",
"default": 8123,
"env_names": "{'clickhouse_port'}",
"type": "integer"
},
"username": {
"title": "Username",
"env_names": "{'clickhouse_username'}",
"type": "string"
},
"password": {
"title": "Password",
"env_names": "{'clickhouse_password'}",
"type": "string"
},
"index_type": {
"title": "Index Type",
"default": "annoy",
"env_names": "{'clickhouse_index_type'}",
"type": "string"
},
"index_param": {
"title": "Index Param",
"default": [
"'L2Distance'",
100
],
"env_names": "{'clickhouse_index_param'}",
"anyOf": [
{
"type": "array",
"items": {}
},
{
"type": "object"
}
]
},
"index_query_params": {
"title": "Index Query Params",
"default": {},
"env_names": "{'clickhouse_index_query_params'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"column_map": {
"title": "Column Map",
"default": {
"id": "id",
"uuid": "uuid",
"document": "document",
"embedding": "embedding",
"metadata": "metadata"
},
"env_names": "{'clickhouse_column_map'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"database": {
"title": "Database",
"default": "default",
"env_names": "{'clickhouse_database'}",
"type": "string"
},
"table": {
"title": "Table",
"default": "langchain",
"env_names": "{'clickhouse_table'}",
"type": "string"
},
"metric": {
"title": "Metric",
"default": "angular",
"env_names": "{'clickhouse_metric'}",
"type": "string"
}
},
"additionalProperties": false
}
Config
env_file: str = .env
env_file_encoding: str = utf-8
env_prefix: str = clickhouse_
Fields
column_map (Dict[str, str])
database (str)
host (str)
index_param (Optional[Union[List, Dict]])
index_query_params (Dict[str, str])
index_type (str)
metric (str)
password (Optional[str])
port (int)
table (str)
username (Optional[str])
attribute column_map: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata': 'metadata', 'uuid': 'uuid'}ο
attribute database: str = 'default'ο
attribute host: str = 'localhost'ο
attribute index_param: Optional[Union[List, Dict]] = ["'L2Distance'", 100]ο
attribute index_query_params: Dict[str, str] = {}ο
attribute index_type: str = 'annoy'ο
attribute metric: str = 'angular'ο
attribute password: Optional[str] = Noneο
attribute port: int = 8123ο
attribute table: str = 'langchain'ο
attribute username: Optional[str] = Noneο
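Because this is a pydantic settings class with the clickhouse_ environment prefix, the fields above can be supplied either through environment variables or explicitly; a minimal sketch (the host name is illustrative):
import os
from langchain.vectorstores import ClickhouseSettings
# env variable names follow the env_names shown above
os.environ["clickhouse_host"] = "clickhouse.internal.example"
os.environ["clickhouse_port"] = "8123"
settings = ClickhouseSettings()  # reads the clickhouse_* environment variables
# or override individual fields directly
settings = ClickhouseSettings(table="my_vectors", metric="angular")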
class langchain.vectorstores.DeepLake(dataset_path='./deeplake/', token=None, embedding_function=None, read_only=False, ingestion_batch_size=1000, num_workers=0, verbose=True, exec_option='python', **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Deep Lake, a data lake for deep learning applications.
We integrated deeplake's similarity search and filtering for fast prototyping.
It now supports Tensor Query Language (TQL) for production use cases
over billions of rows.
Why Deep Lake?
Not only stores embeddings, but also the original data with version control.
Serverless; it doesn't require another service and can be used with major cloud providers (S3, GCS, etc.).
More than just a multi-modal vector store: you can use the dataset to fine-tune your own LLM models.
To use, you should have the deeplake python package installed.
Example
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = DeepLake("langchain_store", embeddings.embed_query)
Parameters
dataset_path (str) β
token (Optional[str]) β
embedding_function (Optional[Embeddings]) β
read_only (bool) β
ingestion_batch_size (int) β
num_workers (int) β
verbose (bool) β
exec_option (str) β
kwargs (Any) β
Return type
None
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Examples
>>> ids = deeplake_vectorstore.add_texts(
... texts = <list_of_texts>,
... metadatas = <list_of_metadata_jsons>,
... ids = <list_of_ids>,
... )
Parameters
texts (Iterable[str]) β Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
ids (Optional[List[str]], optional) β Optional list of IDs.
**kwargs β other optional keyword arguments.
kwargs (Any) β
Returns
List of IDs of the added texts.
Return type
List[str]
similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Examples
>>> # Search using an embedding
>>> data = vector_store.similarity_search(
... query=<your_query>,
... k=<num_items>,
... exec_option=<preferred_exec_option>,
... )
>>> # Run tql search:
>>> data = vector_store.tql_search(
... tql_query="SELECT * WHERE id == <id>",
... exec_option="compute_engine",
... )
Parameters
k (int) β Number of Documents to return. Defaults to 4.
query (str) β Text to look up similar documents.
**kwargs β Additional keyword arguments include:
embedding (Callable): Embedding function to use. Defaults to None.
distance_metric (str): 'L2' for Euclidean, 'L1' for Nuclear, 'max'
for L-infinity, 'cos' for cosine, 'dot' for dot product.
Defaults to 'L2'.
filter (Union[Dict, Callable], optional): Additional filter before embedding search.
- Dict: Key-value search on tensors of htype json
(a sample must satisfy all key-value filters),
e.g. {'tensor_1': {'key': value}, 'tensor_2': {'key': value}}
- Function: Compatible with deeplake.filter.
Defaults to None.
exec_option (str): Supports 3 ways to perform searching:
'python', 'compute_engine', or 'tensor_db'. Defaults to 'python'.
- 'python': Pure-python implementation for the client.
WARNING: not recommended for big datasets.
- 'compute_engine': C++ implementation of the Compute Engine for the client. Not for in-memory or local datasets.
- 'tensor_db': Managed Tensor Database for storage and query. Only for data in the Deep Lake Managed Database.
Use runtime = {'db_engine': True} during dataset creation.
kwargs (Any) β
Returns
List of Documents most similar to the query vector.
Return type
List[Document]
similarity_search_by_vector(embedding, k=4, **kwargs)[source]ο
Return docs most similar to embedding vector.
Examples
>>> # Search using an embedding
>>> data = vector_store.similarity_search_by_vector(
... embedding=<your_embedding>,
... k=<num_items_to_return>,
... exec_option=<preferred_exec_option>,
... )
Parameters
embedding (Union[List[float], np.ndarray]) β Embedding to find similar docs.
k (int) β Number of Documents to return. Defaults to 4.
**kwargs β Additional keyword arguments including:
filter (Union[Dict, Callable], optional):
Additional filter before embedding search.
- Dict: Key-value search on tensors of htype json; a sample matches
if all key-value filters are satisfied,
e.g. {'tensor_name_1': {'key': value},
'tensor_name_2': {'key': value}}
- Function: Any function compatible with deeplake.filter.
Defaults to None.
exec_option (str): Options for search execution include
'python', 'compute_engine', or 'tensor_db'. Defaults to
'python'.
- 'python': Pure-python implementation running on the client.
Can be used for data stored anywhere. WARNING: using this
option with big datasets is discouraged due to potential
memory issues.
- 'compute_engine': Performant C++ implementation of the Deep
Lake Compute Engine. Runs on the client and can be used for
any data stored in or connected to Deep Lake. It cannot be
used with in-memory or local datasets.
- 'tensor_db': Performant, fully-hosted Managed Tensor Database.
Responsible for storage and query execution. Only available
for data stored in the Deep Lake Managed Database.
To store datasets in this database, specify
runtime = {'db_engine': True} during dataset creation.
distance_metric (str): 'L2' for Euclidean, 'L1' for Nuclear,
'max' for L-infinity distance, 'cos' for cosine similarity,
'dot' for dot product. Defaults to 'L2'.
kwargs (Any) β
Returns
List of Documents most similar to the query vector.
Return type
List[Document]
similarity_search_with_score(query, k=4, **kwargs)[source]ο
Run similarity search with Deep Lake with distance returned.
Examples:
>>> data = vector_store.similarity_search_with_score(
...     query=<your_query>,
...     embedding=<your_embedding_function>,
...     k=<number_of_items_to_return>,
...     exec_option=<preferred_exec_option>,
... )
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
**kwargs β Additional keyword arguments. Some of these arguments are:
distance_metric: 'L2' for Euclidean, 'L1' for Nuclear, 'max' for L-infinity
distance, 'cos' for cosine similarity, 'dot' for dot product.
Defaults to 'L2'.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
embedding_function (Callable): Embedding function to use. Defaults
to None.
exec_option (str): DeepLakeVectorStore supports 3 ways to perform
searching. It could be either 'python', 'compute_engine' or
'tensor_db'. Defaults to 'python'.
- 'python': Pure-python implementation running on the client.
Can be used for data stored anywhere. WARNING: using this
option with big datasets is discouraged due to potential
memory issues.
- 'compute_engine': Performant C++ implementation of the Deep
Lake Compute Engine. Runs on the client and can be used for
any data stored in or connected to Deep Lake. It cannot be used
with in-memory or local datasets.
- 'tensor_db': Performant, fully-hosted Managed Tensor Database.
Responsible for storage and query execution. Only available for
data stored in the Deep Lake Managed Database. To store datasets
in this database, specify runtime = {'db_engine': True}
during dataset creation.
kwargs (Any) β
Returns
List of documents most similar to the querytext with distance in float.
Return type
List[Tuple[Document, float]]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, exec_option=None, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance. Maximal marginal
relevance optimizes for similarity to query AND diversity among selected docs.
Examples:
>>> data = vector_store.max_marginal_relevance_search_by_vector(
...     embedding=<your_embedding>,
...     fetch_k=<elements_to_fetch_before_mmr_search>,
...     k=<number_of_items_to_return>,
...     exec_option=<preferred_exec_option>,
... )
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch for MMR algorithm.
lambda_mult (float) β Number between 0 and 1 determining the degree of diversity.
0 corresponds to max diversity and 1 to min diversity. Defaults to 0.5.
exec_option (str) β DeepLakeVectorStore supports 3 ways of searching.
Could be 'python', 'compute_engine' or 'tensor_db'. Defaults to
'python'.
- 'python': Pure-python implementation running on the client.
Can be used for data stored anywhere. WARNING: using this
option with big datasets is discouraged due to potential
memory issues.
- 'compute_engine': Performant C++ implementation of the Deep
Lake Compute Engine. Runs on the client and can be used for
any data stored in or connected to Deep Lake. It cannot be used
with in-memory or local datasets.
- 'tensor_db': Performant, fully-hosted Managed Tensor Database.
Responsible for storage and query execution. Only available for
data stored in the Deep Lake Managed Database. To store datasets
in this database, specify runtime = {'db_engine': True}
during dataset creation.
**kwargs β Additional keyword arguments.
kwargs (Any) β
Returns
List[Documents] - A list of documents.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, exec_option=None, **kwargs)[source]ο
Return docs selected using maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Examples:
>>> # Search using an embedding
>>> data = vector_store.max_marginal_relevance_search(
...     query = <query_to_search>,
...     embedding_function = <embedding_function_for_query>,
...     k = <number_of_items_to_return>,
...     exec_option = <preferred_exec_option>,
... )
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents for MMR algorithm.
lambda_mult (float) β Value between 0 and 1. 0 corresponds
to maximum diversity and 1 to minimum.
Defaults to 0.5.
exec_option (str) β Supports 3 ways to perform searching.
- 'python': Pure-python implementation running on the client.
Can be used for data stored anywhere. WARNING: using this
option with big datasets is discouraged due to potential
memory issues.
- 'compute_engine': Performant C++ implementation of the Deep
Lake Compute Engine. Runs on the client and can be used for
any data stored in or connected to Deep Lake. It cannot be
used with in-memory or local datasets.
- 'tensor_db': Performant, fully-hosted Managed Tensor Database.
Responsible for storage and query execution. Only available
for data stored in the Deep Lake Managed Database. To store
datasets in this database, specify
runtime = {'db_engine': True} during dataset creation.
**kwargs β Additional keyword arguments
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Raises
ValueError β when MMR search is on but the embedding function is
not specified.
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding=None, metadatas=None, ids=None, dataset_path='./deeplake/', **kwargs)[source]ο
Create a Deep Lake dataset from raw texts.
If a dataset_path is specified, the dataset will be persisted in that location,
otherwise by default at ./deeplake
Examples:
>>> # Search using an embedding
>>> vector_store = DeepLake.from_texts(
...     texts = <the_texts_that_you_want_to_embed>,
...     embedding_function = <embedding_function_for_query>,
...     k = <number_of_items_to_return>,
...     exec_option = <preferred_exec_option>,
... )
Parameters
dataset_path (str) β
The full path to the dataset. Can be:
- A Deep Lake cloud path of the form hub://username/dataset_name. To write to Deep Lake cloud datasets,
ensure that you are logged in to Deep Lake
(use 'activeloop login' from the command line).
- An AWS S3 path of the form s3://bucketname/path/to/dataset. Credentials are required in the environment.
- A Google Cloud Storage path of the form gcs://bucketname/path/to/dataset. Credentials are required
in the environment.
- A local file system path of the form ./path/to/dataset, ~/path/to/dataset or path/to/dataset.
- An in-memory path of the form mem://path/to/dataset, which doesn't save the dataset but keeps it in memory instead.
Should be used only for testing as it does not persist.
texts (List[str]) β List of texts to add.
embedding (Optional[Embeddings]) β Embedding function. Defaults to None.
Note, in other places, it is called embedding_function.
metadatas (Optional[List[dict]]) β List of metadatas. Defaults to None.
ids (Optional[List[str]]) β List of document IDs. Defaults to None.
**kwargs β Additional keyword arguments.
kwargs (Any) β
Returns
Deep Lake dataset.
Return type
DeepLake
Raises
ValueError β If βembeddingβ is provided in kwargs. This is deprecated,
please use embedding_function instead.
delete(ids=None, filter=None, delete_all=None)[source]ο
Delete the entities in the dataset.
Parameters
ids (Optional[List[str]], optional) β The document_ids to delete.
Defaults to None.
filter (Optional[Dict[str, str]], optional) β The filter to delete by.
Defaults to None.
delete_all (Optional[bool], optional) β Whether to drop the dataset.
Defaults to None.
Returns
Whether the delete operation was successful.
Return type
bool
classmethod force_delete_by_path(path)[source]ο
Force delete dataset by path.
Parameters
path (str) β path of the dataset to delete.
Raises
ValueError β if deeplake is not installed.
Return type
None
delete_dataset()[source]ο
Delete the collection.
Return type
None
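A sketch of creating a local Deep Lake dataset from texts and querying it (the dataset path is illustrative; cloud hub:// paths additionally require an Activeloop login):
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
db = DeepLake.from_texts(
    ["first document", "second document"],
    embedding=embeddings,
    dataset_path="./deeplake_demo/",
)
docs = db.similarity_search("first", k=1)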
class langchain.vectorstores.DocArrayHnswSearch(doc_index, embedding)[source]ο
Bases: langchain.vectorstores.docarray.base.DocArrayIndex
Wrapper around HnswLib storage.
To use it, you should have the docarray package with version >=0.32.0 installed.
You can install it with pip install βlangchain[docarray]β.
Parameters
doc_index (BaseDocIndex) β
embedding (langchain.embeddings.base.Embeddings) β
classmethod from_params(embedding, work_dir, n_dim, dist_metric='cosine', max_elements=1024, index=True, ef_construction=200, ef=10, M=16, allow_replace_deleted=True, num_threads=1, **kwargs)[source]ο
Initialize DocArrayHnswSearch store.
Parameters
embedding (Embeddings) β Embedding function.
work_dir (str) β path to the location where all the data will be stored.
n_dim (int) β dimension of an embedding.
dist_metric (str) β Distance metric for DocArrayHnswSearch can be one of:
βcosineβ, βipβ, and βl2β. Defaults to βcosineβ.
max_elements (int) β Maximum number of vectors that can be stored.
Defaults to 1024.
index (bool) β Whether an index should be built for this field.
Defaults to True.
ef_construction (int) β defines a construction time/accuracy trade-off.
Defaults to 200.
ef (int) β parameter controlling query time/accuracy trade-off.
Defaults to 10.
M (int) β parameter that defines the maximum number of outgoing
connections in the graph. Defaults to 16.
allow_replace_deleted (bool) β Enables replacing of deleted elements
with new added ones. Defaults to True.
num_threads (int) β Sets the number of cpu threads to use. Defaults to 1.
**kwargs β Other keyword arguments to be passed to the get_doc_cls method.
kwargs (Any) β
Return type
langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch
classmethod from_texts(texts, embedding, metadatas=None, work_dir=None, n_dim=None, **kwargs)[source]ο
Create a DocArrayHnswSearch store and insert data.
Parameters
texts (List[str]) β Text data.
embedding (Embeddings) β Embedding function.
metadatas (Optional[List[dict]]) β Metadata for each text if it exists.
Defaults to None.
work_dir (str) β path to the location where all the data will be stored.
n_dim (int) β dimension of an embedding.
**kwargs β Other keyword arguments to be passed to the __init__ method.
kwargs (Any) β
Returns
DocArrayHnswSearch Vector Store
Return type
langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch
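A sketch of from_texts for this store; n_dim must match the embedding dimensionality (1536 for OpenAI's text-embedding-ada-002), and the work_dir path is illustrative:
from langchain.vectorstores import DocArrayHnswSearch
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
db = DocArrayHnswSearch.from_texts(
    ["first document", "second document"],
    embeddings,
    work_dir="./hnswlib_store/",  # all index data is written here
    n_dim=1536,                   # must equal the embedding size
)
docs = db.similarity_search("first", k=1)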
class langchain.vectorstores.DocArrayInMemorySearch(doc_index, embedding)[source]ο
Bases: langchain.vectorstores.docarray.base.DocArrayIndex
Wrapper around in-memory storage for exact search.
To use it, you should have the docarray package with version >=0.32.0 installed.
You can install it with pip install βlangchain[docarray]β.
Parameters
doc_index (BaseDocIndex) β
embedding (langchain.embeddings.base.Embeddings) β
classmethod from_params(embedding, metric='cosine_sim', **kwargs)[source]ο
Initialize DocArrayInMemorySearch store.
Parameters
embedding (Embeddings) β Embedding function.
metric (str) β metric for exact nearest-neighbor search.
Can be one of: βcosine_simβ, βeuclidean_distβ and βsqeuclidean_distβ.
Defaults to βcosine_simβ.
**kwargs β Other keyword arguments to be passed to the get_doc_cls method.
kwargs (Any) β
Return type
langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch
classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]ο
Create a DocArrayInMemorySearch store and insert data.
Parameters
texts (List[str]) β Text data.
embedding (Embeddings) β Embedding function.
metadatas (Optional[List[Dict[Any, Any]]]) β Metadata for each text
if it exists. Defaults to None.
metric (str) β metric for exact nearest-neighbor search.
Can be one of: βcosine_simβ, βeuclidean_distβ and βsqeuclidean_distβ.
Defaults to βcosine_simβ.
kwargs (Any) β
Returns
DocArrayInMemorySearch Vector Store
Return type
langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch
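A sketch of the in-memory variant; nothing is persisted, so it suits small, ephemeral workloads:
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
db = DocArrayInMemorySearch.from_texts(
    ["first document", "second document"],
    embeddings,
    metric="cosine_sim",  # default; "euclidean_dist" and "sqeuclidean_dist" also work
)
docs = db.similarity_search("first", k=1)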
class langchain.vectorstores.ElasticVectorSearch(elasticsearch_url, index_name, embedding, *, ssl_verify=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore, abc.ABC
Wrapper around Elasticsearch as a vector database.
To connect to an Elasticsearch instance that does not require
login credentials, pass the Elasticsearch URL and index name along with the
embedding object to the constructor.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url="http://localhost:9200",
index_name="test_index",
embedding=embedding
)
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the βDeploymentsβ page.
To obtain your Elastic Cloud password for the default βelasticβ user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to βSecurityβ > βUsersβ
Locate the βelasticβ user and click βEditβ
Click βReset passwordβ
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
elasticsearch_url = f"https://username:password@{elastic_host}:9243"
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url=elasticsearch_url,
index_name="test_index",
embedding=embedding
)
Parameters
elasticsearch_url (str) β The URL for the Elasticsearch instance.
index_name (str) β The name of the Elasticsearch index for the embeddings.
embedding (Embeddings) β An object that provides the ability to embed text.
It should be an instance of a class that subclasses the Embeddings
abstract base class, such as OpenAIEmbeddings()
ssl_verify (Optional[Dict[str, Any]]) β
Raises
ValueError β If the elasticsearch python package is not installed.
add_texts(texts, metadatas=None, refresh_indices=True, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
refresh_indices (bool) β bool to refresh ElasticSearch indices
ids (Optional[List[str]]) β
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search(query, k=4, filter=None, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[dict]) β
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
similarity_search_with_score(query, k=4, filter=None, **kwargs)[source]ο
Return docs most similar to query.
:param query: Text to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
Parameters
query (str) β
k (int) β
filter (Optional[dict]) β
kwargs (Any) β
Return type
List[Tuple[langchain.schema.Document, float]]
classmethod from_texts(texts, embedding, metadatas=None, elasticsearch_url=None, index_name=None, refresh_indices=True, **kwargs)[source]ο
Construct ElasticVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Elasticsearch instance.
Adds the documents to the newly created Elasticsearch index.
This is intended to be a quick way to get started.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch.from_texts(
texts,
embeddings,
elasticsearch_url="http://localhost:9200"
)
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
elasticsearch_url (Optional[str]) β
index_name (Optional[str]) β
refresh_indices (bool) β
kwargs (Any) β
Return type
langchain.vectorstores.elastic_vector_search.ElasticVectorSearch
create_index(client, index_name, mapping)[source]ο
Parameters
client (Any) β
index_name (str) β
mapping (Dict) β
Return type
None
client_search(client, index_name, script_query, size)[source]ο
Parameters
client (Any) β
index_name (str) β
script_query (Dict) β
size (int) β
Return type
Any
delete(ids)[source]ο
Delete by vector IDs.
Parameters
ids (List[str]) β List of ids to delete.
Return type
None
class langchain.vectorstores.FAISS(embedding_function, index, docstore, index_to_docstore_id, relevance_score_fn=<function _default_relevance_score_fn>, normalize_L2=False)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around FAISS vector database.
To use, you should have the faiss python package installed.
Example
from langchain import FAISS
faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)
Parameters
embedding_function (Callable) β
index (Any) β
docstore (Docstore) β
index_to_docstore_id (Dict[int, str]) β
relevance_score_fn (Optional[Callable[[float], float]]) β
normalize_L2 (bool) β
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
ids (Optional[List[str]]) β Optional list of unique IDs.
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
add_embeddings(text_embeddings, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
text_embeddings (Iterable[Tuple[str, List[float]]]) β Iterable pairs of string and embedding to
add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
ids (Optional[List[str]]) β Optional list of unique IDs.
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search_with_score_by_vector(embedding, k=4, filter=None, fetch_k=20, **kwargs)[source]ο
Return docs most similar to query.
Parameters
embedding (List[float]) β Embedding vector to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, Any]]) β Filter by metadata. Defaults to None.
fetch_k (int) β (Optional[int]) Number of Documents to fetch before filtering.
Defaults to 20.
**kwargs β kwargs to be passed to similarity search. Can include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
kwargs (Any) β
Returns
List of documents most similar to the query text and L2 distance
in float for each. Lower score represents more similarity.
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_with_score(query, k=4, filter=None, fetch_k=20, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
fetch_k (int) β (Optional[int]) Number of Documents to fetch before filtering.
Defaults to 20.
kwargs (Any) β
Returns
List of documents most similar to the query text with
L2 distance in float. Lower score represents more similarity.
Return type
List[Tuple[langchain.schema.Document, float]]
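A sketch of using the metadata filter together with fetch_k (the metadata key source is illustrative; filtering happens after fetch_k candidates are retrieved, so fetch_k should be large enough):
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
db = FAISS.from_texts(
    ["foo", "bar", "baz"],
    embeddings,
    metadatas=[{"source": "wiki"}, {"source": "blog"}, {"source": "wiki"}],
)
results = db.similarity_search_with_score("foo", k=2, filter={"source": "wiki"}, fetch_k=10)
for doc, score in results:
    print(doc.page_content, score)  # L2 distance; lower means more similar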
similarity_search_by_vector(embedding, k=4, filter=None, fetch_k=20, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
fetch_k (int) β (Optional[int]) Number of Documents to fetch before filtering.
Defaults to 20.
kwargs (Any) β
Returns
List of Documents most similar to the embedding.
Return type
List[langchain.schema.Document]
similarity_search(query, k=4, filter=None, fetch_k=20, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, Any]]) β (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
fetch_k (int) β (Optional[int]) Number of Documents to fetch before filtering.
Defaults to 20.
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch before filtering to
pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, Any]]) β
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch before filtering (if needed) to
pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, Any]]) β
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
merge_from(target)[source]ο
Merge another FAISS object with the current one.
Add the target FAISS to the current one.
Parameters
target (langchain.vectorstores.faiss.FAISS) β FAISS object you wish to merge into the current one
Returns
None.
Return type
None
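A sketch of merging two FAISS stores built with the same embedding function; after the call the first store contains the documents of both:
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
db1 = FAISS.from_texts(["foo"], embeddings)
db2 = FAISS.from_texts(["bar"], embeddings)
db1.merge_from(db2)  # db1 now answers queries over both "foo" and "bar"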
classmethod from_texts(texts, embedding, metadatas=None, ids=None, **kwargs)[source]ο
Construct FAISS wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the FAISS database
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
faiss = FAISS.from_texts(texts, embeddings)
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
ids (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.vectorstores.faiss.FAISS
classmethod from_embeddings(text_embeddings, embedding, metadatas=None, ids=None, **kwargs)[source]ο
Construct FAISS wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the FAISS database
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
faiss = FAISS.from_embeddings(text_embedding_pairs, embeddings)
Parameters
text_embeddings (List[Tuple[str, List[float]]]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
ids (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.vectorstores.faiss.FAISS
save_local(folder_path, index_name='index')[source]ο
Save FAISS index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path (str) β folder path to save index, docstore,
and index_to_docstore_id to.
index_name (str) β for saving with a specific index file name
Return type
None
classmethod load_local(folder_path, embeddings, index_name='index')[source]ο
Load FAISS index, docstore, and index_to_docstore_id from disk.
Parameters
folder_path (str) β folder path to load index, docstore,
and index_to_docstore_id from.
embeddings (langchain.embeddings.base.Embeddings) β Embeddings to use when generating queries
index_name (str) β for loading a specific index file name
Return type
langchain.vectorstores.faiss.FAISS
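A sketch of the save/load round trip (the folder name is illustrative); the same embeddings object must be supplied when loading so queries are embedded consistently:
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
db = FAISS.from_texts(["foo", "bar"], embeddings)
db.save_local("faiss_index")  # writes the index, docstore and id mapping to this folder
restored = FAISS.load_local("faiss_index", embeddings)
docs = restored.similarity_search("foo", k=1)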
class langchain.vectorstores.Hologres(connection_string, embedding_function, ndims=1536, table_name='langchain_pg_embedding', pre_delete_table=False, logger=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
VectorStore implementation using Hologres.
connection_string is a Hologres connection string.
embedding_function is any embedding function implementing the langchain.embeddings.base.Embeddings interface.
ndims is the number of dimensions of the embedding output.
table_name is the name of the table to store embeddings and data in (default: langchain_pg_embedding).
- NOTE: The table will be created when initializing the store (if it does not exist),
so make sure the user has the right permissions to create tables.
pre_delete_table, if True, will delete the table if it exists (default: False).
- Useful for testing.
Parameters
connection_string (str) β
embedding_function (Embeddings) β
ndims (int) β
table_name (str) β
pre_delete_table (bool) β
logger (Optional[logging.Logger]) β
Return type
None
create_vector_extension()[source]ο
Return type
None
create_table()[source]ο
Return type
None
add_embeddings(texts, embeddings, metadatas, ids, **kwargs)[source]ο
Add embeddings to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
embeddings (List[List[float]]) β List of list of embedding vectors.
metadatas (List[dict]) β List of metadatas associated with the texts.
kwargs (Any) β vectorstore specific parameters
ids (List[str]) β
Return type
None
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β vectorstore specific parameters
ids (Optional[List[str]]) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search(query, k=4, filter=None, **kwargs)[source]ο
Run similarity search with Hologres with distance.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
similarity_search_by_vector(embedding, k=4, filter=None, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
kwargs (Any) β
Returns
List of Documents most similar to the query vector.
Return type
List[langchain.schema.Document]
similarity_search_with_score(query, k=4, filter=None)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_with_score_by_vector(embedding, k=4, filter=None)[source]ο
Parameters
embedding (List[float]) β
k (int) β
filter (Optional[dict]) β
Return type
List[Tuple[langchain.schema.Document, float]]
classmethod from_texts(texts, embedding, metadatas=None, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_table=False, **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
A Postgres connection string is required.
Either pass it as a parameter
or set the HOLOGRES_CONNECTION_STRING environment variable.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
ndims (int) β
table_name (str) β
ids (Optional[List[str]]) β
pre_delete_table (bool) β
kwargs (Any) β
Return type
langchain.vectorstores.hologres.Hologres
classmethod from_embeddings(text_embeddings, embedding, metadatas=None, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_table=False, **kwargs)[source]ο
Construct Hologres wrapper from raw documents and pre-generated embeddings.
Return VectorStore initialized from documents and embeddings.
A Postgres connection string is required.
Either pass it as a parameter
or set the HOLOGRES_CONNECTION_STRING environment variable.
Example
from langchain import Hologres
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
hologres = Hologres.from_embeddings(text_embedding_pairs, embeddings)
Parameters
text_embeddings (List[Tuple[str, List[float]]]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
ndims (int) β
table_name (str) β
ids (Optional[List[str]]) β
pre_delete_table (bool) β
kwargs (Any) β
Return type
langchain.vectorstores.hologres.Hologres
classmethod from_existing_index(embedding, ndims=1536, table_name='langchain_pg_embedding', pre_delete_table=False, **kwargs)[source]ο
Get an instance of an existing Hologres store. This method will
return the instance of the store without inserting any new
embeddings.
Parameters
embedding (langchain.embeddings.base.Embeddings) β
ndims (int) β
table_name (str) β
pre_delete_table (bool) β
kwargs (Any) β
Return type
langchain.vectorstores.hologres.Hologres
classmethod get_connection_string(kwargs)[source]ο
Parameters
kwargs (Dict[str, Any]) β
Return type
str
classmethod from_documents(documents, embedding, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_collection=False, **kwargs)[source]ο
Return VectorStore initialized from documents and embeddings.
A Postgres connection string is required.
Either pass it as a parameter
or set the HOLOGRES_CONNECTION_STRING environment variable.
Parameters
documents (List[langchain.schema.Document]) β
embedding (langchain.embeddings.base.Embeddings) β
ndims (int) β
table_name (str) β
ids (Optional[List[str]]) β
pre_delete_collection (bool) β
kwargs (Any) β
Return type
langchain.vectorstores.hologres.Hologres
classmethod connection_string_from_db_params(host, port, database, user, password)[source]ο
Return connection string from database parameters.
Parameters
host (str) β
port (int) β
database (str) β
user (str) β
password (str) β
Return type
str
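A sketch of assembling the connection string from database parameters and creating a store from texts; the endpoint and credentials are illustrative, and the connection_string keyword is consumed through **kwargs (per get_connection_string), or can be replaced by setting HOLOGRES_CONNECTION_STRING:
from langchain.vectorstores import Hologres
from langchain.embeddings.openai import OpenAIEmbeddings
connection_string = Hologres.connection_string_from_db_params(
    host="hgpostcn-cn-example-host.hologres.aliyuncs.com",  # illustrative endpoint
    port=80,
    database="langchain_db",
    user="user",
    password="password",
)
embeddings = OpenAIEmbeddings()
db = Hologres.from_texts(
    ["first document", "second document"],
    embeddings,
    connection_string=connection_string,
)
docs = db.similarity_search("first", k=1)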
class langchain.vectorstores.LanceDB(connection, embedding, vector_key='vector', id_key='id', text_key='text')[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around LanceDB vector database.
To use, you should have lancedb python package installed.
Example
db = lancedb.connect('./lancedb')
table = db.open_table('my_table')
vectorstore = LanceDB(table, embedding_function)
vectorstore.add_texts(['text1', 'text2'])
result = vectorstore.similarity_search('text1')
Parameters
connection (Any) β
embedding (Embeddings) β
vector_key (Optional[str]) β
id_key (Optional[str]) β
text_key (Optional[str]) β
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Turn texts into embeddings and add them to the database.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
ids (Optional[List[str]]) β Optional list of ids to associate with the texts.
kwargs (Any) β
Returns
List of ids of the added texts.
Return type
List[str]
similarity_search(query, k=4, **kwargs)[source]ο
Return documents most similar to the query
Parameters
query (str) β String to query the vectorstore with.
k (int) β Number of documents to return.
kwargs (Any) β
Returns
List of documents most similar to the query.
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, connection=None, vector_key='vector', id_key='id', text_key='text', **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
connection (Any) β
vector_key (Optional[str]) β
id_key (Optional[str]) β
text_key (Optional[str]) β
kwargs (Any) β
Return type
langchain.vectorstores.lancedb.LanceDB
class langchain.vectorstores.MatchingEngine(project_id, index, endpoint, embedding, gcs_client, gcs_bucket_name, credentials=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Vertex Matching Engine implementation of the vector store.
While the embeddings are stored in the Matching Engine, the embedded
documents will be stored in GCS.
An existing Index and corresponding Endpoint are preconditions for
using this module.
See usage in docs/modules/indexes/vectorstores/examples/matchingengine.ipynb
Note that if you are planning a real-time implementation, this class is
mostly meant for reading: while reading is a real-time operation,
updating the index takes close to one hour.
Parameters
project_id (str) β
index (MatchingEngineIndex) β
endpoint (MatchingEngineIndexEndpoint) β
embedding (Embeddings) β
gcs_client (storage.Client) β
gcs_bucket_name (str) β
credentials (Optional[Credentials]) β
add_texts(texts, metadatas=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β vectorstore specific parameters.
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β The string that will be used to search for similar documents.
k (int) β The amount of neighbors that will be retrieved.
kwargs (Any) β
Returns
A list of k matching documents.
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]ο
Use from_components instead.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
langchain.vectorstores.matching_engine.MatchingEngine
classmethod from_components(project_id, region, gcs_bucket_name, index_id, endpoint_id, credentials_path=None, embedding=None)[source]ο
Takes the object creation out of the constructor.
Parameters
project_id (str) β The GCP project id.
region (str) β The default location making the API calls. It must have the same location as the GCS bucket and must be regional.
gcs_bucket_name (str) β The location where the vectors will be stored in order for the index to be created.
index_id (str) β The id of the created index.
endpoint_id (str) β The id of the created endpoint.
credentials_path (Optional[str]) β (Optional) The path of the Google credentials on the local file system.
embedding (Optional[langchain.embeddings.base.Embeddings]) β The Embeddings that will be used for embedding the texts.
Returns
A configured MatchingEngine with the texts added to the index.
Return type
langchain.vectorstores.matching_engine.MatchingEngine
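A minimal sketch of building the store with from_components; every ID, region, and bucket name below is a hypothetical placeholder and must match an existing Matching Engine index and endpoint:
from langchain.vectorstores import MatchingEngine
from langchain.embeddings.openai import OpenAIEmbeddings

vector_store = MatchingEngine.from_components(
    project_id="my-gcp-project",            # hypothetical
    region="us-central1",                   # hypothetical; must match the GCS bucket's region
    gcs_bucket_name="my-embeddings-bucket",  # hypothetical
    index_id="1234567890",                  # hypothetical, id of an existing index
    endpoint_id="0987654321",               # hypothetical, id of an existing endpoint
    embedding=OpenAIEmbeddings(),
)
vector_store.add_texts(["A first document.", "A second document."])
docs = vector_store.similarity_search("first", k=1)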
class langchain.vectorstores.Milvus(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', index_params=None, search_params=None, drop_old=False)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around the Milvus vector database.
Parameters
embedding_function (Embeddings) β
collection_name (str) β
connection_args (Optional[dict[str, Any]]) β
consistency_level (str) β
index_params (Optional[dict]) β
search_params (Optional[dict]) β
drop_old (Optional[bool]) β
add_texts(texts, metadatas=None, timeout=None, batch_size=1000, **kwargs)[source]ο
Insert text data into Milvus.
Inserting data when the collection has not been made yet will result
in creating a new Collection. The data of the first entity decides
the schema of the new collection: the dim is extracted from the first
embedding and the columns are decided by the first metadata dict.
Metadata keys will need to be present for all inserted values. At
the moment there is no None equivalent in Milvus.
Parameters
texts (Iterable[str]) β The texts to embed, it is assumed
that they all fit in memory.
metadatas (Optional[List[dict]]) β Metadata dicts attached to each of
the texts. Defaults to None.
timeout (Optional[int]) β Timeout for each batch insert. Defaults
to None.
batch_size (int, optional) β Batch size to use for insertion.
Defaults to 1000.
kwargs (Any) β
Raises
MilvusException β Failure to add texts
Returns
The resulting keys for each inserted element.
Return type
List[str]
similarity_search(query, k=4, param=None, expr=None, timeout=None, **kwargs)[source]ο
Perform a similarity search against the query string.
Parameters
query (str) β The text to search.
k (int, optional) β How many results to return. Defaults to 4.
param (dict, optional) β The search params for the index type.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs (Any) β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
similarity_search_by_vector(embedding, k=4, param=None, expr=None, timeout=None, **kwargs)[source]ο
Perform a similarity search against an embedding vector.
Parameters
embedding (List[float]) β The embedding vector to search.
k (int, optional) β How many results to return. Defaults to 4.
param (dict, optional) β The search params for the index type.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs (Any) β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
similarity_search_with_score(query, k=4, param=None, expr=None, timeout=None, **kwargs)[source]ο
Perform a search on a query string and return results with score.
For more information about the search parameters, take a look at the pymilvus
documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Parameters
query (str) β The text being searched.
k (int, optional) β The number of results to return. Defaults to 4.
param (dict) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs (Any) β Collection.search() keyword arguments.
Return type
List[Tuple[Document, float]]
similarity_search_with_score_by_vector(embedding, k=4, param=None, expr=None, timeout=None, **kwargs)[source]ο
Perform a search on an embedding vector and return results with score.
For more information about the search parameters, take a look at the pymilvus
documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Parameters
embedding (List[float]) β The embedding vector being searched.
k (int, optional) β The number of results to return. Defaults to 4.
param (dict) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs (Any) β Collection.search() keyword arguments.
Returns
Result doc and score.
Return type
List[Tuple[Document, float]]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, param=None, expr=None, timeout=None, **kwargs)[source]ο
Perform a search and return results that are reordered by MMR.
Parameters
query (str) β The text being searched.
k (int, optional) β How many results to give. Defaults to 4.
fetch_k (int, optional) β Total results to select k from.
Defaults to 20.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5
param (dict, optional) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs (Any) β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, param=None, expr=None, timeout=None, **kwargs)[source]ο
Perform a search and return results that are reordered by MMR.
Parameters
embedding (List[float]) β The embedding vector being searched.
k (int, optional) β How many results to give. Defaults to 4.
fetch_k (int, optional) β Total results to select k from.
Defaults to 20.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5
param (dict, optional) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs (Any) β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
classmethod from_texts(texts, embedding, metadatas=None, collection_name='LangChainCollection', connection_args={'host': 'localhost', 'password': '', 'port': '19530', 'secure': False, 'user': ''}, consistency_level='Session', index_params=None, search_params=None, drop_old=False, **kwargs)[source]ο
Create a Milvus collection, index it with HNSW, and insert data.
Parameters
texts (List[str]) β Text data.
embedding (Embeddings) β Embedding function.
metadatas (Optional[List[dict]]) β Metadata for each text if it exists.
Defaults to None.
collection_name (str, optional) β Collection name to use. Defaults to
βLangChainCollectionβ.
connection_args (dict[str, Any], optional) β Connection args to use. Defaults
to DEFAULT_MILVUS_CONNECTION.
consistency_level (str, optional) β Which consistency level to use. Defaults
to βSessionβ.
index_params (Optional[dict], optional) β Which index_params to use. Defaults
to None.
search_params (Optional[dict], optional) β Which search params to use.
Defaults to None.
drop_old (Optional[bool], optional) β Whether to drop the collection with
that name if it exists. Defaults to False.
kwargs (Any) β
Returns
Milvus Vector Store
Return type
Milvus
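A short usage sketch of from_texts, assuming a Milvus instance reachable at localhost:19530; the sample texts and collection settings are illustrative only:
from langchain.vectorstores import Milvus
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vector_db = Milvus.from_texts(
    ["harrison worked at kensho", "bears like honey"],
    embeddings,
    connection_args={"host": "localhost", "port": "19530"},
    collection_name="LangChainCollection",
    drop_old=True,  # drop any pre-existing collection with the same name
)
docs = vector_db.similarity_search("Where did Harrison work?", k=1)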
class langchain.vectorstores.Zilliz(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', index_params=None, search_params=None, drop_old=False)[source]ο
Bases: langchain.vectorstores.milvus.Milvus
Parameters
embedding_function (Embeddings) β
collection_name (str) β
connection_args (Optional[dict[str, Any]]) β
consistency_level (str) β
index_params (Optional[dict]) β
search_params (Optional[dict]) β
drop_old (Optional[bool]) β
classmethod from_texts(texts, embedding, metadatas=None, collection_name='LangChainCollection', connection_args={}, consistency_level='Session', index_params=None, search_params=None, drop_old=False, **kwargs)[source]ο
Create a Zilliz collection, index it with HNSW, and insert data.
Parameters
texts (List[str]) β Text data.
embedding (Embeddings) β Embedding function.
metadatas (Optional[List[dict]]) β Metadata for each text if it exists.
Defaults to None.
collection_name (str, optional) β Collection name to use. Defaults to
βLangChainCollectionβ.
connection_args (dict[str, Any], optional) β Connection args to use. Defaults
to DEFAULT_MILVUS_CONNECTION.
consistency_level (str, optional) β Which consistency level to use. Defaults
to βSessionβ.
index_params (Optional[dict], optional) β Which index_params to use.
Defaults to None.
search_params (Optional[dict], optional) β Which search params to use.
Defaults to None.
drop_old (Optional[bool], optional) β Whether to drop the collection with
that name if it exists. Defaults to False.
kwargs (Any) β
Returns
Zilliz Vector Store
Return type
Zilliz
class langchain.vectorstores.SingleStoreDB(embedding, *, distance_strategy=DistanceStrategy.DOT_PRODUCT, table_name='embeddings', content_field='content', metadata_field='metadata', vector_field='vector', pool_size=5, max_overflow=10, timeout=30, **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
This class serves as a Pythonic interface to the SingleStore DB database.
The prerequisite for using this class is the installation of the singlestoredb
Python package.
The SingleStoreDB vectorstore can be created by providing an embedding function and
the relevant parameters for the database connection, connection pool, and
optionally, the names of the table and the fields to use.
Parameters
embedding (Embeddings) β
distance_strategy (DistanceStrategy) β
table_name (str) β
content_field (str) β
metadata_field (str) β
vector_field (str) β
pool_size (int) β
max_overflow (int) β
timeout (float) β
kwargs (Any) β
vector_fieldο
Pass the rest of the kwargs to the connection.
connection_kwargsο
Add program name and version to connection attributes.
add_texts(texts, metadatas=None, embeddings=None, **kwargs)[source]ο
Add more texts to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings/text to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
Defaults to None.
embeddings (Optional[List[List[float]]], optional) β Optional pre-generated
embeddings. Defaults to None.
kwargs (Any) β
Returns
empty list
Return type
List[str]
similarity_search(query, k=4, filter=None, **kwargs)[source]ο
Returns the most similar indexed documents to the query text.
Uses cosine similarity.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
filter (dict) β A dictionary of metadata fields and values to filter by.
kwargs (Any) β
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
Examples
similarity_search_with_score(query, k=4, filter=None)[source]ο
Return docs most similar to query. Uses cosine similarity.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[dict]) β A dictionary of metadata fields and values to filter by.
Defaults to None.
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
classmethod from_texts(texts, embedding, metadatas=None, distance_strategy=DistanceStrategy.DOT_PRODUCT, table_name='embeddings', content_field='content', metadata_field='metadata', vector_field='vector', pool_size=5, max_overflow=10, timeout=30, **kwargs)[source]ο
Create a SingleStoreDB vectorstore from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new table for the embeddings in SingleStoreDB.
Adds the documents to the newly created table.
This is intended to be a quick way to get started.
.. rubric:: Example
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
distance_strategy (langchain.vectorstores.singlestoredb.DistanceStrategy) β
table_name (str) β
content_field (str) β
metadata_field (str) β
vector_field (str) β
pool_size (int) β
max_overflow (int) β
timeout (float) β
kwargs (Any) β
Return type
langchain.vectorstores.singlestoredb.SingleStoreDB
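A minimal sketch for the Example rubric above, assuming a reachable SingleStoreDB instance; the connection URL is a hypothetical placeholder and the assumption here is that the singlestoredb driver reads it from the SINGLESTOREDB_URL environment variable:
import os
from langchain.vectorstores import SingleStoreDB
from langchain.embeddings.openai import OpenAIEmbeddings

os.environ["SINGLESTOREDB_URL"] = "admin:password@localhost:3306/db"  # hypothetical credentials/URL
vectorstore = SingleStoreDB.from_texts(
    ["foo", "bar", "baz"],
    OpenAIEmbeddings(),
    table_name="embeddings",
)
docs = vectorstore.similarity_search("foo", k=1)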
as_retriever(**kwargs)[source]ο
Parameters
kwargs (Any) β
Return type
langchain.vectorstores.singlestoredb.SingleStoreDBRetriever
class langchain.vectorstores.Clarifai(user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Clarifai AI platformβs vector store.
To use, you should have the clarifai python package installed.
Example
from langchain.vectorstores import Clarifai
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Clarifai("langchain_store", embeddings.embed_query)
Parameters
user_id (Optional[str]) β
app_id (Optional[str]) β
pat (Optional[str]) β
number_of_docs (Optional[int]) β
api_base (Optional[str]) β
Return type
None
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Add texts to the Clarifai vectorstore. This will push the texts
to a Clarifai application.
The application uses a base workflow that creates and stores an embedding for each text.
Make sure you are using a base workflow that is compatible with text
(such as Language Understanding).
Parameters
texts (Iterable[str]) β Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
ids (Optional[List[str]], optional) β Optional list of IDs.
kwargs (Any) β
Returns
List of IDs of the added texts.
Return type
List[str]
similarity_search_with_score(query, k=4, filter=None, namespace=None, **kwargs)[source]ο
Run similarity search with score using Clarifai.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
namespace (Optional[str]) β
kwargs (Any) β
Returns
List of documents most similar to the query text.
Return type
List[Document]
similarity_search(query, k=4, **kwargs)[source]ο
Run similarity search using Clarifai.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
kwargs (Any) β
Returns
List of Documents most similar to the query text.
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding=None, metadatas=None, user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None, **kwargs)[source]ο
Create a Clarifai vectorstore from a list of texts.
Parameters
user_id (str) β User ID.
app_id (str) β App ID.
texts (List[str]) β List of texts to add.
pat (Optional[str]) β Personal access token. Defaults to None.
number_of_docs (Optional[int]) β Number of documents to return. Defaults to None.
api_base (Optional[str]) β API base. Defaults to None.
metadatas (Optional[List[dict]]) β Optional list of metadatas. Defaults to None.
embedding (Optional[langchain.embeddings.base.Embeddings]) β
kwargs (Any) β
Returns
Clarifai vectorstore.
Return type
Clarifai
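A short sketch of from_texts; the user ID, app ID, and personal access token below are hypothetical placeholders, and embeddings are computed server-side by the Clarifai application's base workflow:
from langchain.vectorstores import Clarifai

vectorstore = Clarifai.from_texts(
    texts=["I really enjoy spending time with you", "Zab likes maps"],
    user_id="my-user-id",   # hypothetical
    app_id="my-app-id",     # hypothetical
    pat="MY_CLARIFAI_PAT",  # hypothetical personal access token
    number_of_docs=2,
)
docs = vectorstore.similarity_search("I would love to see you")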
classmethod from_documents(documents, embedding=None, user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None, **kwargs)[source]ο
Create a Clarifai vectorstore from a list of documents.
Parameters
user_id (str) β User ID.
app_id (str) β App ID.
documents (List[Document]) β List of documents to add.
pat (Optional[str]) β Personal access token. Defaults to None.
number_of_docs (Optional[int]) β Number of documents to return during vector search. Defaults to None.
api_base (Optional[str]) β API base. Defaults to None.
embedding (Optional[langchain.embeddings.base.Embeddings]) β
kwargs (Any) β
Returns
Clarifai vectorstore.
Return type
Clarifai
class langchain.vectorstores.OpenSearchVectorSearch(opensearch_url, index_name, embedding_function, **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around OpenSearch as a vector database.
Example
from langchain import OpenSearchVectorSearch
opensearch_vector_search = OpenSearchVectorSearch(
"http://localhost:9200",
"embeddings",
embedding_function
)
Parameters
opensearch_url (str) β
index_name (str) β
embedding_function (Embeddings) β
kwargs (Any) β
add_texts(texts, metadatas=None, ids=None, bulk_size=500, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
ids (Optional[List[str]]) β Optional list of ids to associate with the texts.
bulk_size (int) β Bulk API request count; Default: 500
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to βvector_fieldβ.
text_field: Document field the text of the document is stored in. Defaults to βtextβ.
similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
By default, supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to βvector_fieldβ.
text_field: Document field the text of the document is stored in. Defaults to βtextβ.
metadata_field: Document field that metadata is stored in. Defaults to βmetadataβ. Can be set to a special value β*β to include the entire document.
Optional Args for Approximate Search:
search_type: βapproximate_searchβ; default: βapproximate_searchβ
boolean_filter: A Boolean filter consists of a Boolean query that contains a k-NN query and a filter.
subquery_clause: Query clause on the knn vector field; default: βmustβ
lucene_filter: the Lucene algorithm decides whether to perform an exact k-NN search with pre-filtering or an approximate search with modified post-filtering.
Optional Args for Script Scoring Search:
search_type: βscript_scoringβ; default: βapproximate_searchβ
space_type: βl2β, βl1β, βlinfβ, βcosinesimilβ, βinnerproductβ, βhammingbitβ; default: βl2β
pre_filter: script_score query to pre-filter documents before identifying nearest neighbors; default: {βmatch_allβ: {}}
Optional Args for Painless Scripting Search:
search_type: βpainless_scriptingβ; default: βapproximate_searchβ
space_type: βl2Squaredβ, βl1Normβ, βcosineSimilarityβ; default: βl2Squaredβ
pre_filter: script_score query to pre-filter documents before identifying nearest neighbors; default: {βmatch_allβ: {}}
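A short usage sketch of these optional arguments, assuming an already-constructed OpenSearchVectorSearch instance named docsearch (hypothetical); the filter values are illustrative only:
# Approximate search with a boolean filter
docs = docsearch.similarity_search(
    "What did the president say about Ketanji Brown Jackson?",
    k=10,
    search_type="approximate_search",
    boolean_filter={"bool": {"filter": {"term": {"text": "smuggling"}}}},
)

# Script scoring search with a pre-filter and cosine similarity space
docs = docsearch.similarity_search(
    "What did the president say about Ketanji Brown Jackson?",
    search_type="script_scoring",
    space_type="cosinesimil",
    pre_filter={"bool": {"filter": {"term": {"text": "smuggling"}}}},
)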
similarity_search_with_score(query, k=4, **kwargs)[source]ο
Return docs and their scores most similar to query.
By default, supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
kwargs (Any) β
Returns
List of Documents along with its scores most similar to the query.
Return type
List[Tuple[langchain.schema.Document, float]]
Optional Args: same as similarity_search
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
list[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, bulk_size=500, **kwargs)[source]ο
Construct OpenSearchVectorSearch wrapper from raw documents.
Example
from langchain import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
opensearch_vector_search = OpenSearchVectorSearch.from_texts(
texts,
embeddings,
opensearch_url="http://localhost:9200"
)
OpenSearch by default supports Approximate Search powered by nmslib, faiss
and lucene engines recommended for large datasets. Also supports brute force
search through Script Scoring and Painless Scripting.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to βvector_fieldβ.
text_field: Document field the text of the document is stored in. Defaults to βtextβ.
Optional Keyword Args for Approximate Search:
engine: βnmslibβ, βfaissβ, βluceneβ; default: βnmslibβ
space_type: βl2β, βl1β, βcosinesimilβ, βlinfβ, βinnerproductβ; default: βl2β
ef_search: Size of the dynamic list used during k-NN searches. Higher values lead to more accurate but slower searches; default: 512
ef_construction: Size of the dynamic list used during k-NN graph creation. Higher values lead to more accurate graph but slower indexing speed; default: 512
m: Number of bidirectional links created for each new element. Large impact on memory consumption. Between 2 and 100; default: 16
Keyword Args for Script Scoring or Painless Scripting:
is_appx_search: False
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
bulk_size (int) β
kwargs (Any) β
Return type
langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch
class langchain.vectorstores.MongoDBAtlasVectorSearch(collection, embedding, *, index_name='default', text_key='text', embedding_key='embedding')[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around MongoDB Atlas Vector Search.
To use, you should have both:
- the pymongo python package installed
- a connection string associated with a MongoDB Atlas Cluster having deployed an
Atlas Search index
Example
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.embeddings.openai import OpenAIEmbeddings
from pymongo import MongoClient
mongo_client = MongoClient("<YOUR-CONNECTION-STRING>")
collection = mongo_client["<db_name>"]["<collection_name>"]
embeddings = OpenAIEmbeddings()
vectorstore = MongoDBAtlasVectorSearch(collection, embeddings)
Parameters
collection (Collection[MongoDBDocumentType]) β
embedding (Embeddings) β
index_name (str) β
text_key (str) β
embedding_key (str) β
classmethod from_connection_string(connection_string, namespace, embedding, **kwargs)[source]ο
Parameters
connection_string (str) β
namespace (str) β
embedding (langchain.embeddings.base.Embeddings) β
kwargs (Any) β
Return type
langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch
add_texts(texts, metadatas=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[Dict[str, Any]]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List
similarity_search_with_score(query, *, k=4, pre_filter=None, post_filter_pipeline=None)[source]ο
Return MongoDB documents most similar to query, along with scores.
Use the knnBeta Operator available in MongoDB Atlas Search
This feature is in early access and available only for evaluation purposes, to
validate functionality, and to gather feedback from a small closed group of
early access users. It is not recommended for production deployments as we
may introduce breaking changes.
For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta
Parameters
query (str) β Text to look up documents similar to.
k (int) β Optional Number of Documents to return. Defaults to 4.
pre_filter (Optional[dict]) β Optional Dictionary of argument(s) to prefilter on document
fields.
post_filter_pipeline (Optional[List[Dict]]) β Optional Pipeline of MongoDB aggregation stages
following the knnBeta search.
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search(query, k=4, pre_filter=None, post_filter_pipeline=None, **kwargs)[source]ο
Return MongoDB documents most similar to query.
Use the knnBeta Operator available in MongoDB Atlas Search
This feature is in early access and available only for evaluation purposes, to
validate functionality, and to gather feedback from a small closed group of
early access users. It is not recommended for production deployments as we may
introduce breaking changes.
For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta
Parameters
query (str) β Text to look up documents similar to.
k (int) β Optional Number of Documents to return. Defaults to 4.
pre_filter (Optional[dict]) β Optional Dictionary of argument(s) to prefilter on document
fields.
post_filter_pipeline (Optional[List[Dict]]) β Optional Pipeline of MongoDB aggregation stages
following the knnBeta search.
kwargs (Any) β
Returns
List of Documents most similar to the query and score for each
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, collection=None, **kwargs)[source]ο
Construct MongoDBAtlasVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Adds the documents to a provided MongoDB Atlas Vector Search index(Lucene)
This is intended to be a quick way to get started.
Example
Parameters
texts (List[str]) β
embedding (Embeddings) β
metadatas (Optional[List[dict]]) β
collection (Optional[Collection[MongoDBDocumentType]]) β
kwargs (Any) β
Return type
MongoDBAtlasVectorSearch
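A minimal sketch for the Example rubric above; the connection string, database, and collection names are hypothetical placeholders, and an Atlas Search index is assumed to already exist on the collection:
from pymongo import MongoClient
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.embeddings.openai import OpenAIEmbeddings

client = MongoClient("<YOUR-CONNECTION-STRING>")
collection = client["db_name"]["collection_name"]  # hypothetical database/collection
vectorstore = MongoDBAtlasVectorSearch.from_texts(
    ["apples are red", "the sky is blue"],
    OpenAIEmbeddings(),
    collection=collection,
    index_name="default",  # name of the deployed Atlas Search index
)
docs = vectorstore.similarity_search("What colour are apples?", k=1)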
class langchain.vectorstores.MyScale(embedding, config=None, **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around the MyScale vector database.
You need the clickhouse-connect python package and a valid account
to connect to MyScale.
MyScale can not only search with simple vector indexes; it also supports
complex queries with multiple conditions, constraints and even sub-queries.
For more information, please visit the
[myscale official site](https://docs.myscale.com/en/overview/).
Parameters
embedding (Embeddings) β
config (Optional[MyScaleSettings]) β
kwargs (Any) β
Return type
None
escape_str(value)[source]ο
Parameters
value (str) β
Return type
str
add_texts(texts, metadatas=None, batch_size=32, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
ids (Optional[Iterable[str]]) β Optional list of ids to associate with the texts.
batch_size (int) β Batch size of insertion
metadata β Optional column data to be inserted
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
classmethod from_texts(texts, embedding, metadatas=None, config=None, text_ids=None, batch_size=32, **kwargs)[source]ο
Create a MyScale wrapper from existing texts
Parameters
embedding_function (Embeddings) β Function to extract text embedding
texts (Iterable[str]) β List or tuple of strings to be added
config (MyScaleSettings, Optional) β Myscale configuration
text_ids (Optional[Iterable], optional) β IDs for the texts.
Defaults to None.
batch_size (int, optional) β Batchsize when transmitting data to MyScale.
Defaults to 32.
metadata (List[dict], optional) β Metadata for the texts. Defaults to None.
Other keyword arguments will pass into [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[Dict[Any, Any]]]) β
kwargs (Any) β
Returns
MyScale Index
Return type
langchain.vectorstores.myscale.MyScale
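A short usage sketch of from_texts, assuming connection settings supplied through MyScaleSettings; the host and credentials below are hypothetical placeholders:
from langchain.vectorstores import MyScale, MyScaleSettings
from langchain.embeddings.openai import OpenAIEmbeddings

config = MyScaleSettings(
    host="msc-example.us-east-1.aws.myscale.com",  # hypothetical
    port=443,
    username="user",       # hypothetical
    password="password",   # hypothetical
)
docsearch = MyScale.from_texts(
    ["foo", "bar", "baz"],
    OpenAIEmbeddings(),
    config=config,
)
docs = docsearch.similarity_search("foo", k=1)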
similarity_search(query, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with MyScale
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE β Please do not let end users fill this in, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata.
kwargs (Any) β
Returns
List of Documents
Return type
List[Document]
similarity_search_by_vector(embedding, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with MyScale by vectors
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE β Please do not let end users fill this in, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata.
embedding (List[float]) β
kwargs (Any) β
Returns
List of Documents most similar to the query vector
Return type
List[Document]
similarity_search_with_relevance_scores(query, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with MyScale
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE β Please do not let end users fill this in, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata.
kwargs (Any) β
Returns
List of documents most similar to the query text
and cosine distance in float for each.
Lower score represents more similarity.
Return type
List[Document]
drop()[source]ο
Helper function: Drop data
Return type
None
property metadata_column: strο
pydantic settings langchain.vectorstores.MyScaleSettings[source]ο
Bases: pydantic.env_settings.BaseSettings
MyScale Client Configuration
Attributes:
myscale_host (str) : An URL to connect to the MyScale backend. Defaults to βlocalhostβ.
myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.
username (str) : Username to login. Defaults to None.
password (str) : Password to login. Defaults to None.
index_type (str) : Index type string.
index_param (dict) : Index build parameter.
database (str) : Database name to find the table. Defaults to βdefaultβ.
table (str) : Table name to operate on. Defaults to βvector_tableβ.
metric (str) : Metric to compute distance, supported are βl2β, βcosineβ, βipβ. Defaults to βcosineβ.
column_map (Dict) : Column type map to project column names onto langchain semantics. Must have keys text, id, vector, and must be the same size as the number of columns. For example:
.. code-block:: python

    {βidβ: βtext_idβ,
     βvectorβ: βtext_embeddingβ,
     βtextβ: βtext_plainβ,
     βmetadataβ: βmetadata_dictionary_in_jsonβ}

Defaults to the identity map.
Show JSON schema
{
"title": "MyScaleSettings",
"description": "MyScale Client Configuration\n\nAttribute:\n myscale_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (dict): index build parameter.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n\n {\n 'id': 'text_id',\n 'vector': 'text_embedding',\n 'text': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n\n Defaults to identity map.",
"type": "object",
"properties": {
"host": {
"title": "Host",
"default": "localhost",
"env_names": "{'myscale_host'}",
"type": "string"
},
"port": {
"title": "Port",
"default": 8443,
"env_names": "{'myscale_port'}",
"type": "integer"
},
"username": {
"title": "Username",
"env_names": "{'myscale_username'}",
"type": "string"
},
"password": {
"title": "Password",
"env_names": "{'myscale_password'}",
"type": "string"
},
"index_type": {
"title": "Index Type",
"default": "IVFFLAT",
"env_names": "{'myscale_index_type'}",
"type": "string"
},
"index_param": {
"title": "Index Param",
"env_names": "{'myscale_index_param'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"column_map": {
"title": "Column Map",
"default": {
"id": "id",
"text": "text",
"vector": "vector",
"metadata": "metadata"
},
"env_names": "{'myscale_column_map'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"database": {
"title": "Database",
"default": "default",
"env_names": "{'myscale_database'}",
"type": "string"
},
"table": {
"title": "Table",
"default": "langchain",
"env_names": "{'myscale_table'}",
"type": "string"
},
"metric": {
"title": "Metric",
"default": "cosine",
"env_names": "{'myscale_metric'}",
"type": "string"
}
},
"additionalProperties": false
}
Config
env_file: str = .env
env_file_encoding: str = utf-8
env_prefix: str = myscale_
Fields
column_map (Dict[str, str])
database (str)
host (str)
index_param (Optional[Dict[str, str]])
index_type (str)
metric (str)
password (Optional[str])
port (int)
table (str)
username (Optional[str])
attribute column_map: Dict[str, str] = {'id': 'id', 'metadata': 'metadata', 'text': 'text', 'vector': 'vector'}ο
attribute database: str = 'default'ο
attribute host: str = 'localhost'ο
attribute index_param: Optional[Dict[str, str]] = Noneο
attribute index_type: str = 'IVFFLAT'ο
attribute metric: str = 'cosine'ο
attribute password: Optional[str] = Noneο
attribute port: int = 8443ο
attribute table: str = 'langchain'ο
attribute username: Optional[str] = Noneο
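A brief sketch of configuring these settings through environment variables, using the myscale_ env_prefix from the Config above; the host and credentials are hypothetical placeholders:
import os
from langchain.vectorstores import MyScaleSettings

os.environ["MYSCALE_HOST"] = "msc-example.us-east-1.aws.myscale.com"  # hypothetical
os.environ["MYSCALE_PORT"] = "443"
os.environ["MYSCALE_USERNAME"] = "user"      # hypothetical
os.environ["MYSCALE_PASSWORD"] = "password"  # hypothetical

settings = MyScaleSettings()  # picks up the myscale_* environment variables
print(settings.host, settings.port, settings.metric)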
class langchain.vectorstores.Pinecone(index, embedding_function, text_key, namespace=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Pinecone vector database.
To use, you should have the pinecone-client python package installed.
Example
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
index = pinecone.Index("langchain-demo")
embeddings = OpenAIEmbeddings()
vectorstore = Pinecone(index, embeddings.embed_query, "text")
Parameters
index (Any) β
embedding_function (Callable) β
text_key (str) β
namespace (Optional[str]) β
add_texts(texts, metadatas=None, ids=None, namespace=None, batch_size=32, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
ids (Optional[List[str]]) β Optional list of ids to associate with the texts.
namespace (Optional[str]) β Optional pinecone namespace to add the texts to.
batch_size (int) β
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search_with_score(query, k=4, filter=None, namespace=None)[source]ο
Return pinecone documents most similar to query, along with scores.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[dict]) β Dictionary of argument(s) to filter on metadata
namespace (Optional[str]) β Namespace to search in. Default will search in ββ namespace.
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search(query, k=4, filter=None, namespace=None, **kwargs)[source]ο
Return pinecone documents most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[dict]) β Dictionary of argument(s) to filter on metadata
namespace (Optional[str]) β Namespace to search in. Default will search in ββ namespace.
kwargs (Any) β
Returns
List of Documents most similar to the query and score for each
Return type
List[langchain.schema.Document]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, filter=None, namespace=None, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[dict]) β
namespace (Optional[str]) β
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, filter=None, namespace=None, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[dict]) β
namespace (Optional[str]) β
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, ids=None, batch_size=32, text_key='text', index_name=None, namespace=None, **kwargs)[source]ο
Construct Pinecone wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Adds the documents to a provided Pinecone index
This is intended to be a quick way to get started.
Example
from langchain import Pinecone
from langchain.embeddings import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
embeddings = OpenAIEmbeddings()
pinecone = Pinecone.from_texts(
texts,
embeddings,
index_name="langchain-demo"
)
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
ids (Optional[List[str]]) β
batch_size (int) β
text_key (str) β
index_name (Optional[str]) β
namespace (Optional[str]) β
kwargs (Any) β
Return type
langchain.vectorstores.pinecone.Pinecone
classmethod from_existing_index(index_name, embedding, text_key='text', namespace=None)[source]ο
Load pinecone vectorstore from index name.
Parameters
index_name (str) β
embedding (langchain.embeddings.base.Embeddings) β
text_key (str) β
namespace (Optional[str]) β
Return type
langchain.vectorstores.pinecone.Pinecone
delete(ids)[source]ο
Delete by vector IDs.
Parameters
ids (List[str]) β List of ids to delete.
Return type
None
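A short sketch of loading an existing index and later deleting vectors; the index name, metadata filter, and ids below are hypothetical placeholders:
import pinecone
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings

pinecone.init(api_key="***", environment="...")  # use your own credentials
vectorstore = Pinecone.from_existing_index(
    index_name="langchain-demo",   # hypothetical existing index
    embedding=OpenAIEmbeddings(),
    text_key="text",
)
docs = vectorstore.similarity_search("example query", k=4, filter={"source": "notes"})
vectorstore.delete(ids=["id-1", "id-2"])  # hypothetical vector ids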
class langchain.vectorstores.Qdrant(client, collection_name, embeddings=None, content_payload_key='page_content', metadata_payload_key='metadata', embedding_function=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Qdrant vector database.
To use you should have the qdrant-client package installed.
Example
from qdrant_client import QdrantClient
from langchain import Qdrant
client = QdrantClient()
collection_name = "MyCollection"
qdrant = Qdrant(client, collection_name, embedding_function)
Parameters
client (Any) β
collection_name (str) β
embeddings (Optional[Embeddings]) β
content_payload_key (str) β
metadata_payload_key (str) β
embedding_function (Optional[Callable]) β
CONTENT_KEY = 'page_content'ο
METADATA_KEY = 'metadata'ο
add_texts(texts, metadatas=None, ids=None, batch_size=64, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
ids (Optional[Sequence[str]]) β Optional list of ids to associate with the texts. Ids have to be
uuid-like strings.
batch_size (int) β How many vectors upload per-request.
Default: 64
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search(query, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[MetadataFilter]) β Filter by metadata. Defaults to None.
search_params (Optional[common_types.SearchParams]) β Additional search params
offset (int) β Offset of the first result to return.
May be used to paginate results.
Note: large offset values may cause performance issues.
score_threshold (Optional[float]) β Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the
threshold depending on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
consistency (Optional[common_types.ReadConsistency]) β Read consistency of the search. Defines how many replicas should be
queried before returning the result.
Values:
- int - number of replicas to query, values should present in all
queried replicas
βmajorityβ - query all replicas, but return values present in the majority of replicas
βquorumβ - query the majority of replicas, return values present in all of them
βallβ - query all replicas, and return values present in all replicas
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[Document]
similarity_search_with_score(query, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[MetadataFilter]) β Filter by metadata. Defaults to None.
search_params (Optional[common_types.SearchParams]) β Additional search params
offset (int) β Offset of the first result to return.
May be used to paginate results.
Note: large offset values may cause performance issues.
score_threshold (Optional[float]) β Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the
threshold depending on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
consistency (Optional[common_types.ReadConsistency]) β Read consistency of the search. Defines how many replicas should be
queried before returning the result.
Values:
- int - number of replicas to query, values should present in all
queried replicas
βmajorityβ - query all replicas, but return values present in the majority of replicas
βquorumβ - query the majority of replicas, return values present in all of them
βallβ - query all replicas, and return values present in all replicas
kwargs (Any) β
Returns
List of documents most similar to the query text and cosine
distance in float for each.
Lower score represents more similarity.
Return type
List[Tuple[Document, float]]
similarity_search_by_vector(embedding, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β Embedding vector to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[MetadataFilter]) β Filter by metadata. Defaults to None.
search_params (Optional[common_types.SearchParams]) β Additional search params
offset (int) β Offset of the first result to return.
May be used to paginate results.
Note: large offset values may cause performance issues.
score_threshold (Optional[float]) β Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the
threshold depending on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
consistency (Optional[common_types.ReadConsistency]) β Read consistency of the search. Defines how many replicas should be
queried before returning the result.
Values:
- int - number of replicas to query, values should present in all
queried replicas
βmajorityβ - query all replicas, but return values present in the majority of replicas
βquorumβ - query the majority of replicas, return values present in all of them
βallβ - query all replicas, and return values present in all replicas
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[Document]
similarity_search_with_score_by_vector(embedding, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β Embedding vector to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[MetadataFilter]) β Filter by metadata. Defaults to None.
search_params (Optional[common_types.SearchParams]) β Additional search params
offset (int) β Offset of the first result to return.
May be used to paginate results.
Note: large offset values may cause performance issues.
score_threshold (Optional[float]) β Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the
threshold depending on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
consistency (Optional[common_types.ReadConsistency]) β Read consistency of the search. Defines how many replicas should be
queried before returning the result.
Values:
- int - number of replicas to query, values should present in all
queried replicas
βmajorityβ - query all replicas, but return values present in the majority of replicas
βquorumβ - query the majority of replicas, return values present in all of them
βallβ - query all replicas, and return values present in all replicas
kwargs (Any) β
Returns
List of documents most similar to the query text and cosine
distance in float for each.
Lower score represents more similarity.
Return type
List[Tuple[Document, float]]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, ids=None, location=None, url=None, port=6333, grpc_port=6334, prefer_grpc=False, https=None, api_key=None, prefix=None, timeout=None, host=None, path=None, collection_name=None, distance_func='Cosine', content_payload_key='page_content', metadata_payload_key='metadata', batch_size=64, shard_number=None, replication_factor=None, write_consistency_factor=None, on_disk_payload=None, hnsw_config=None, optimizers_config=None, wal_config=None, quantization_config=None, init_from=None, **kwargs)[source]ο
Construct Qdrant wrapper from a list of texts.
Parameters
texts (List[str]) β A list of texts to be indexed in Qdrant.
embedding (Embeddings) β A subclass of Embeddings, responsible for text vectorization.
metadatas (Optional[List[dict]]) β An optional list of metadata. If provided it has to be of the same
length as a list of texts.
ids (Optional[Sequence[str]]) β Optional list of ids to associate with the texts. Ids have to be
uuid-like strings.
location (Optional[str]) β If :memory: - use in-memory Qdrant instance.
If str - use it as a url parameter.
If None - fallback to relying on host and port parameters.
url (Optional[str]) β either host or str of βOptional[scheme], host, Optional[port],
Optional[prefix]β. Default: None
port (Optional[int]) β Port of the REST API interface. Default: 6333
grpc_port (int) β Port of the gRPC interface. Default: 6334
prefer_grpc (bool) β If true - use gPRC interface whenever possible in custom methods.
Default: False
https (Optional[bool]) β If true - use HTTPS(SSL) protocol. Default: None
api_key (Optional[str]) β API key for authentication in Qdrant Cloud. Default: None
prefix (Optional[str]) β If not None - add prefix to the REST URL path.
Example: service/v1 will result in
http://localhost:6333/service/v1/{qdrant-endpoint} for REST API.
Default: None
timeout (Optional[float]) β Timeout for REST and gRPC API requests.
Default: 5.0 seconds for REST and unlimited for gRPC
host (Optional[str]) β Host name of Qdrant service. If url and host are None, set to
βlocalhostβ. Default: None
path (Optional[str]) β Path in which the vectors will be stored while using local mode.
Default: None
collection_name (Optional[str]) β Name of the Qdrant collection to be used. If not provided,
it will be created randomly. Default: None
distance_func (str) β Distance function. One of: βCosineβ / βEuclidβ / βDotβ.
Default: βCosineβ
content_payload_key (str) β A payload key used to store the content of the document.
Default: βpage_contentβ
metadata_payload_key (str) β A payload key used to store the metadata of the document.
Default: βmetadataβ
batch_size (int) β How many vectors upload per-request.
Default: 64
shard_number (Optional[int]) β Number of shards in collection. Default is 1, minimum is 1.
replication_factor (Optional[int]) β Replication factor for collection. Default is 1, minimum is 1.
Defines how many copies of each shard will be created.
Have effect only in distributed mode.
write_consistency_factor (Optional[int]) β Write consistency factor for collection. Default is 1, minimum is 1.
Defines how many replicas should apply the operation for us to consider
it successful. Increasing this number will make the collection more
resilient to inconsistencies, but will also make it fail if not enough
replicas are available.
Does not have any performance impact.
Have effect only in distributed mode.
on_disk_payload (Optional[bool]) β If true - point`s payload will not be stored in memory.
It will be read from the disk every time it is requested.
This setting saves RAM by (slightly) increasing the response time.
Note: those payload values that are involved in filtering and are
indexed - remain in RAM.
hnsw_config (Optional[common_types.HnswConfigDiff]) β Params for HNSW index
optimizers_config (Optional[common_types.OptimizersConfigDiff]) β Params for optimizer
wal_config (Optional[common_types.WalConfigDiff]) β Params for Write-Ahead-Log
quantization_config (Optional[common_types.QuantizationConfig]) β Params for quantization, if None - quantization will be disabled
init_from (Optional[common_types.InitFrom]) β Use data stored in another collection to initialize this collection
**kwargs β Additional arguments passed directly into REST client initialization
kwargs (Any) β
Return type
Qdrant
This is a user-friendly interface that:
1. Creates embeddings, one for each text
2. Initializes the Qdrant database as an in-memory docstore by default
(and overridable to a remote docstore)
3. Adds the text embeddings to the Qdrant database
This is intended to be a quick way to get started.
Example
from langchain import Qdrant
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
qdrant = Qdrant.from_texts(texts, embeddings, "localhost")
class langchain.vectorstores.Redis(redis_url, index_name, embedding_function, content_key='content', metadata_key='metadata', vector_key='content_vector', relevance_score_fn=<function _default_relevance_score>, **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Redis vector database.
To use, you should have the redis python package installed.
Example
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Redis(
redis_url="redis://username:password@localhost:6379",
index_name="my-index",
embedding_function=embeddings.embed_query,
)
Parameters
redis_url (str) β
index_name (str) β
embedding_function (Callable) β
content_key (str) β
metadata_key (str) β
vector_key (str) β
relevance_score_fn (Optional[Callable[[float], float]]) β
kwargs (Any) β
add_texts(texts, metadatas=None, embeddings=None, batch_size=1000, **kwargs)[source]ο
Add more texts to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings/text to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
Defaults to None.
embeddings (Optional[List[List[float]]], optional) β Optional pre-generated
embeddings. Defaults to None.
keys (List[str]) or ids (List[str]) β Identifiers of entries.
Defaults to None.
batch_size (int, optional) β Batch size to use for writes. Defaults to 1000.
kwargs (Any) β
Returns
List of ids added to the vectorstore
Return type
List[str]
similarity_search(query, k=4, **kwargs)[source]ο
Returns the most similar indexed documents to the query text.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
kwargs (Any) β
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
similarity_search_limit_score(query, k=4, score_threshold=0.2, **kwargs)[source]ο
Returns the most similar indexed documents to the query text within the
score_threshold range.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
score_threshold (float) β The minimum matching score required for a document to be considered a match. Defaults to 0.2. Because the similarity calculation algorithm is based on cosine similarity, the smaller the angle, the higher the similarity.
kwargs (Any) β
Return type
List[langchain.schema.Document]
Returns
A list of documents that are most similar to the query text,
including the match score for each document.
Return type
List[Document]
Parameters
query (str) β
k (int) β
score_threshold (float) β
kwargs (Any) β
Note
If there are no documents that satisfy the score_threshold value,
an empty list is returned.
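A minimal usage sketch, assuming rds is an existing Redis vectorstore instance (the variable name and query are illustrative):
docs = rds.similarity_search_limit_score(
    "What is a vector database?", k=4, score_threshold=0.2
)
for doc in docs:
    print(doc.page_content)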
similarity_search_with_score(query, k=4)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
classmethod from_texts_return_keys(texts, embedding, metadatas=None, index_name=None, content_key='content', metadata_key='metadata', vector_key='content_vector', distance_metric='COSINE', **kwargs)[source]ο
Create a Redis vectorstore from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in Redis.
Adds the documents to the newly created Redis index.
Returns the keys of the newly created documents.
This is intended to be a quick way to get started.
.. rubric:: Example
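A minimal sketch; texts is assumed to be a list of strings, and the redis_url and index_name values are illustrative:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Redis

texts = ["foo", "bar", "baz"]  # example input
vectorstore, keys = Redis.from_texts_return_keys(
    texts,
    OpenAIEmbeddings(),
    redis_url="redis://localhost:6379",
    index_name="my-index",
)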
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
index_name (Optional[str]) β
content_key (str) β
metadata_key (str) β
vector_key (str) β
distance_metric (Literal['COSINE', 'IP', 'L2']) β
kwargs (Any) β
Return type
Tuple[langchain.vectorstores.redis.Redis, List[str]]
classmethod from_texts(texts, embedding, metadatas=None, index_name=None, content_key='content', metadata_key='metadata', vector_key='content_vector', **kwargs)[source]ο
Create a Redis vectorstore from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in Redis.
Adds the documents to the newly created Redis index.
This is intended to be a quick way to get started.
.. rubric:: Example
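A minimal sketch; texts is assumed to be a list of strings, and the connection values are illustrative:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Redis

redisearch = Redis.from_texts(
    texts,
    OpenAIEmbeddings(),
    redis_url="redis://localhost:6379",
    index_name="my-index",
)
docs = redisearch.similarity_search("foo", k=2)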
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
index_name (Optional[str]) β
content_key (str) β
metadata_key (str) β
vector_key (str) β
kwargs (Any) β
Return type
langchain.vectorstores.redis.Redis
static delete(ids, **kwargs)[source]ο
Delete a Redis entry.
Parameters
ids (List[str]) β List of ids (keys) to delete.
kwargs (Any) β
Returns
Whether or not the deletions were successful.
Return type
bool
static drop_index(index_name, delete_documents, **kwargs)[source]ο
Drop a Redis search index.
Parameters
index_name (str) β Name of the index to drop.
delete_documents (bool) β Whether to drop the associated documents.
kwargs (Any) β
Returns
Whether or not the drop was successful.
Return type
bool
classmethod from_existing_index(embedding, index_name, content_key='content', metadata_key='metadata', vector_key='content_vector', **kwargs)[source]ο
Connect to an existing Redis index.
Parameters
embedding (langchain.embeddings.base.Embeddings) β
index_name (str) β
content_key (str) β
metadata_key (str) β
vector_key (str) β
kwargs (Any) β
Return type
langchain.vectorstores.redis.Redis
as_retriever(**kwargs)[source]ο
Parameters
kwargs (Any) β
Return type
langchain.vectorstores.redis.RedisVectorStoreRetriever
class langchain.vectorstores.Rockset(client, embeddings, collection_name, text_key, embedding_key)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Rockset vector database.
To use, you should have the rockset python package installed. Note that to use
this, the collection being used must already exist in your Rockset instance.
You must also ensure you use a Rockset ingest transformation to apply
VECTOR_ENFORCE on the column being used to store embedding_key in the
collection.
See: https://rockset.com/blog/introducing-vector-search-on-rockset/ for more details
Everything below assumes the commons Rockset workspace.
TODO: Add support for workspace args.
Example
from langchain.vectorstores import Rockset
from langchain.embeddings.openai import OpenAIEmbeddings
import rockset
# Make sure you use the right host (region) for your Rockset instance
# and that your API key has read-write access to your collection.
rs = rockset.RocksetClient(host=rockset.Regions.use1a1, api_key="***")
collection_name = "langchain_demo"
embeddings = OpenAIEmbeddings()
vectorstore = Rockset(rs, embeddings, collection_name,
"description", "description_embedding")
Parameters
client (Any) β
embeddings (Embeddings) β
collection_name (str) β
text_key (str) β
embedding_key (str) β
add_texts(texts, metadatas=None, ids=None, batch_size=32, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
ids: Optional list of ids to associate with the texts.
batch_size: Send documents in batches to Rockset.
Returns
List of ids from adding the texts into the vectorstore.
Parameters
texts (Iterable[str]) β
metadatas (Optional[List[dict]]) β
ids (Optional[List[str]]) β
batch_size (int) β
kwargs (Any) β
Return type
List[str]
classmethod from_texts(texts, embedding, metadatas=None, client=None, collection_name='', text_key='', embedding_key='', ids=None, batch_size=32, **kwargs)[source]ο
Create Rockset wrapper with existing texts.
This is intended as a quicker way to get started.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
client (Any) β
collection_name (str) β
text_key (str) β
embedding_key (str) β
ids (Optional[List[str]]) β
batch_size (int) β
kwargs (Any) β
Return type
langchain.vectorstores.rocksetdb.Rockset
class DistanceFunction(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]ο
Bases: enum.Enum
COSINE_SIM = 'COSINE_SIM'ο
EUCLIDEAN_DIST = 'EUCLIDEAN_DIST'ο
DOT_PRODUCT = 'DOT_PRODUCT'ο
order_by()[source]ο
Return type
str
similarity_search_with_relevance_scores(query, k=4, distance_func=DistanceFunction.COSINE_SIM, where_str=None, **kwargs)[source]ο
Perform a similarity search with Rockset
Parameters
query (str) β Text to look up documents similar to.
distance_func (DistanceFunction) β how to compute distance between two
vectors in Rockset.
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β Metadata filters supplied as a
SQL where condition string. Defaults to None.
e.g. "price<=70.0 AND brand='Nintendo'"
NOTE: Do not let end users fill this in directly; always be aware
of SQL injection.
kwargs (Any) β
Returns
List of documents with their relevance score
Return type
List[Tuple[Document, float]]
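A minimal usage sketch, assuming vectorstore is the Rockset instance constructed above; the query and where clause are illustrative and must match your own metadata fields:
results = vectorstore.similarity_search_with_relevance_scores(
    "budget gaming consoles",
    k=4,
    distance_func=Rockset.DistanceFunction.COSINE_SIM,
    where_str="price <= 70.0",  # illustrative SQL where condition
)
for doc, score in results:
    print(score, doc.page_content)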
similarity_search(query, k=4, distance_func=DistanceFunction.COSINE_SIM, where_str=None, **kwargs)[source]ο
Same as similarity_search_with_relevance_scores but
doesn't return the scores.
Parameters
query (str) β
k (int) β
distance_func (DistanceFunction) β
where_str (Optional[str]) β
kwargs (Any) β
Return type
List[Document]
similarity_search_by_vector(embedding, k=4, distance_func=DistanceFunction.COSINE_SIM, where_str=None, **kwargs)[source]ο
Accepts a query_embedding (vector), and returns documents with
similar embeddings.
Parameters
embedding (List[float]) β
k (int) β
distance_func (DistanceFunction) β
where_str (Optional[str]) β
kwargs (Any) β
Return type
List[Document]
similarity_search_by_vector_with_relevance_scores(embedding, k=4, distance_func=DistanceFunction.COSINE_SIM, where_str=None, **kwargs)[source]ο
Accepts a query_embedding (vector), and returns documents with
similar embeddings along with their relevance scores.
Parameters
embedding (List[float]) β
k (int) β
distance_func (DistanceFunction) β
where_str (Optional[str]) β
kwargs (Any) β
Return type
List[Tuple[Document, float]]
delete_texts(ids)[source]ο
Delete a list of docs from the Rockset collection
Parameters
ids (List[str]) β
Return type
None
class langchain.vectorstores.SKLearnVectorStore(embedding, *, persist_path=None, serializer='json', metric='cosine', **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
A simple in-memory vector store based on the scikit-learn library
NearestNeighbors implementation.
Parameters
embedding (langchain.embeddings.base.Embeddings) β
persist_path (Optional[str]) β
serializer (Literal['json', 'bson', 'parquet']) β
metric (str) β
kwargs (Any) β
Return type
None
persist()[source]ο
Return type
None
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β vectorstore specific parameters
ids (Optional[List[str]]) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search_with_score(query, *, k=4, **kwargs)[source]ο
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param embedding: Embedding to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
Parameters
embedding (List[float]) β
k (int) β
fetch_k (int) β
lambda_mult (float) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param query: Text to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
Parameters
query (str) β
k (int) β
fetch_k (int) β
lambda_mult (float) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, ids=None, persist_path=None, **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
ids (Optional[List[str]]) β
persist_path (Optional[str]) β
kwargs (Any) β
Return type
langchain.vectorstores.sklearn.SKLearnVectorStore
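A minimal sketch of creating, persisting, and querying an SKLearnVectorStore; the texts and file path are illustrative:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SKLearnVectorStore

texts = ["foo", "bar", "baz"]  # example input
store = SKLearnVectorStore.from_texts(
    texts,
    OpenAIEmbeddings(),
    persist_path="/tmp/sklearn_store.json",  # illustrative path
    serializer="json",
)
store.persist()
docs = store.similarity_search("foo", k=2)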
class langchain.vectorstores.StarRocks(embedding, config=None, **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around StarRocks vector database.
To use, you need the pymysql python package and a valid account
to connect to StarRocks.
Currently, StarRocks only implements the cosine_similarity function to
compute the distance between two vectors, and there is no vector index yet,
so every vector has to be scanned to compute the spatial distance.
For more information, please visit the [StarRocks official site](https://www.starrocks.io/)
and [StarRocks GitHub](https://github.com/StarRocks/starrocks).
Parameters
embedding (Embeddings) β
config (Optional[StarRocksSettings]) β
kwargs (Any) β
Return type
None
escape_str(value)[source]ο
Parameters
value (str) β
Return type
str
add_texts(texts, metadatas=None, batch_size=32, ids=None, **kwargs)[source]ο
Insert more texts through the embeddings and add to the VectorStore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the VectorStore.
ids (Optional[Iterable[str]]) β Optional list of ids to associate with the texts.
batch_size (int) β Batch size of insertion
metadata β Optional column data to be inserted
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Returns
List of ids from adding the texts into the VectorStore.
Return type
List[str]
classmethod from_texts(texts, embedding, metadatas=None, config=None, text_ids=None, batch_size=32, **kwargs)[source]ο
Create StarRocks wrapper with existing texts
Parameters
embedding (Embeddings) β Embedding function used to embed the texts
texts (Iterable[str]) β List or tuple of strings to be added
config (StarRocksSettings, Optional) β StarRocks configuration
text_ids (Optional[Iterable], optional) β IDs for the texts.
Defaults to None.
batch_size (int, optional) β Batch size when transmitting data to StarRocks.
Defaults to 32.
metadatas (Optional[List[Dict[Any, Any]]], optional) β Metadata for the texts.
Defaults to None.
kwargs (Any) β
Returns
StarRocks Index
Return type
langchain.vectorstores.starrocks.StarRocks
similarity_search(query, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with StarRocks
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE: Do not let end users fill this in directly; always be aware
of SQL injection. When filtering on metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for the metadata column is metadata.
kwargs (Any) β
Returns
List of Documents
Return type
List[Document]
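A minimal usage sketch, assuming starrocks is an existing StarRocks vectorstore instance and that the stored texts carry a metadata field named source (both names are illustrative):
docs = starrocks.similarity_search(
    "how do vector indexes work?",
    k=4,
    where_str="metadata.source = 'docs'",  # illustrative filter on the metadata column
)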
similarity_search_by_vector(embedding, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with StarRocks by vectors
Parameters
embedding (List[float]) β Query embedding vector.
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE: Do not let end users fill this in directly; always be aware
of SQL injection. When filtering on metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for the metadata column is metadata.
kwargs (Any) β
Returns
List of Documents
Return type
List[Document]
similarity_search_with_relevance_scores(query, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with StarRocks
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE: Do not let end users fill this in directly; always be aware
of SQL injection. When filtering on metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for the metadata column is metadata.
kwargs (Any) β
Returns
List of documents
Return type
List[Document]
drop()[source]ο
Helper function: Drop data
Return type
None
property metadata_column: strο
class langchain.vectorstores.SupabaseVectorStore(client, embedding, table_name, query_name=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
VectorStore for a Supabase postgres database. Assumes you have the pgvector
extension installed and a match_documents (or similar) function. For more details:
https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
You can implement your own match_documents function in order to limit the search
space to a subset of documents based on your own authorization or business logic.
Note that the Supabase Python client does not yet support async operations.
If youβd like to use max_marginal_relevance_search, please review the instructions
below on modifying the match_documents function to return matched embeddings.
Parameters
client (supabase.client.Client) β
embedding (Embeddings) β
table_name (str) β
query_name (Union[str, None]) β
Return type
None
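A minimal construction sketch, assuming a Supabase project with the pgvector extension, a documents table, and a match_documents function as described above; the environment variable names are illustrative:
import os

from supabase.client import create_client
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])
vector_store = SupabaseVectorStore(
    client=supabase,
    embedding=OpenAIEmbeddings(),
    table_name="documents",
    query_name="match_documents",
)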
table_name: strο
query_name: strο
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict[Any, Any]]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β vectorstore specific parameters
ids (Optional[List[str]]) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
classmethod from_texts(texts, embedding, metadatas=None, client=None, table_name='documents', query_name='match_documents', ids=None, **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) β
embedding (Embeddings) β
metadatas (Optional[List[dict]]) β
client (Optional[supabase.client.Client]) β
table_name (Optional[str]) β
query_name (Union[str, None]) β
ids (Optional[List[str]]) β
kwargs (Any) β
Return type
SupabaseVectorStore
add_vectors(vectors, documents, ids)[source]ο
Parameters
vectors (List[List[float]]) β
documents (List[langchain.schema.Document]) β
ids (List[str]) β
Return type
List[str]
similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
similarity_search_by_vector(embedding, k=4, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
kwargs (Any) β
Returns
List of Documents most similar to the query vector.
Return type
List[langchain.schema.Document]
similarity_search_with_relevance_scores(query, k=4, **kwargs)[source]ο
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query (str) β input text
k (int) β Number of Documents to return. Defaults to 4.
**kwargs β kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
kwargs (Any) β
Returns
List of Tuples of (doc, similarity_score)
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_by_vector_with_relevance_scores(query, k)[source]ο
Parameters
query (List[float]) β
k (int) β
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_by_vector_returning_embeddings(query, k)[source]ο
Parameters
query (List[float]) β
k (int) β
Return type
List[Tuple[langchain.schema.Document, float, numpy.ndarray[numpy.float32, Any]]]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search requires that query_name returns matched
embeddings alongside the match documents. The following function
demonstrates how to do this:
```sql
CREATE FUNCTION match_documents_embeddings(query_embedding vector(1536),
                                           match_count int)
RETURNS TABLE(
    id bigint,
    content text,
    metadata jsonb,
    embedding vector(1536),
    similarity float)
LANGUAGE plpgsql
AS $$
# variable_conflict use_column
BEGIN
    RETURN query
    SELECT
        id,
        content,
        metadata,
        embedding,
        1 - (docstore.embedding <=> query_embedding) AS similarity
    FROM
        docstore
    ORDER BY
        docstore.embedding <=> query_embedding
    LIMIT match_count;
END;
$$;
```
delete(ids)[source]ο
Delete by vector IDs.
Parameters
ids (List[str]) β List of ids to delete.
Return type
None
class langchain.vectorstores.Tair(embedding_function, url, index_name, content_key='content', metadata_key='metadata', search_params=None, **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Tair Vector store.
Parameters
embedding_function (Embeddings) β
url (str) β
index_name (str) β
content_key (str) β
metadata_key (str) β
search_params (Optional[dict]) β
kwargs (Any) β
create_index_if_not_exist(dim, distance_type, index_type, data_type, **kwargs)[source]ο
Parameters
dim (int) β
distance_type (str) β
index_type (str) β
data_type (str) β
kwargs (Any) β
Return type
bool
add_texts(texts, metadatas=None, **kwargs)[source]ο
Add texts data to an existing index.
Parameters
texts (Iterable[str]) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
List[str]
similarity_search(query, k=4, **kwargs)[source]ο
Returns the most similar indexed documents to the query text.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
kwargs (Any) β
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
classmethod from_texts(texts, embedding, metadatas=None, index_name='langchain', content_key='content', metadata_key='metadata', **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
index_name (str) β
content_key (str) β
metadata_key (str) β
kwargs (Any) β
Return type
langchain.vectorstores.tair.Tair
classmethod from_documents(documents, embedding, metadatas=None, index_name='langchain', content_key='content', metadata_key='metadata', **kwargs)[source]ο
Return VectorStore initialized from documents and embeddings.
Parameters
documents (List[langchain.schema.Document]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
index_name (str) β
content_key (str) β
metadata_key (str) β
kwargs (Any) β
Return type
langchain.vectorstores.tair.Tair
static drop_index(index_name='langchain', **kwargs)[source]ο
Drop an existing index.
Parameters
index_name (str) β Name of the index to drop.
kwargs (Any) β
Returns
True if the index is dropped successfully.
Return type
bool
classmethod from_existing_index(embedding, index_name='langchain', content_key='content', metadata_key='metadata', **kwargs)[source]ο
Connect to an existing Tair index.
Parameters
embedding (langchain.embeddings.base.Embeddings) β
index_name (str) β
content_key (str) β
metadata_key (str) β
kwargs (Any) β
Return type
langchain.vectorstores.tair.Tair
class langchain.vectorstores.Tigris(client, embeddings, index_name)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Parameters
client (TigrisClient) β
embeddings (Embeddings) β
index_name (str) β
property search_index: TigrisVectorStoreο
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
ids (Optional[List[str]]) β Optional list of ids for documents.
Ids will be autogenerated if not provided.
kwargs (Any) β vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search(query, k=4, filter=None, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β
k (int) β
filter (Optional[TigrisFilter]) β
kwargs (Any) β
Return type
List[Document]
similarity_search_with_score(query, k=4, filter=None)[source]ο
Run similarity search with Tigris with distance.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[TigrisFilter]) β Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text, with distance as a float.
Return type
List[Tuple[Document, float]]
classmethod from_texts(texts, embedding, metadatas=None, ids=None, client=None, index_name=None, **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) β
embedding (Embeddings) β
metadatas (Optional[List[dict]]) β
ids (Optional[List[str]]) β
client (Optional[TigrisClient]) β
index_name (Optional[str]) β
kwargs (Any) β
Return type
Tigris
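A minimal sketch, assuming a TigrisClient has already been configured elsewhere and is available as tigris_client (the variable and index names are illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Tigris

texts = ["foo", "bar", "baz"]  # example input
store = Tigris.from_texts(
    texts,
    OpenAIEmbeddings(),
    client=tigris_client,
    index_name="my_embeddings",
)
docs = store.similarity_search("foo", k=2)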
class langchain.vectorstores.Typesense(typesense_client, embedding, *, typesense_collection_name=None, text_key='text')[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Typesense vector search.
To use, you should have the typesense python package installed.
Example
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Typesense
import typesense
node = {
"host": "localhost", # For Typesense Cloud use xxx.a1.typesense.net
"port": "8108", # For Typesense Cloud use 443
"protocol": "http" # For Typesense Cloud use https
}
typesense_client = typesense.Client(
{
"nodes": [node],
"api_key": "<API_KEY>",
"connection_timeout_seconds": 2
}
)
typesense_collection_name = "langchain-memory"
embedding = OpenAIEmbeddings()
vectorstore = Typesense(
typesense_client=typesense_client,
embedding=embedding,
typesense_collection_name=typesense_collection_name,
text_key="text",
)
Parameters
typesense_client (Client) β
embedding (Embeddings) β
typesense_collection_name (Optional[str]) β
text_key (str) β
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embedding and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
ids (Optional[List[str]]) β Optional list of ids to associate with the texts.
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search_with_score(query, k=10, filter='')[source]ο
Return typesense documents most similar to query, along with scores.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 10.
A minimum of 10 results will be returned.
filter (Optional[str]) β typesense filter_by expression to filter documents on
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
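A minimal usage sketch, assuming docsearch is the Typesense vectorstore created above; the filter_by expression is illustrative and must reference fields stored with your documents:
results = docsearch.similarity_search_with_score(
    "What is a vector database?",
    k=10,
    filter="metadata.source:=docs",  # illustrative Typesense filter_by expression
)
for doc, score in results:
    print(score, doc.page_content)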
similarity_search(query, k=10, filter='', **kwargs)[source]ο
Return typesense documents most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 10.
A minimum of 10 results will be returned.
filter (Optional[str]) β typesense filter_by expression to filter documents on
kwargs (Any) β
Returns
List of Documents most similar to the query and score for each
Return type
List[langchain.schema.Document]
classmethod from_client_params(embedding, *, host='localhost', port='8108', protocol='http', typesense_api_key=None, connection_timeout_seconds=2, **kwargs)[source]ο
Initialize Typesense directly from client parameters.
Example
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Typesense
# Pass in typesense_api_key as kwarg or set env var "TYPESENSE_API_KEY".
vectorstore = Typesense.from_client_params(
OpenAIEmbeddings(),
host="localhost",
port="8108",
protocol="http",
typesense_collection_name="langchain-memory",
)
Parameters
embedding (langchain.embeddings.base.Embeddings) β
host (str) β
port (Union[str, int]) β
protocol (str) β
typesense_api_key (Optional[str]) β
connection_timeout_seconds (int) β
kwargs (Any) β
Return type
langchain.vectorstores.typesense.Typesense
classmethod from_texts(texts, embedding, metadatas=None, ids=None, typesense_client=None, typesense_client_params=None, typesense_collection_name=None, text_key='text', **kwargs)[source]ο
Construct Typesense wrapper from raw text.
Parameters
texts (List[str]) β
embedding (Embeddings) β
metadatas (Optional[List[dict]]) β
ids (Optional[List[str]]) β
typesense_client (Optional[Client]) β
typesense_client_params (Optional[dict]) β
typesense_collection_name (Optional[str]) β
text_key (str) β
kwargs (Any) β
Return type
Typesense
class langchain.vectorstores.Vectara(vectara_customer_id=None, vectara_corpus_id=None, vectara_api_key=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Implementation of Vector Store using Vectara (https://vectara.com).
.. rubric:: Example
from langchain.vectorstores import Vectara
vectorstore = Vectara(
vectara_customer_id=vectara_customer_id,
vectara_corpus_id=vectara_corpus_id,
vectara_api_key=vectara_api_key
)
Parameters
vectara_customer_id (Optional[str]) β
vectara_corpus_id (Optional[str]) β
vectara_api_key (Optional[str]) β
add_texts(texts, metadatas=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search_with_score(query, k=5, lambda_val=0.025, filter=None, n_sentence_context=0, **kwargs)[source]ο
Return Vectara documents most similar to query, along with scores.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 5.
lambda_val (float) β lexical match parameter for hybrid search.
filter (Optional[str]) β A filter string to apply on metadata. For example, a
filter can be "doc.rating > 3.0 and part.lang = 'deu'"; see
https://docs.vectara.com/docs/search-apis/sql/filter-overview
for more details.
n_sentence_context (int) β number of sentences before/after the matching segment
to add
kwargs (Any) β
Returns
List of Documents most similar to the query and score for each.
Return type
List[Tuple[langchain.schema.Document, float]]
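A minimal usage sketch, assuming vectorstore is the Vectara instance created above; the filter string is illustrative and must match metadata defined in your corpus:
results = vectorstore.similarity_search_with_score(
    "What is the return policy?",
    k=5,
    filter="doc.rating > 3.0",  # illustrative metadata filter
    n_sentence_context=2,
)
for doc, score in results:
    print(score, doc.page_content)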
similarity_search(query, k=5, lambda_val=0.025, filter=None, n_sentence_context=0, **kwargs)[source]ο
Return Vectara documents most similar to the query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 5.
filter (Optional[str]) β A filter string to apply on metadata. For example, a
filter can be "doc.rating > 3.0 and part.lang = 'deu'"; see
https://docs.vectara.com/docs/search-apis/sql/filter-overview for more
details.
n_sentence_context (int) β number of sentences before/after the matching segment
to add
lambda_val (float) β
kwargs (Any) β
Returns
List of Documents most similar to the query
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding=None, metadatas=None, **kwargs)[source]ο
Construct Vectara wrapper from raw documents.
This is intended to be a quick way to get started.
.. rubric:: Example
from langchain import Vectara
vectara = Vectara.from_texts(
texts,
vectara_customer_id=customer_id,
vectara_corpus_id=corpus_id,
vectara_api_key=api_key,
)
Parameters
texts (List[str]) β
embedding (Optional[langchain.embeddings.base.Embeddings]) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
langchain.vectorstores.vectara.Vectara
as_retriever(**kwargs)[source]ο
Parameters
kwargs (Any) β
Return type
langchain.vectorstores.vectara.VectaraRetriever
class langchain.vectorstores.VectorStore[source]ο
Bases: abc.ABC
Interface for vector stores.
abstract add_texts(texts, metadatas=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
delete(ids)[source]ο
Delete by vector ID.
Parameters
ids (List[str]) β List of ids to delete.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
async aadd_texts(texts, metadatas=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
List[str]
add_documents(documents, **kwargs)[source]ο
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[langchain.schema.Document]) β Documents to add to the vectorstore.
kwargs (Any) β
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_documents(documents, **kwargs)[source]ο
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[langchain.schema.Document]) β Documents to add to the vectorstore.
kwargs (Any) β
Returns
List of IDs of the added texts.
Return type
List[str]
search(query, search_type, **kwargs)[source]ο
Return docs most similar to query using specified search type.
Parameters
query (str) β
search_type (str) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
async asearch(query, search_type, **kwargs)[source]ο
Return docs most similar to query using specified search type.
Parameters
query (str) β
search_type (str) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
abstract similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
similarity_search_with_relevance_scores(query, k=4, **kwargs)[source]ο
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query (str) β input text
k (int) β Number of Documents to return. Defaults to 4.
**kwargs β kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
kwargs (Any) β
Returns
List of Tuples of (doc, similarity_score)
Return type
List[Tuple[langchain.schema.Document, float]]
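A minimal usage sketch against any concrete VectorStore implementation; db stands for such an instance, and the threshold value is illustrative:
docs_and_scores = db.similarity_search_with_relevance_scores(
    "query text", k=4, score_threshold=0.8
)
for doc, score in docs_and_scores:
    print(f"{score:.3f}", doc.page_content)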
async asimilarity_search_with_relevance_scores(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[Tuple[langchain.schema.Document, float]]
async asimilarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
similarity_search_by_vector(embedding, k=4, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
kwargs (Any) β
Returns
List of Documents most similar to the query vector.
Return type
List[langchain.schema.Document]
async asimilarity_search_by_vector(embedding, k=4, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β
k (int) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
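A minimal usage sketch; trading fetch_k and lambda_mult against k controls how much diversity is injected (db again stands for a concrete VectorStore instance):
docs = db.max_marginal_relevance_search(
    "query text",
    k=4,              # number of diverse documents to return
    fetch_k=20,       # candidate pool passed to the MMR algorithm
    lambda_mult=0.5,  # 0 = maximum diversity, 1 = minimum diversity
)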
async amax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Parameters
query (str) β
k (int) β
fetch_k (int) β
lambda_mult (float) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
async amax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Parameters
embedding (List[float]) β
k (int) β
fetch_k (int) β
lambda_mult (float) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
classmethod from_documents(documents, embedding, **kwargs)[source]ο
Return VectorStore initialized from documents and embeddings.
Parameters
documents (List[langchain.schema.Document]) β
embedding (langchain.embeddings.base.Embeddings) β
kwargs (Any) β
Return type
langchain.vectorstores.base.VST
async classmethod afrom_documents(documents, embedding, **kwargs)[source]ο
Return VectorStore initialized from documents and embeddings.
Parameters
documents (List[langchain.schema.Document]) β
embedding (langchain.embeddings.base.Embeddings) β
kwargs (Any) β
Return type
langchain.vectorstores.base.VST
abstract classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
langchain.vectorstores.base.VST
async classmethod afrom_texts(texts, embedding, metadatas=None, **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
langchain.vectorstores.base.VST
as_retriever(**kwargs)[source]ο
Parameters
kwargs (Any) β
Return type
langchain.vectorstores.base.VectorStoreRetriever
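A minimal usage sketch; the retriever wraps the vectorstore's search methods behind the common retriever interface (db stands for a concrete VectorStore instance, and the search_kwargs shown are illustrative):
retriever = db.as_retriever(search_kwargs={"k": 4})
relevant_docs = retriever.get_relevant_documents("query text")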
class langchain.vectorstores.Weaviate(client, index_name, text_key, embedding=None, attributes=None, relevance_score_fn=<function _default_score_normalizer>, by_text=True)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Weaviate vector database.
To use, you should have the weaviate-client python package installed.
Example
import os
import weaviate
from langchain.vectorstores import Weaviate
client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
weaviate = Weaviate(client, index_name, text_key)
Parameters
client (Any) β
index_name (str) β
text_key (str) β
embedding (Optional[Embeddings]) β
attributes (Optional[List[str]]) β
relevance_score_fn (Optional[Callable[[float], float]]) β
by_text (bool) β
add_texts(texts, metadatas=None, **kwargs)[source]ο
Upload texts with metadata (properties) to Weaviate.
Parameters
texts (Iterable[str]) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
List[str]
similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
similarity_search_by_text(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
similarity_search_by_vector(embedding, k=4, **kwargs)[source]ο
Look up similar documents by embedding vector in Weaviate.
Parameters
embedding (List[float]) β
k (int) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
similarity_search_with_score(query, k=4, **kwargs)[source]ο
Return list of documents most similar to the query
text and cosine distance in float for each.
Lower score represents more similarity.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[Tuple[langchain.schema.Document, float]]
classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]ο
Construct Weaviate wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Weaviate instance.
Adds the documents to the newly created Weaviate index.
This is intended to be a quick way to get started.
Example
from langchain.vectorstores.weaviate import Weaviate
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
weaviate = Weaviate.from_texts(
texts,
embeddings,
weaviate_url="http://localhost:8080"
)
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
langchain.vectorstores.weaviate.Weaviate
delete(ids)[source]ο
Delete by vector IDs.
Parameters
ids (List[str]) β List of ids to delete.
Return type
None | https://api.python.langchain.com/en/latest/modules/vectorstores.html |
cd96d496-587c-4066-8f33-4134e0c470ce | LLMsο
Wrappers on top of large language models APIs.
class langchain.llms.AI21(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model='j2-jumbo-instruct', temperature=0.7, maxTokens=256, minTokens=0, topP=1.0, presencePenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), countPenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), frequencyPenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), numResults=1, logitBias=None, ai21_api_key=None, stop=None, base_url=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around AI21 large language models.
To use, you should have the environment variable AI21_API_KEY
set with your API key.
Example
from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model (str) β
temperature (float) β
maxTokens (int) β
minTokens (int) β
topP (float) β
presencePenalty (langchain.llms.ai21.AI21PenaltyData) β
countPenalty (langchain.llms.ai21.AI21PenaltyData) β
frequencyPenalty (langchain.llms.ai21.AI21PenaltyData) β
numResults (int) β
logitBias (Optional[Dict[str, float]]) β
ai21_api_key (Optional[str]) β
stop (Optional[List[str]]) β
base_url (Optional[str]) β
Return type
None
attribute base_url: Optional[str] = Noneο
Base URL to use; if None, it is chosen based on the model name.
attribute countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)ο
Penalizes repeated tokens according to count.
attribute frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)ο
Penalizes repeated tokens according to frequency.
attribute logitBias: Optional[Dict[str, float]] = Noneο
Adjust the probability of specific tokens being generated.
attribute maxTokens: int = 256ο
The maximum number of tokens to generate in the completion.
attribute minTokens: int = 0ο
The minimum number of tokens to generate in the completion.
attribute model: str = 'j2-jumbo-instruct'ο
Model name to use.
attribute numResults: int = 1ο
How many completions to generate for each prompt.
attribute presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)ο
Penalizes repeated tokens.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute topP: float = 1.0ο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
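A minimal usage sketch for calling the model directly; it assumes AI21_API_KEY is set in the environment, and the prompt and stop sequence are illustrative:
from langchain.llms import AI21

ai21 = AI21(model="j2-jumbo-instruct", temperature=0.7, maxTokens=128)
completion = ai21("Write a tagline for an ice cream shop.", stop=["\n\n"])
print(completion)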
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.AlephAlpha(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='luminous-base', maximum_tokens=64, temperature=0.0, top_k=0, top_p=0.0, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalties_include_prompt=False, use_multiplicative_presence_penalty=False, penalty_bias=None, penalty_exceptions=None, penalty_exceptions_include_stop_sequences=None, best_of=None, n=1, logit_bias=None, log_probs=None, tokens=False, disable_optimizations=False, minimum_tokens=0, echo=False, use_multiplicative_frequency_penalty=False, sequence_penalty=0.0, sequence_penalty_min_length=2, use_multiplicative_sequence_penalty=False, completion_bias_inclusion=None, completion_bias_inclusion_first_token_only=False, completion_bias_exclusion=None, completion_bias_exclusion_first_token_only=False, contextual_control_threshold=None, control_log_additive=True, repetition_penalties_include_completion=True, raw_completion=False, aleph_alpha_api_key=None, stop_sequences=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the
environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Parameters are explained more in depth here:
https://github.com/Aleph-Alpha/aleph-alpha-client/blob/c14b7dd2b4325c7da0d6a119f6e76385800e097b/aleph_alpha_client/completion.py#L10
Example
from langchain.llms import AlephAlpha
aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (Optional[str]) β
maximum_tokens (int) β
temperature (float) β
top_k (int) β
top_p (float) β
presence_penalty (float) β
frequency_penalty (float) β
repetition_penalties_include_prompt (Optional[bool]) β
use_multiplicative_presence_penalty (Optional[bool]) β
penalty_bias (Optional[str]) β
penalty_exceptions (Optional[List[str]]) β
penalty_exceptions_include_stop_sequences (Optional[bool]) β
best_of (Optional[int]) β
n (int) β
logit_bias (Optional[Dict[int, float]]) β
log_probs (Optional[int]) β
tokens (Optional[bool]) β
disable_optimizations (Optional[bool]) β
minimum_tokens (Optional[int]) β
echo (bool) β
use_multiplicative_frequency_penalty (bool) β
sequence_penalty (float) β
sequence_penalty_min_length (int) β
use_multiplicative_sequence_penalty (bool) β
completion_bias_inclusion (Optional[Sequence[str]]) β
completion_bias_inclusion_first_token_only (bool) β
completion_bias_exclusion (Optional[Sequence[str]]) β
completion_bias_exclusion_first_token_only (bool) β
contextual_control_threshold (Optional[float]) β
control_log_additive (Optional[bool]) β
repetition_penalties_include_completion (bool) β
raw_completion (bool) β
aleph_alpha_api_key (Optional[str]) β
stop_sequences (Optional[List[str]]) β
Return type
None
attribute aleph_alpha_api_key: Optional[str] = Noneο
API key for Aleph Alpha API.
attribute best_of: Optional[int] = Noneο
Returns the completion with the "best of" result
(highest log probability per token).
attribute completion_bias_exclusion_first_token_only: bool = Falseο
Only consider the first token for the completion_bias_exclusion.
attribute contextual_control_threshold: Optional[float] = Noneο
If set to None, attention control parameters only apply to those tokens that have
explicitly been set in the request.
If set to a non-None value, control parameters are also applied to similar tokens.
attribute control_log_additive: Optional[bool] = Trueο
True: apply control by adding the log(control_factor) to attention scores.
False: (attention_scores - attention_scores.min(-1)) * control_factor
attribute echo: bool = Falseο
Echo the prompt in the completion.
attribute frequency_penalty: float = 0.0ο
Penalizes repeated tokens according to frequency.
attribute log_probs: Optional[int] = Noneο
Number of top log probabilities to be returned for each generated token.
attribute logit_bias: Optional[Dict[int, float]] = Noneο
The logit bias allows you to influence the likelihood of generating tokens.
attribute maximum_tokens: int = 64ο
The maximum number of tokens to be generated.
attribute minimum_tokens: Optional[int] = 0ο
Generate at least this number of tokens.
attribute model: Optional[str] = 'luminous-base'ο
Model name to use.
attribute n: int = 1ο
How many completions to generate for each prompt.
attribute penalty_bias: Optional[str] = Noneο
Penalty bias for the completion.
attribute penalty_exceptions: Optional[List[str]] = Noneο
List of strings that may be generated without penalty,
regardless of other penalty settings
attribute penalty_exceptions_include_stop_sequences: Optional[bool] = Noneο
Should stop_sequences be included in penalty_exceptions.
attribute presence_penalty: float = 0.0ο
Penalizes repeated tokens.
attribute raw_completion: bool = Falseο
Force the raw completion of the model to be returned.
attribute repetition_penalties_include_completion: bool = Trueο
Flag deciding whether presence penalty or frequency penalty
are updated from the completion.
attribute repetition_penalties_include_prompt: Optional[bool] = Falseο
Flag deciding whether presence penalty or frequency penalty are
updated from the prompt.
attribute stop_sequences: Optional[List[str]] = Noneο
Stop sequences to use.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.0ο
A non-negative float that tunes the degree of randomness in generation.
attribute tokens: Optional[bool] = Falseο
Return the tokens of the completion.
attribute top_k: int = 0ο
Number of most likely tokens to consider at each step.
attribute top_p: float = 0.0ο
Total probability mass of tokens to consider at each step.
attribute use_multiplicative_presence_penalty: Optional[bool] = Falseο
Flag deciding whether presence penalty is applied
multiplicatively (True) or additively (False).
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
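A batch-usage sketch (the prompts are illustrative; LLMResult.generations holds one list of generations per input prompt):
result = aleph_alpha.generate(["Tell me a joke.", "Tell me a fact."])
for prompt_generations in result.generations:
    print(prompt_generations[0].text)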
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
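This can be used to budget prompts before calling the model; a small sketch (the 2048-token budget is illustrative):
prompt = "Translate to German: good morning"
if aleph_alpha.get_num_tokens(prompt) < 2048:
    print(aleph_alpha(prompt))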
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.AmazonAPIGateway(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, api_url, model_kwargs=None, content_handler=<langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway object>)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around a custom Amazon API Gateway.
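Example (a minimal sketch; the URL and model_kwargs are placeholders for an API Gateway stage that fronts your model endpoint):
from langchain.llms import AmazonAPIGateway
api_url = "https://<api-id>.execute-api.<region>.amazonaws.com/<stage>"
llm = AmazonAPIGateway(api_url=api_url, model_kwargs={"max_new_tokens": 100})
print(llm("Write a one-sentence summary of Amazon API Gateway."))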
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
api_url (str) β
model_kwargs (Optional[Dict]) β
content_handler (langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway) β
Return type
None
attribute api_url: str [Required]ο
API Gateway URL
attribute content_handler: langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway = <langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway object>ο
The content handler class that provides input and
output transform functions to handle formats between the LLM
and the endpoint.
attribute model_kwargs: Optional[Dict] = Noneο
Keyword arguments to pass to the model.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Anthropic(*, client=None, model='claude-v1', max_tokens_to_sample=256, temperature=None, top_k=None, top_p=None, streaming=False, default_request_timeout=None, anthropic_api_url=None, anthropic_api_key=None, HUMAN_PROMPT=None, AI_PROMPT=None, count_tokens=None, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None)[source]ο
Bases: langchain.llms.base.LLM, langchain.llms.anthropic._AnthropicCommon
Wrapper around Anthropic's large language models.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
import anthropic
from langchain.llms import Anthropic
model = Anthropic(model="<model_name>", anthropic_api_key="my-api-key")
# Simplest invocation, automatically wrapped with HUMAN_PROMPT
# and AI_PROMPT.
response = model("What are the biggest risks facing humanity?")
# Or if you want to use the chat mode, build a few-shot-prompt, or
# put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:
raw_prompt = "What are the biggest risks facing humanity?"
prompt = f"{anthropic.HUMAN_PROMPT} {prompt}{anthropic.AI_PROMPT}"
response = model(prompt)
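Async usage follows the same pattern through the a-prefixed methods documented below; a minimal sketch (the prompt is illustrative):
import asyncio

async def main():
    result = await model.agenerate(["Give me three startup ideas."])
    print(result.generations[0][0].text)

asyncio.run(main())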
Parameters
client (Any) β
model (str) β
max_tokens_to_sample (int) β
temperature (Optional[float]) β
top_k (Optional[int]) β
top_p (Optional[float]) β
streaming (bool) β
default_request_timeout (Optional[Union[float, Tuple[float, float]]]) β
anthropic_api_url (Optional[str]) β
anthropic_api_key (Optional[str]) β
HUMAN_PROMPT (Optional[str]) β
AI_PROMPT (Optional[str]) β
count_tokens (Optional[Callable[[str], int]]) β
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
Return type
None
attribute default_request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout for requests to Anthropic Completion API. Default is 600 seconds.
attribute max_tokens_to_sample: int = 256ο
Denotes the number of tokens to predict per generation.
attribute model: str = 'claude-v1'ο
Model name to use.
attribute streaming: bool = Falseο
Whether to stream the results.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: Optional[float] = Noneο
A non-negative float that tunes the degree of randomness in generation.
attribute top_k: Optional[int] = Noneο
Number of most likely tokens to consider at each step.
attribute top_p: Optional[float] = Noneο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)[source]ο
Calculate the number of tokens in the text.
Parameters
text (str) β
Return type
int
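Per the constructor's count_tokens parameter, token counting here is delegated to the anthropic client rather than a tiktoken-based estimate; a quick sketch:
n_tokens = model.get_num_tokens("How many tokens is this sentence?")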
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt, stop=None)[source]ο
Call Anthropic completion_stream and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt (str) β The prompt to pass into the model.
stop (Optional[List[str]]) β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from Anthropic.
Return type
Generator
Example
prompt = "Write a poem about a stream."
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
generator = anthropic.stream(prompt)
for token in generator:
    print(token)
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Anyscale(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_kwargs=None, anyscale_service_url=None, anyscale_service_route=None, anyscale_service_token=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Anyscale Services.
To use, you should have the environment variables ANYSCALE_SERVICE_URL,
ANYSCALE_SERVICE_ROUTE and ANYSCALE_SERVICE_TOKEN set with your Anyscale
Service, or pass them as named parameters to the constructor.
Example
from langchain.llms import Anyscale
anyscale = Anyscale(anyscale_service_url="SERVICE_URL",
anyscale_service_route="SERVICE_ROUTE",
anyscale_service_token="SERVICE_TOKEN")
# Use Ray for distributed processing
import ray
prompt_list=[]
@ray.remote
def send_query(llm, prompt):
resp = llm(prompt)
return resp
futures = [send_query.remote(anyscale, prompt) for prompt in prompt_list]
results = ray.get(futures)
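The service can also be called directly, without Ray; a minimal sketch (the prompt is illustrative):
print(anyscale("Write a haiku about distributed computing."))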
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model_kwargs (Optional[dict]) β
anyscale_service_url (Optional[str]) β
anyscale_service_route (Optional[str]) β
anyscale_service_token (Optional[str]) β
Return type
None
attribute model_kwargs: Optional[dict] = Noneο
Keyword arguments to pass to the model. Reserved for future use.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Aviary(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model='amazon/LightGPT', aviary_url=None, aviary_token=None, use_prompt_format=True, version=None)[source]ο
Bases: langchain.llms.base.LLM
Allows you to use an Aviary.
Aviary is a backend for hosted models. You can
find out more about Aviary at
http://github.com/ray-project/aviary
To get a list of the models supported on an
aviary, follow the instructions on the web site to
install the aviary CLI and then use:
aviary models
The AVIARY_URL and AVIARY_TOKEN environment variables must be set.
Example
from langchain.llms import Aviary
os.environ["AVIARY_URL"] = "<URL>"
os.environ["AVIARY_TOKEN"] = "<TOKEN>"
light = Aviary(model='amazon/LightGPT')
output = light('How do you make fried rice?')
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model (str) β
aviary_url (Optional[str]) β
aviary_token (Optional[str]) β
use_prompt_format (bool) β
version (Optional[str]) β
Return type
None
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.AzureMLOnlineEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', endpoint_api_key='', deployment_name='', http_client=None, content_formatter=None, model_kwargs=None)[source]ο
Bases: langchain.llms.base.LLM, pydantic.main.BaseModel
Wrapper around Azure ML Hosted models using Managed Online Endpoints.
Example
azure_llm = AzureMLOnlineEndpoint(
endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score",
endpoint_api_key="my-api-key",
deployment_name="my-deployment-name",
content_formatter=content_formatter,
)
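Alternatively, the endpoint settings can be supplied through the environment variables listed in the attribute docs below; a sketch with placeholder values (content_formatter must still be provided, as above):
import os
os.environ["AZUREML_ENDPOINT_URL"] = "https://<your-endpoint>.<your_region>.inference.ml.azure.com/score"
os.environ["AZUREML_ENDPOINT_API_KEY"] = "my-api-key"
os.environ["AZUREML_DEPLOYMENT_NAME"] = "my-deployment-name"
azure_llm = AzureMLOnlineEndpoint(content_formatter=content_formatter)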
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
endpoint_url (str) β
endpoint_api_key (str) β
deployment_name (str) β
http_client (Any) β
content_formatter (Any) β
model_kwargs (Optional[dict]) β
Return type
None
attribute content_formatter: Any = Noneο
The content formatter that provides input and output
transform functions to handle formats between the LLM and
the endpoint.
attribute deployment_name: str = ''ο
Deployment Name for Endpoint. Should be passed to constructor or specified as
env var AZUREML_DEPLOYMENT_NAME.
attribute endpoint_api_key: str = ''ο
Authentication Key for Endpoint. Should be passed to constructor or specified as
env var AZUREML_ENDPOINT_API_KEY.
attribute endpoint_url: str = ''ο
URL of pre-existing Endpoint. Should be passed to constructor or specified as
env var AZUREML_ENDPOINT_URL.
attribute model_kwargs: Optional[dict] = Noneο
Keyword arguments to pass to the model.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.AzureOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='text-davinci-003', temperature=0.7, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, batch_size=20, request_timeout=None, logit_bias=None, max_retries=6, streaming=False, allowed_special={}, disallowed_special='all', tiktoken_model_name=None, deployment_name='', openai_api_type='azure', openai_api_version='')[source]ο
Bases: langchain.llms.openai.BaseOpenAI
Wrapper around Azure-specific OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
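When targeting an Azure deployment, the deployment name and API version are typically supplied as well; a sketch with placeholder values (see the attribute docs below):
llm = AzureOpenAI(
    deployment_name="your-deployment-name",
    model_name="text-davinci-003",
    openai_api_base="https://<your-resource>.openai.azure.com/",
    openai_api_version="2023-03-15-preview",
    openai_api_key="your-azure-openai-key",
)
print(llm("Tell me a joke."))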
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
max_tokens (int) β
top_p (float) β
frequency_penalty (float) β
presence_penalty (float) β
n (int) β
best_of (int) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_organization (Optional[str]) β
openai_proxy (Optional[str]) β
batch_size (int) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
logit_bias (Optional[Dict[str, float]]) β
max_retries (int) β
streaming (bool) β
allowed_special (Union[Literal['all'], typing.AbstractSet[str]]) β
disallowed_special (Union[Literal['all'], typing.Collection[str]]) β
tiktoken_model_name (Optional[str]) β
deployment_name (str) β
openai_api_type (str) β
openai_api_version (str) β
Return type
None
attribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}ο
Set of special tokens that are allowed.
attribute batch_size: int = 20ο
Batch size to use when passing multiple documents to generate.
attribute best_of: int = 1ο
Generates best_of completions server-side and returns the "best".
attribute deployment_name: str = ''ο
Deployment name to use.
attribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'ο
Set of special tokens that are not allowed.
attribute frequency_penalty: float = 0ο
Penalizes repeated tokens according to frequency.
attribute logit_bias: Optional[Dict[str, float]] [Optional]ο
Adjust the probability of specific tokens being generated.
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
attribute max_tokens: int = 256ο
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not explicitly specified.
attribute model_name: str = 'text-davinci-003' (alias 'model')ο
Model name to use.
attribute n: int = 1ο
How many completions to generate for each prompt.
attribute presence_penalty: float = 0ο
Penalizes repeated tokens.
attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout for requests to OpenAI completion API. Default is 600 seconds.
attribute streaming: bool = Falseο
Whether to stream the results or not.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute tiktoken_model_name: Optional[str] = Noneο
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the model name. However, there are some cases
where you may want to use this LLM class with a model name not
supported by tiktoken. This can include when using Azure OpenAI or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
attribute top_p: float = 1ο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
create_llm_result(choices, prompts, token_usage)ο
Create the LLMResult from the choices and prompts.
Parameters
choices (Any) β
prompts (List[str]) β
token_usage (Dict[str, int]) β
Return type
langchain.schema.LLMResult
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_sub_prompts(params, prompts, stop=None)ο
Get the sub prompts for llm call.
Parameters
params (Dict[str, Any]) β
prompts (List[str]) β
stop (Optional[List[str]]) β
Return type
List[List[str]]
get_token_ids(text)ο
Get the token IDs using the tiktoken package.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
max_tokens_for_prompt(prompt)ο
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt (str) β The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Return type
int
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname)ο
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname (str) β The modelname we want to know the context size for.
Returns
The maximum context size
Return type
int
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
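For instance (a minimal sketch):
answer = llm.predict("What is the capital of France?", stop=["\n"])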
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
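For instance (a minimal sketch):
from langchain.schema import HumanMessage
reply = llm.predict_messages([HumanMessage(content="Tell me a joke.")])
print(reply.content)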
prep_streaming_params(stop=None)ο
Prepare the params for streaming.
Parameters
stop (Optional[List[str]]) β
Return type
Dict[str, Any]
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
llm.save(file_path="path/llm.yaml")
stream(prompt, stop=None)ο
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt (str) β The prompt to pass into the model.
stop (Optional[List[str]]) β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Return type
Generator
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
property max_context_size: intο
Get max context size for this model.
class langchain.llms.Banana(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_key='', model_kwargs=None, banana_api_key=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Banana large language models.
To use, you should have the banana-dev python package installed,
and the environment variable BANANA_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import Banana
banana = Banana(model_key="")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model_key (str) β
model_kwargs (Dict[str, Any]) β
banana_api_key (Optional[str]) β
Return type
None
attribute model_key: str = ''ο
model endpoint to use
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not
explicitly specified.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Baseten(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, input=None, model_kwargs=None)[source]ο
Bases: langchain.llms.base.LLM
Use your Baseten models in LangChain.
To use, you should have the baseten python package installed,
and run baseten.login() with your Baseten API key.
The required model param can be either a model ID or a model version ID.
Using a model version ID will result in slightly faster invocation.
Any other model parameters can also be passed in with the format
input={model_param: value, ...}.
The Baseten model must accept a dictionary of input with the key
'prompt' and return a dictionary with a key 'data' which maps
to a list of response strings.
Example
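A minimal sketch (the API key and model version ID below are placeholders for your own Baseten deployment):
import baseten
from langchain.llms import Baseten
baseten.login("YOUR_BASETEN_API_KEY")
llm = Baseten(model="MODEL_VERSION_ID")
llm("What is the difference between a llama and an alpaca?")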
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model (str) β
input (Dict[str, Any]) β
model_kwargs (Dict[str, Any]) β
Return type
None
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Beam(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_name='', name='', cpu='', memory='', gpu='', python_version='', python_packages=[], max_length='', url='', model_kwargs=None, beam_client_id='', beam_client_secret='', app_id=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around the Beam API for the gpt2 large language model.
To use, you should have the beam-sdk python package installed,
and the environment variable BEAM_CLIENT_ID set with your client id
and BEAM_CLIENT_SECRET set with your client secret. Information on how
to get these is available here: https://docs.beam.cloud/account/api-keys.
The wrapper can then be called as follows, where the name, cpu, memory, gpu,
python version, and python packages can be updated accordingly. Once deployed,
the instance can be called.
Example
llm = Beam(model_name="gpt2",
name="langchain-gpt2",
cpu=8,
memory="32Gi",
gpu="A10G",
python_version="python3.8",
python_packages=[
"diffusers[torch]>=0.10",
"transformers",
"torch",
"pillow",
"accelerate",
"safetensors",
"xformers",],
max_length=50)
llm._deploy()
call_result = llm._call(input)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model_name (str) β
name (str) β
cpu (str) β
memory (str) β
gpu (str) β
python_version (str) β
python_packages (List[str]) β
max_length (str) β
url (str) β
model_kwargs (Dict[str, Any]) β
beam_client_id (str) β
beam_client_secret (str) β
app_id (Optional[str]) β
Return type
None
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not
explicitly specified.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute url: str = ''ο
model endpoint to use
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
app_creation()[source]ο
Creates a Python file which will contain your Beam app definition.
Return type
None
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
run_creation()[source]ο
Creates a Python file which will be deployed on beam.
Return type
None
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Bedrock(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, region_name=None, credentials_profile_name=None, model_id, model_kwargs=None)[source]ο
Bases: langchain.llms.base.LLM
LLM provider to invoke Bedrock models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Bedrock service.
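Example (a minimal sketch; the model id, profile name, and region below are illustrative and should match your own AWS setup):
from langchain.llms import Bedrock
llm = Bedrock(
    model_id="amazon.titan-tg1-large",
    credentials_profile_name="bedrock-admin",
    region_name="us-west-2",
)
llm("Explain what Amazon Bedrock is in one sentence.")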
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
region_name (Optional[str]) β
credentials_profile_name (Optional[str]) β
model_id (str) β
model_kwargs (Optional[Dict]) β
Return type
None
attribute credentials_profile_name: Optional[str] = Noneο
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
attribute model_id: str [Required]ο
Id of the model to call, e.g., amazon.titan-tg1-large. This is
equivalent to the modelId property in the list-foundation-models API.
attribute model_kwargs: Optional[Dict] = Noneο
Key word arguments to pass to the model.
attribute region_name: Optional[str] = Noneο
The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable
or the region specified in ~/.aws/config if it is not provided here.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.CTransformers(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model, model_type=None, model_file=None, config=None, lib=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around the C Transformers LLM interface.
To use, you should have the ctransformers python package installed.
See https://github.com/marella/ctransformers
Example
from langchain.llms import CTransformers
llm = CTransformers(model="/path/to/ggml-gpt-2.bin", model_type="gpt2")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
model_type (Optional[str]) β
model_file (Optional[str]) β
config (Optional[Dict[str, Any]]) β
lib (Optional[str]) β
Return type
None
attribute config: Optional[Dict[str, Any]] = Noneο
The config parameters.
See https://github.com/marella/ctransformers#config
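For instance (a minimal sketch; the repo name and config keys shown are illustrative common ctransformers options and may vary by model type, see the linked config reference):
from langchain.llms import CTransformers
llm = CTransformers(
    model="marella/gpt-2-ggml",
    model_type="gpt2",
    config={"max_new_tokens": 256, "temperature": 0.8},
)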
attribute lib: Optional[str] = Noneο
The path to a shared library or one of avx2, avx, basic.
attribute model: str [Required]ο
The path to a model file or directory or the name of a Hugging Face Hub
model repo.
attribute model_file: Optional[str] = Noneο
The name of the model file in repo or directory.
attribute model_type: Optional[str] = Noneο
The model type.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.CerebriumAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', model_kwargs=None, cerebriumai_api_key=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around CerebriumAI large language models.
To use, you should have the cerebrium python package installed, and the
environment variable CEREBRIUMAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import CerebriumAI
cerebrium = CerebriumAI(endpoint_url="")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
endpoint_url (str) β
model_kwargs (Dict[str, Any]) β
cerebriumai_api_key (Optional[str]) β
Return type
None
attribute endpoint_url: str = ''ο
model endpoint to use
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not
explicitly specified.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Clarifai(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, stub=None, metadata=None, userDataObject=None, model_id=None, model_version_id=None, app_id=None, user_id=None, clarifai_pat_key=None, api_base='https://api.clarifai.com', stop=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Clarifai's large language models.
To use, you should have an account on the Clarifai platform,
the clarifai python package installed, and the
environment variable CLARIFAI_PAT_KEY set with your PAT key,
or pass it as a named parameter to the constructor.
Example
from langchain.llms import Clarifai
clarifai_llm = Clarifai(clarifai_pat_key=CLARIFAI_PAT_KEY, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
stub (Any) β
metadata (Any) β
userDataObject (Any) β
model_id (Optional[str]) β
model_version_id (Optional[str]) β
app_id (Optional[str]) β
user_id (Optional[str]) β
clarifai_pat_key (Optional[str]) β
api_base (str) β
stop (Optional[List[str]]) β
Return type
None
attribute app_id: Optional[str] = Noneο
Clarifai application id to use.
attribute model_id: Optional[str] = Noneο
Model id to use.
attribute model_version_id: Optional[str] = Noneο
Model version id to use.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute user_id: Optional[str] = Noneο
Clarifai user id to use.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Cohere(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model=None, max_tokens=256, temperature=0.75, k=0, p=1, frequency_penalty=0.0, presence_penalty=0.0, truncate=None, max_retries=10, cohere_api_key=None, stop=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Cohere large language models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import Cohere
cohere = Cohere(model="gptd-instruct-tft", cohere_api_key="my-api-key")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (Optional[str]) β
max_tokens (int) β
temperature (float) β
k (int) β
p (int) β
frequency_penalty (float) β
presence_penalty (float) β
truncate (Optional[str]) β
max_retries (int) β
cohere_api_key (Optional[str]) β
stop (Optional[List[str]]) β
Return type
None
attribute frequency_penalty: float = 0.0ο
Penalizes repeated tokens according to frequency. Between 0 and 1.
attribute k: int = 0ο
Number of most likely tokens to consider at each step.
attribute max_retries: int = 10ο
Maximum number of retries to make when generating.
attribute max_tokens: int = 256ο
Denotes the number of tokens to predict per generation.
attribute model: Optional[str] = Noneο
Model name to use.
attribute p: int = 1ο
Total probability mass of tokens to consider at each step.
attribute presence_penalty: float = 0.0ο
Penalizes repeated tokens. Between 0 and 1.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.75ο
A non-negative float that tunes the degree of randomness in generation.
attribute truncate: Optional[str] = Noneο
Specify how the client handles inputs longer than the maximum token
length: Truncate from START, END or NONE
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Databricks(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, host=None, api_token=None, endpoint_name=None, cluster_id=None, cluster_driver_port=None, model_kwargs=None, transform_input_fn=None, transform_output_fn=None)[source]ο
Bases: langchain.llms.base.LLM
LLM wrapper around a Databricks serving endpoint or a cluster driver proxy app.
It supports two endpoint types:
Serving endpoint (recommended for both production and development).
We assume that an LLM was registered and deployed to a serving endpoint.
To wrap it as an LLM you must have βCan Queryβ permission to the endpoint.
Set endpoint_name accordingly and do not set cluster_id and
cluster_driver_port.
The expected model signature is:
inputs:
[{"name": "prompt", "type": "string"},
{"name": "stop", "type": "list[string]"}]
outputs: [{"type": "string"}]
Cluster driver proxy app (recommended for interactive development).
One can load an LLM on a Databricks interactive cluster and start a local HTTP
server on the driver node to serve the model at / using the HTTP POST method
with JSON input/output.
Use a port number between 3000 and 8000, and have the server listen on
the driver IP address (or simply 0.0.0.0) rather than on localhost only.
To wrap it as an LLM you must have βCan Attach Toβ permission to the cluster.
Set cluster_id and cluster_driver_port and do not set endpoint_name.
The expected server schema (using JSON schema) is:
inputs:
{"type": "object",
"properties": {
"prompt": {"type": "string"},
"stop": {"type": "array", "items": {"type": "string"}}},
"required": ["prompt"]}`
outputs: {"type": "string"}
If the endpoint model signature is different or you want to set extra params,
you can use transform_input_fn and transform_output_fn to apply necessary
transformations before and after the query.
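A minimal sketch of both connection modes and an input transform (the endpoint name, cluster id, port, and prompt template below are placeholders):
.. code-block:: python
    from langchain.llms import Databricks

    # Serving endpoint (hypothetical endpoint name).
    llm = Databricks(endpoint_name="my-llm-endpoint")

    # Or: cluster driver proxy app (hypothetical cluster id and port).
    # llm = Databricks(cluster_id="0123-456789-abcdef", cluster_driver_port="7777")

    # Optional transform if the endpoint expects a different request shape.
    def transform_input(**request):
        request["prompt"] = f"Instruction: {request['prompt']}\nResponse:"
        return request

    llm = Databricks(
        endpoint_name="my-llm-endpoint", transform_input_fn=transform_input
    )
    print(llm("How are you?"))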
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
host (str) β
api_token (str) β
endpoint_name (Optional[str]) β
cluster_id (Optional[str]) β
cluster_driver_port (Optional[str]) β
model_kwargs (Optional[Dict[str, Any]]) β
transform_input_fn (Optional[Callable]) β
transform_output_fn (Optional[Callable[[...], str]]) β
Return type
None
attribute api_token: str [Optional]ο
Databricks personal access token.
If not provided, the default value is determined by
the DATABRICKS_TOKEN environment variable if present, or
an automatically generated temporary token if running inside a Databricks
notebook attached to an interactive cluster in βsingle userβ or
βno isolation sharedβ mode.
attribute cluster_driver_port: Optional[str] = Noneο
The port number used by the HTTP server running on the cluster driver node.
The server should listen on the driver IP address (or simply 0.0.0.0) so that it can be reached.
We recommend using a port number between 3000 and 8000.
attribute cluster_id: Optional[str] = Noneο
ID of the cluster if connecting to a cluster driver proxy app.
If neither endpoint_name nor cluster_id is provided and the code runs
inside a Databricks notebook attached to an interactive cluster in βsingle userβ
or βno isolation sharedβ mode, the current cluster ID is used as default.
You must not set both endpoint_name and cluster_id.
attribute endpoint_name: Optional[str] = Noneο
Name of the model serving endpoint.
You must specify the endpoint name to connect to a model serving endpoint.
You must not set both endpoint_name and cluster_id.
attribute host: str [Optional]ο
Databricks workspace hostname.
If not provided, the default value is determined by
the DATABRICKS_HOST environment variable if present, or
the hostname of the current Databricks workspace if running inside
a Databricks notebook attached to an interactive cluster in βsingle userβ
or βno isolation sharedβ mode.
attribute model_kwargs: Optional[Dict[str, Any]] = Noneο
Extra parameters to pass to the endpoint.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute transform_input_fn: Optional[Callable] = Noneο
A function that transforms {prompt, stop, **kwargs} into a JSON-compatible
request object that the endpoint accepts.
For example, you can apply a prompt template to the input prompt.
attribute transform_output_fn: Optional[Callable[[...], str]] = Noneο
A function that transforms the output from the endpoint to the generated text.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.DeepInfra(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_id='google/flan-t5-xl', model_kwargs=None, deepinfra_api_token=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around DeepInfra deployed models.
To use, you should have the requests python package installed, and the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import DeepInfra
di = DeepInfra(model_id="google/flan-t5-xl",
deepinfra_api_token="my-api-key")
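A usage sketch passing extra generation parameters (the model_kwargs keys below are illustrative and depend on the deployed model):
.. code-block:: python
    from langchain.llms import DeepInfra

    di = DeepInfra(
        model_id="google/flan-t5-xl",
        deepinfra_api_token="my-api-key",
        # Forwarded with the inference request; valid keys depend on the model.
        model_kwargs={"temperature": 0.7, "max_new_tokens": 250},
    )
    print(di("What is the capital of France?"))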
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model_id (str) β
model_kwargs (Optional[dict]) β
deepinfra_api_token (Optional[str]) β
Return type
None
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.FakeListLLM(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, responses, i=0)[source]ο
Bases: langchain.llms.base.LLM
Fake LLM wrapper for testing purposes.
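Example (a minimal testing sketch; the canned responses are returned in order, which keeps chain behaviour deterministic in unit tests):
.. code-block:: python
    from langchain.llms import FakeListLLM

    fake_llm = FakeListLLM(responses=["first canned answer", "second canned answer"])
    assert fake_llm("any prompt") == "first canned answer"
    assert fake_llm("another prompt") == "second canned answer"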
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
responses (List) β
i (int) β
Return type
None
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.ForefrontAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', temperature=0.7, length=256, top_p=1.0, top_k=40, repetition_penalty=1, forefrontai_api_key=None, base_url=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around ForefrontAI large language models.
To use, you should have the environment variable FOREFRONTAI_API_KEY
set with your API key.
Example
from langchain.llms import ForefrontAI
forefrontai = ForefrontAI(endpoint_url="")
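A slightly fuller sketch (the endpoint URL and API key below are placeholders for the values from your ForefrontAI deployment):
.. code-block:: python
    from langchain.llms import ForefrontAI

    forefrontai = ForefrontAI(
        endpoint_url="https://your-forefrontai-endpoint-url",
        forefrontai_api_key="my-api-key",
        temperature=0.7,
        length=256,
    )
    print(forefrontai("Tell me a joke about data engineers."))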
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
endpoint_url (str) β
temperature (float) β
length (int) β
top_p (float) β
top_k (int) β
repetition_penalty (int) β
forefrontai_api_key (Optional[str]) β
base_url (Optional[str]) β
Return type
None
attribute base_url: Optional[str] = Noneο
Base URL to use; if None, it is decided based on the model name.
attribute endpoint_url: str = ''ο
Endpoint URL to use.
attribute length: int = 256ο
The maximum number of tokens to generate in the completion.
attribute repetition_penalty: int = 1ο
Penalizes repeated tokens according to frequency.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute top_k: int = 40ο
The number of highest probability vocabulary tokens to
keep for top-k-filtering.
attribute top_p: float = 1.0ο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.GPT4All(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, backend=None, n_ctx=512, n_parts=-1, seed=0, f16_kv=False, logits_all=False, vocab_only=False, use_mlock=False, embedding=False, n_threads=4, n_predict=256, temp=0.8, top_p=0.95, top_k=40, echo=False, stop=[], repeat_last_n=64, repeat_penalty=1.3, n_batch=1, streaming=False, context_erase=0.5, allow_download=False, client=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around GPT4All language models.
To use, you should have the gpt4all python package installed, the
pre-trained model file, and the modelβs config information.
Example
from langchain.llms import GPT4All
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Simplest invocation
response = model("Once upon a time, ")
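A streaming sketch (the model path is a placeholder for a compatible local model file; tokens are written to stdout as they are generated):
.. code-block:: python
    from langchain.llms import GPT4All
    from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

    callbacks = [StreamingStdOutCallbackHandler()]
    llm = GPT4All(
        model="./models/gpt4all-model.bin", callbacks=callbacks, streaming=True
    )
    llm("Write a haiku about mountains.")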
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model (str) β
backend (Optional[str]) β
n_ctx (int) β
n_parts (int) β
seed (int) β
f16_kv (bool) β
logits_all (bool) β
vocab_only (bool) β
use_mlock (bool) β
embedding (bool) β
n_threads (Optional[int]) β
n_predict (Optional[int]) β
temp (Optional[float]) β
top_p (Optional[float]) β
top_k (Optional[int]) β
echo (Optional[bool]) β
stop (Optional[List[str]]) β
repeat_last_n (Optional[int]) β
repeat_penalty (Optional[float]) β
n_batch (int) β
streaming (bool) β
context_erase (float) β
allow_download (bool) β
client (Any) β
Return type
None
attribute allow_download: bool = Falseο
If model does not exist in ~/.cache/gpt4all/, download it.
attribute context_erase: float = 0.5ο
If the context has run out, leave (n_ctx * context_erase) tokens
starting from the beginning.
attribute echo: Optional[bool] = Falseο
Whether to echo the prompt.
attribute embedding: bool = Falseο
Use embedding mode only.
attribute f16_kv: bool = Falseο
Use half-precision for key/value cache.
attribute logits_all: bool = Falseο
Return logits for all tokens, not just the last token.
attribute model: str [Required]ο
Path to the pre-trained GPT4All model file.
attribute n_batch: int = 1ο
Batch size for prompt processing.
attribute n_ctx: int = 512ο
Token context window.
attribute n_parts: int = -1ο
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
attribute n_predict: Optional[int] = 256ο
The maximum number of tokens to generate.
attribute n_threads: Optional[int] = 4ο
Number of threads to use.
attribute repeat_last_n: Optional[int] = 64ο
Last n tokens to penalize.
attribute repeat_penalty: Optional[float] = 1.3ο
The penalty to apply to repeated tokens.
attribute seed: int = 0ο
Seed. If -1, a random seed is used.
attribute stop: Optional[List[str]] = []ο
A list of strings to stop generation when encountered.
attribute streaming: bool = Falseο
Whether to stream the results or not.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temp: Optional[float] = 0.8ο
The temperature to use for sampling.
attribute top_k: Optional[int] = 40ο
The top-k value to use for sampling.
attribute top_p: Optional[float] = 0.95ο
The top-p value to use for sampling.
attribute use_mlock: bool = Falseο
Force system to keep model in RAM.
attribute verbose: bool [Optional]ο
Whether to print out response text.
attribute vocab_only: bool = Falseο
Only load the vocabulary, no weights.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.GooglePalm(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, google_api_key=None, model_name='models/text-bison-001', temperature=0.7, top_p=None, top_k=None, max_output_tokens=None, n=1)[source]ο
Bases: langchain.llms.base.BaseLLM, pydantic.main.BaseModel
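Example (a minimal sketch; the API key and prompt are placeholders, and the google.generativeai python package is assumed to be installed):
.. code-block:: python
    from langchain.llms import GooglePalm

    palm = GooglePalm(
        google_api_key="my-api-key", temperature=0.2, max_output_tokens=128
    )
    print(palm("Explain the difference between a list and a tuple in Python."))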
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
google_api_key (Optional[str]) β
model_name (str) β
temperature (float) β
top_p (Optional[float]) β
top_k (Optional[int]) β
max_output_tokens (Optional[int]) β
n (int) β
Return type
None
attribute max_output_tokens: Optional[int] = Noneο
Maximum number of tokens to include in a candidate. Must be greater than zero.
If unset, will default to 64.
attribute model_name: str = 'models/text-bison-001'ο
Model name to use.
attribute n: int = 1ο
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
Run inference with this temperature. Must be in the closed interval
[0.0, 1.0].
attribute top_k: Optional[int] = Noneο
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
attribute top_p: Optional[float] = Noneο
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.GooseAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='gpt-neo-20b', temperature=0.7, max_tokens=256, top_p=1, min_tokens=1, frequency_penalty=0, presence_penalty=0, n=1, model_kwargs=None, logit_bias=None, gooseai_api_key=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable GOOSEAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import GooseAI
gooseai = GooseAI(model_name="gpt-neo-20b")
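A usage sketch passing extra create-call parameters through model_kwargs (the keys shown are illustrative):
.. code-block:: python
    from langchain.llms import GooseAI

    gooseai = GooseAI(
        model_name="gpt-neo-20b",
        temperature=0.7,
        model_kwargs={"top_k": 40},
    )
    print(gooseai("Once upon a time, "))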
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model_name (str) β
temperature (float) β
max_tokens (int) β
top_p (float) β
min_tokens (int) β
frequency_penalty (float) β
presence_penalty (float) β
n (int) β
model_kwargs (Dict[str, Any]) β
logit_bias (Optional[Dict[str, float]]) β
gooseai_api_key (Optional[str]) β
Return type
None
attribute frequency_penalty: float = 0ο
Penalizes repeated tokens according to frequency.
attribute logit_bias: Optional[Dict[str, float]] [Optional]ο
Adjust the probability of specific tokens being generated.
attribute max_tokens: int = 256ο
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
attribute min_tokens: int = 1ο
The minimum number of tokens to generate in the completion.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not explicitly specified.
attribute model_name: str = 'gpt-neo-20b'ο
Model name to use.
attribute n: int = 1ο
How many completions to generate for each prompt.
attribute presence_penalty: float = 0ο
Penalizes repeated tokens.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute top_p: float = 1ο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
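The saved configuration can later be reloaded; a brief sketch (path reused from the example above; load_llm is assumed to be importable from langchain.llms.loading in this release):
from langchain.llms.loading import load_llm
# Rebuilds the LLM from the serialized YAML written by save().
llm = load_llm("path/llm.yaml")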
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.HuggingFaceEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', task=None, model_kwargs=None, huggingfacehub_api_token=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around HuggingFaceHub Inference Endpoints.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import HuggingFaceEndpoint
endpoint_url = (
"https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud"
)
hf = HuggingFaceEndpoint(
endpoint_url=endpoint_url,
huggingfacehub_api_token="my-api-key"
)
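A minimal invocation sketch (endpoint URL and task are illustrative; the task must match the deployed model):
from langchain.llms import HuggingFaceEndpoint
hf = HuggingFaceEndpoint(
    endpoint_url="https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud",
    task="text-generation",
    model_kwargs={"max_new_tokens": 64},
    huggingfacehub_api_token="my-api-key",
)
# The endpoint is called synchronously and the generated text is returned.
print(hf("What is the capital of France?"))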
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
endpoint_url (str) β
task (Optional[str]) β
model_kwargs (Optional[dict]) β
huggingfacehub_api_token (Optional[str]) β
Return type
None
attribute endpoint_url: str = ''ο
Endpoint URL to use.
attribute model_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute task: Optional[str] = Noneο
Task to call the model with.
Should be a task that returns generated_text or summary_text.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.HuggingFaceHub(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, repo_id='gpt2', task=None, model_kwargs=None, huggingfacehub_api_token=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around HuggingFaceHub models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation, text2text-generation and summarization for now.
Example
from langchain.llms import HuggingFaceHub
hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key")
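A minimal invocation sketch (assumes HUGGINGFACEHUB_API_TOKEN is set in the environment; repo_id and model_kwargs are illustrative):
from langchain.llms import HuggingFaceHub
hf = HuggingFaceHub(
    repo_id="google/flan-t5-base",
    task="text2text-generation",
    model_kwargs={"temperature": 0.5, "max_length": 64},
)
print(hf("Translate to German: How old are you?"))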
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
repo_id (str) β
task (Optional[str]) β
model_kwargs (Optional[dict]) β
huggingfacehub_api_token (Optional[str]) β
Return type
None
attribute model_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model.
attribute repo_id: str = 'gpt2'ο
Model name to use.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute task: Optional[str] = Noneο
Task to call the model with.
Should be a task that returns generated_text or summary_text.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.HuggingFacePipeline(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline=None, model_id='gpt2', model_kwargs=None, pipeline_kwargs=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around HuggingFace Pipeline API.
To use, you should have the transformers python package installed.
Only supports text-generation, text2text-generation and summarization for now.
Example using from_model_id:
from langchain.llms import HuggingFacePipeline
hf = HuggingFacePipeline.from_model_id(
model_id="gpt2",
task="text-generation",
pipeline_kwargs={"max_new_tokens": 10},
)
Example passing pipeline in directly:
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)
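A sketch of using the pipeline inside a simple chain (LLMChain and PromptTemplate are assumed importable from the langchain top level in this release; the prompt text is illustrative):
from langchain import LLMChain, PromptTemplate
from langchain.llms import HuggingFacePipeline
hf = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 10},
)
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write one sentence about {topic}.",
)
chain = LLMChain(llm=hf, prompt=prompt)
# run() formats the prompt, calls the local pipeline and returns the generated text.
print(chain.run(topic="owls"))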
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline (Any) β
model_id (str) β
model_kwargs (Optional[dict]) β
pipeline_kwargs (Optional[dict]) β
Return type
None
attribute model_id: str = 'gpt2'ο
Model name to use.
attribute model_kwargs: Optional[dict] = Noneο
Key word arguments passed to the model.
attribute pipeline_kwargs: Optional[dict] = Noneο
Key word arguments passed to the pipeline.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_model_id(model_id, task, device=-1, model_kwargs=None, pipeline_kwargs=None, **kwargs)[source]ο
Construct the pipeline object from model_id and task.
Parameters
model_id (str) β
task (str) β
device (int) β
model_kwargs (Optional[dict]) β
pipeline_kwargs (Optional[dict]) β
kwargs (Any) β
Return type
langchain.llms.base.LLM
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.HuggingFaceTextGenInference(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, max_new_tokens=512, top_k=None, top_p=0.95, typical_p=0.95, temperature=0.8, repetition_penalty=None, stop_sequences=None, seed=None, inference_server_url='', timeout=120, server_kwargs=None, stream=False, client=None, async_client=None)[source]ο
Bases: langchain.llms.base.LLM
HuggingFace text generation inference API.
This class is a wrapper around the HuggingFace text generation inference API.
It is used to generate text from a given prompt.
Attributes:
- max_new_tokens: The maximum number of tokens to generate.
- top_k: The number of top-k tokens to consider when generating text.
- top_p: The cumulative probability threshold for generating text.
- typical_p: The typical probability threshold for generating text.
- temperature: The temperature to use when generating text.
- repetition_penalty: The repetition penalty to use when generating text.
- stop_sequences: A list of stop sequences to use when generating text.
- seed: The seed to use when generating text.
- inference_server_url: The URL of the inference server to use.
- timeout: The timeout value in seconds to use while connecting to inference server.
- server_kwargs: The keyword arguments to pass to the inference server.
- client: The client object used to communicate with the inference server.
- async_client: The async client object used to communicate with the server.
Methods:
- _call: Generates text based on a given prompt and stop sequences.
- _acall: Async generates text based on a given prompt and stop sequences.
- _llm_type: Returns the type of LLM.
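A minimal construction sketch (inference_server_url points at a hypothetical local text-generation-inference server; the sampling values mirror the attributes listed above):
from langchain.llms import HuggingFaceTextGenInference
llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8010/",
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.8,
    repetition_penalty=1.03,
)
print(llm("What is Deep Learning?"))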
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
max_new_tokens (int) β
top_k (Optional[int]) β
top_p (Optional[float]) β
typical_p (Optional[float]) β
temperature (float) β
repetition_penalty (Optional[float]) β
stop_sequences (List[str]) β
seed (Optional[int]) β
inference_server_url (str) β
timeout (int) β
server_kwargs (Dict[str, Any]) β
stream (bool) β
client (Any) β
async_client (Any) β
Return type
None
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.HumanInputLLM(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, input_func=None, prompt_func=None, separator='\n', input_kwargs={}, prompt_kwargs={})[source]ο
Bases: langchain.llms.base.LLM
An LLM wrapper that returns user input as the response.
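A minimal sketch (the prompt_func below simply prints the prompt; the built-in input() is the default input_func):
from langchain.llms import HumanInputLLM
llm = HumanInputLLM(
    prompt_func=lambda prompt: print(f"\n===PROMPT====\n{prompt}\n=====END OF PROMPT======")
)
# The returned "completion" is whatever the user types at the terminal.
print(llm("What is the meaning of life?"))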
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
input_func (Callable) β
prompt_func (Callable[[str], None]) β
separator (str) β
input_kwargs (Mapping[str, Any]) β
prompt_kwargs (Mapping[str, Any]) β
Return type
None
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.LlamaCpp(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_path, lora_base=None, lora_path=None, n_ctx=512, n_parts=-1, seed=-1, f16_kv=True, logits_all=False, vocab_only=False, use_mlock=False, n_threads=None, n_batch=8, n_gpu_layers=None, suffix=None, max_tokens=256, temperature=0.8, top_p=0.95, logprobs=None, echo=False, stop=[], repeat_penalty=1.1, top_k=40, last_n_tokens_size=64, use_mmap=True, streaming=True)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around the llama.cpp model.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example
from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="/path/to/llama/model")
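A slightly fuller construction sketch (illustrative values; model_path is hypothetical, and the context-window and GPU attributes are documented below):
from langchain.llms import LlamaCpp
llm = LlamaCpp(
    model_path="/path/to/llama/model.bin",
    n_ctx=2048,        # token context window
    n_gpu_layers=32,   # layers offloaded to GPU memory (requires a GPU-enabled build)
    n_batch=512,       # tokens processed in parallel; should be between 1 and n_ctx
    temperature=0.8,
    max_tokens=256,
)
print(llm("Name three llamas from pop culture."))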
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model_path (str) β
lora_base (Optional[str]) β
lora_path (Optional[str]) β
n_ctx (int) β
n_parts (int) β
seed (int) β
f16_kv (bool) β
logits_all (bool) β
vocab_only (bool) β
use_mlock (bool) β
n_threads (Optional[int]) β
n_batch (Optional[int]) β
n_gpu_layers (Optional[int]) β
suffix (Optional[str]) β
max_tokens (Optional[int]) β
temperature (Optional[float]) β
top_p (Optional[float]) β
logprobs (Optional[int]) β
echo (Optional[bool]) β
stop (Optional[List[str]]) β
repeat_penalty (Optional[float]) β
top_k (Optional[int]) β
last_n_tokens_size (Optional[int]) β
use_mmap (Optional[bool]) β
streaming (bool) β
Return type
None
attribute echo: Optional[bool] = Falseο
Whether to echo the prompt.
attribute f16_kv: bool = Trueο
Use half-precision for key/value cache.
attribute last_n_tokens_size: Optional[int] = 64ο
The number of tokens to look back when applying the repeat_penalty.
attribute logits_all: bool = Falseο
Return logits for all tokens, not just the last token.
attribute logprobs: Optional[int] = Noneο
The number of logprobs to return. If None, no logprobs are returned.
attribute lora_base: Optional[str] = Noneο
The path to the Llama LoRA base model.
attribute lora_path: Optional[str] = Noneο
The path to the Llama LoRA. If None, no LoRA is loaded.
attribute max_tokens: Optional[int] = 256ο
The maximum number of tokens to generate.
attribute model_path: str [Required]ο
The path to the Llama model file.
attribute n_batch: Optional[int] = 8ο
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
attribute n_ctx: int = 512ο
Token context window.
attribute n_gpu_layers: Optional[int] = Noneο
Number of layers to be loaded into gpu memory. Default None.
attribute n_parts: int = -1ο
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
attribute n_threads: Optional[int] = Noneο
Number of threads to use.
If None, the number of threads is automatically determined.
attribute repeat_penalty: Optional[float] = 1.1ο
The penalty to apply to repeated tokens.
attribute seed: int = -1ο
Seed. If -1, a random seed is used.
attribute stop: Optional[List[str]] = []ο
A list of strings to stop generation when encountered.
attribute streaming: bool = Trueο
Whether to stream the results, token by token.
attribute suffix: Optional[str] = Noneο
A suffix to append to the generated text. If None, no suffix is appended.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: Optional[float] = 0.8ο
The temperature to use for sampling.
attribute top_k: Optional[int] = 40ο
The top-k value to use for sampling.
attribute top_p: Optional[float] = 0.95ο
The top-p value to use for sampling.
attribute use_mlock: bool = Falseο
Force system to keep model in RAM.
attribute use_mmap: Optional[bool] = Trueο
Whether to keep the model loaded in RAM
attribute verbose: bool [Optional]ο
Whether to print out response text.
attribute vocab_only: bool = Falseο
Only load the vocabulary, no weights.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)[source]ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt, stop=None, run_manager=None)[source]ο
Yields results objects as they are generated in real time.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
It also calls the callback manager's on_llm_new_token event with
similar parameters to the OpenAI LLM class method of the same name.
Args:
prompt: The prompt to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
A generator representing the stream of tokens being generated.
Yields:
A dictionary-like object containing a string token and metadata.
See llama-cpp-python docs and below for more.
Example:
from langchain.llms import LlamaCpp
llm = LlamaCpp(
    model_path="/path/to/local/model.bin",
    temperature=0.5,
)
for chunk in llm.stream(
    "Ask 'Hi, how are you?' like a pirate:'",
    stop=["'", "\n"],
):
    result = chunk["choices"][0]
    print(result["text"], end="", flush=True)
Parameters
prompt (str) β
stop (Optional[List[str]]) β
run_manager (Optional[langchain.callbacks.manager.CallbackManagerForLLMRun]) β
Return type
Generator[Dict, None, None]
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.TextGen(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_url, max_new_tokens=250, do_sample=True, temperature=1.3, top_p=0.1, typical_p=1, epsilon_cutoff=0, eta_cutoff=0, repetition_penalty=1.18, top_k=40, min_length=0, no_repeat_ngram_size=0, num_beams=1, penalty_alpha=0, length_penalty=1, early_stopping=False, seed=-1, add_bos_token=True, truncation_length=2048, ban_eos_token=False, skip_special_tokens=True, stopping_strings=[], streaming=False)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around the text-generation-webui model.
To use, you should have the text-generation-webui installed, a model loaded,
and --api added as a command-line option.
Suggested installation, use one-click installer for your OS:
https://github.com/oobabooga/text-generation-webui#one-click-installers
Parameters below taken from text-generation-webui api example:
https://github.com/oobabooga/text-generation-webui/blob/main/api-examples/api-example.py
Example
from langchain.llms import TextGen
llm = TextGen(model_url="http://localhost:8500")
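A minimal invocation sketch (assumes text-generation-webui is running with --api; the URL and sampling values are illustrative):
from langchain.llms import TextGen
llm = TextGen(
    model_url="http://localhost:8500",
    max_new_tokens=250,
    temperature=0.7,
)
print(llm("Write a haiku about the sea."))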
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model_url (str) β
max_new_tokens (Optional[int]) β
do_sample (bool) β
temperature (Optional[float]) β
top_p (Optional[float]) β
typical_p (Optional[float]) β
epsilon_cutoff (Optional[float]) β
eta_cutoff (Optional[float]) β
repetition_penalty (Optional[float]) β
top_k (Optional[float]) β
min_length (Optional[int]) β
no_repeat_ngram_size (Optional[int]) β
num_beams (Optional[int]) β
penalty_alpha (Optional[float]) β
length_penalty (Optional[float]) β
early_stopping (bool) β
seed (int) β
add_bos_token (bool) β
truncation_length (Optional[int]) β
ban_eos_token (bool) β
skip_special_tokens (bool) β
stopping_strings (Optional[List[str]]) β
streaming (bool) β
Return type
None
attribute add_bos_token: bool = Trueο
Add the bos_token to the beginning of prompts.
Disabling this can make the replies more creative.
attribute ban_eos_token: bool = Falseο
Ban the eos_token. Forces the model to never end the generation prematurely.
attribute do_sample: bool = Trueο
Do sample
attribute early_stopping: bool = Falseο
Early stopping
attribute epsilon_cutoff: Optional[float] = 0ο
Epsilon cutoff
attribute eta_cutoff: Optional[float] = 0ο
ETA cutoff
attribute length_penalty: Optional[float] = 1ο
Length Penalty
attribute max_new_tokens: Optional[int] = 250ο
The maximum number of tokens to generate.
attribute min_length: Optional[int] = 0ο
Minimum generation length in tokens.
attribute model_url: str [Required]ο
The full URL to the textgen webui including http[s]://host:port
attribute no_repeat_ngram_size: Optional[int] = 0ο
If not set to 0, specifies the length of token sets that are completely blocked
from repeating at all. Higher values = blocks larger phrases,
lower values = blocks words or letters from repeating.
Only 0 or high values are a good idea in most cases.
attribute num_beams: Optional[int] = 1ο
Number of beams
attribute penalty_alpha: Optional[float] = 0ο
Penalty Alpha
attribute repetition_penalty: Optional[float] = 1.18ο
Exponential penalty factor for repeating prior tokens. 1 means no penalty,
higher value = less repetition, lower value = more repetition.
attribute seed: int = -1ο
Seed (-1 for random)
attribute skip_special_tokens: bool = Trueο
Skip special tokens. Some specific models need this unset.
attribute stopping_strings: Optional[List[str]] = []ο
A list of strings to stop generation when encountered.
attribute streaming: bool = Falseο
Whether to stream the results, token by token (currently unimplemented).
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: Optional[float] = 1.3ο
Primary factor to control randomness of outputs. 0 = deterministic
(only the most likely token is used). Higher value = more randomness.
attribute top_k: Optional[float] = 40ο
Similar to top_p, but select instead only the top_k most likely tokens.
Higher value = higher range of possible random results.
attribute top_p: Optional[float] = 0.1ο
If not set to 1, select tokens with probabilities adding up to less than this
number. Higher value = higher range of possible random results.
attribute truncation_length: Optional[int] = 2048ο
Truncate the prompt up to this length. The leftmost tokens are removed if
the prompt exceeds this length. Most models require this to be at most 2048.
attribute typical_p: Optional[float] = 1ο
If not set to 1, select only tokens that are at least this much more likely to
appear than random tokens, given the prior text.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.ManifestWrapper(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, llm_kwargs=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around HazyResearchβs Manifest library.
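Example (a hedged sketch, assuming the manifest-ml package is installed and that its Manifest client accepts the client_name argument shown; adjust to your own Manifest setup and credentials):
from manifest import Manifest
from langchain.llms import ManifestWrapper
manifest_client = Manifest(client_name="openai")
llm = ManifestWrapper(client=manifest_client, llm_kwargs={"temperature": 0.0, "max_tokens": 256})
print(llm("Tell me a joke."))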
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
llm_kwargs (Optional[Dict]) β
Return type
None
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Modal(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', model_kwargs=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Modal large language models.
To use, you should have the modal-client python package installed.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import Modal
modal = Modal(endpoint_url="")
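In practice, endpoint_url should point at your deployed Modal web endpoint. A hedged sketch (the URL and keyword arguments are placeholders, not real values):
modal = Modal(
    endpoint_url="https://your-workspace--your-app.modal.run",
    model_kwargs={"temperature": 0.5, "max_length": 256},
)
print(modal("Tell me a joke."))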
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
endpoint_url (str) β
model_kwargs (Dict[str, Any]) β
Return type
None
attribute endpoint_url: str = ''ο
model endpoint to use
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not
explicitly specified.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.MosaicML(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict', inject_instruction_format=False, model_kwargs=None, retry_sleep=1.0, mosaicml_api_token=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around MosaicMLβs LLM inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import MosaicML
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict"
)
mosaic_llm = MosaicML(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
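A hedged usage sketch continuing the example above (the prompt text is illustrative):
response = mosaic_llm("Write three reasons to fine-tune a model on domain-specific data.")
print(response)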
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
endpoint_url (str) β
inject_instruction_format (bool) β
model_kwargs (Optional[dict]) β
retry_sleep (float) β
mosaicml_api_token (Optional[str]) β
Return type
None
attribute endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict'ο
Endpoint URL to use.
attribute inject_instruction_format: bool = Falseο
Whether to inject the instruction format into the prompt.
attribute model_kwargs: Optional[dict] = Noneο
Keyword arguments to pass to the model.
attribute retry_sleep: float = 1.0ο
How long to sleep, in seconds, before retrying if a rate limit is encountered.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.NLPCloud(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='finetuned-gpt-neox-20b', temperature=0.7, min_length=1, max_length=256, length_no_input=True, remove_input=True, remove_end_sequence=True, bad_words=[], top_p=1, top_k=50, repetition_penalty=1.0, length_penalty=1.0, do_sample=True, num_beams=1, early_stopping=False, num_return_sequences=1, nlpcloud_api_key=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around NLPCloud large language models.
To use, you should have the nlpcloud python package installed, and the
environment variable NLPCLOUD_API_KEY set with your API key.
Example
from langchain.llms import NLPCloud
nlpcloud = NLPCloud(model_name="gpt-neox-20b")
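A hedged usage sketch continuing the example above (the prompt text is illustrative):
print(nlpcloud("Explain beam search in one sentence."))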
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model_name (str) β
temperature (float) β
min_length (int) β
max_length (int) β
length_no_input (bool) β
remove_input (bool) β
remove_end_sequence (bool) β
bad_words (List[str]) β
top_p (int) β
top_k (int) β
repetition_penalty (float) β
length_penalty (float) β
do_sample (bool) β
num_beams (int) β
early_stopping (bool) β
num_return_sequences (int) β
nlpcloud_api_key (Optional[str]) β
Return type
None
attribute bad_words: List[str] = []ο
List of tokens not allowed to be generated.
attribute do_sample: bool = Trueο
Whether to use sampling (True) or greedy decoding.
attribute early_stopping: bool = Falseο
Whether to stop beam search at num_beams sentences.
attribute length_no_input: bool = Trueο
Whether min_length and max_length should include the length of the input.
attribute length_penalty: float = 1.0ο
Exponential penalty to the length.
attribute max_length: int = 256ο
The maximum number of tokens to generate in the completion.
attribute min_length: int = 1ο
The minimum number of tokens to generate in the completion.
attribute model_name: str = 'finetuned-gpt-neox-20b'ο
Model name to use.
attribute num_beams: int = 1ο
Number of beams for beam search.
attribute num_return_sequences: int = 1ο
How many completions to generate for each prompt.
attribute remove_end_sequence: bool = Trueο
Whether or not to remove the end sequence token.
attribute remove_input: bool = Trueο
Whether to remove the input text from the API response.
attribute repetition_penalty: float = 1.0ο
Penalizes repeated tokens. 1.0 means no penalty.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute top_k: int = 50ο
The number of highest probability tokens to keep for top-k filtering.
attribute top_p: int = 1ο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.OpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='text-davinci-003', temperature=0.7, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, batch_size=20, request_timeout=None, logit_bias=None, max_retries=6, streaming=False, allowed_special={}, disallowed_special='all', tiktoken_model_name=None)[source]ο
Bases: langchain.llms.openai.BaseOpenAI
Wrapper around OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
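A hedged usage sketch continuing the example above (the prompt and stop sequence are illustrative):
print(openai("Q: What is the capital of France?\nA:", stop=["\n"]))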
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
max_tokens (int) β
top_p (float) β
frequency_penalty (float) β
presence_penalty (float) β
n (int) β
best_of (int) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_organization (Optional[str]) β
openai_proxy (Optional[str]) β
batch_size (int) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
logit_bias (Optional[Dict[str, float]]) β
max_retries (int) β
streaming (bool) β
allowed_special (Union[Literal['all'], typing.AbstractSet[str]]) β
disallowed_special (Union[Literal['all'], typing.Collection[str]]) β
tiktoken_model_name (Optional[str]) β
Return type
None
attribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}ο
Set of special tokens that are allowed.
attribute batch_size: int = 20ο
Batch size to use when passing multiple documents to generate.
attribute best_of: int = 1ο
Generates best_of completions server-side and returns the βbestβ.
attribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'ο
Set of special tokens that are not allowed.
attribute frequency_penalty: float = 0ο
Penalizes repeated tokens according to frequency.
attribute logit_bias: Optional[Dict[str, float]] [Optional]ο
Adjust the probability of specific tokens being generated.
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
attribute max_tokens: int = 256ο
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not explicitly specified.
attribute model_name: str = 'text-davinci-003' (alias 'model')ο
Model name to use.
attribute n: int = 1ο
How many completions to generate for each prompt.
attribute presence_penalty: float = 0ο
Penalizes repeated tokens.
attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout for requests to OpenAI completion API. Default is 600 seconds.
attribute streaming: bool = Falseο
Whether to stream the results or not.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute tiktoken_model_name: Optional[str] = Noneο
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the model name. However, there are some cases
where you may want to use this class with a model name not
supported by tiktoken. This can include when using Azure OpenAI or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
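A hedged sketch of the scenario described above, using a hypothetical OpenAI-compatible provider and model name (all values are placeholders):
llm = OpenAI(
    model_name="my-custom-model",
    openai_api_base="https://llm.example.com/v1",
    tiktoken_model_name="gpt-3.5-turbo",
)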
attribute top_p: float = 1ο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
create_llm_result(choices, prompts, token_usage)ο
Create the LLMResult from the choices and prompts.
Parameters
choices (Any) β
prompts (List[str]) β
token_usage (Dict[str, int]) β
Return type
langchain.schema.LLMResult
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
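A hedged sketch of batching several prompts through generate and reading the returned LLMResult (the prompts are illustrative):
result = openai.generate(["Tell me a joke.", "Tell me a haiku about autumn."])
for prompt_generations in result.generations:
    print(prompt_generations[0].text)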
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_sub_prompts(params, prompts, stop=None)ο
Get the sub-prompts for the LLM call.
Parameters
params (Dict[str, Any]) β
prompts (List[str]) β
stop (Optional[List[str]]) β
Return type
List[List[str]]
get_token_ids(text)ο
Get the token IDs using the tiktoken package.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
max_tokens_for_prompt(prompt)ο
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt (str) β The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Return type
int
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname)ο
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname (str) β The modelname we want to know the context size for.
Returns
The maximum context size
Return type
int
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
prep_streaming_params(stop=None)ο
Prepare the params for streaming.
Parameters
stop (Optional[List[str]]) β
Return type
Dict[str, Any]
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt, stop=None)ο
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt (str) β The prompt to pass into the model.
stop (Optional[List[str]]) β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Return type
Generator
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
property max_context_size: intο
Get max context size for this model.
class langchain.llms.OpenAIChat(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='gpt-3.5-turbo', model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_proxy=None, max_retries=6, prefix_messages=None, streaming=False, allowed_special={}, disallowed_special='all')[source]ο
Bases: langchain.llms.base.BaseLLM
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAIChat
openaichat = OpenAIChat(model_name="gpt-3.5-turbo")
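A hedged sketch showing prefix_messages, which are prepended to every request; it assumes the OpenAI chat message dict format and an illustrative system prompt:
openaichat = OpenAIChat(
    model_name="gpt-3.5-turbo",
    prefix_messages=[{"role": "system", "content": "You are a terse assistant."}],
)
print(openaichat("What is LangChain?"))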
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model_name (str) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_proxy (Optional[str]) β
max_retries (int) β
prefix_messages (List) β
streaming (bool) β
allowed_special (Union[Literal['all'], typing.AbstractSet[str]]) β
disallowed_special (Union[Literal['all'], typing.Collection[str]]) β
Return type
None
attribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}ο
Set of special tokens that are allowed.
attribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'ο
Set of special tokens that are not allowed.
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not explicitly specified.
attribute model_name: str = 'gpt-3.5-turbo'ο
Model name to use.
attribute prefix_messages: List [Optional]ο
Series of messages for Chat input.
attribute streaming: bool = Falseο
Whether to stream the results or not.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)[source]ο
Get the token IDs using the tiktoken package.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.OpenLLM(model_name=None, *, model_id=None, server_url=None, server_type='http', embedded=True, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, llm_kwargs)[source]ο
Bases: langchain.llms.base.LLM
Wrapper for accessing OpenLLM, supporting both in-process model
instances and remote OpenLLM servers.
To use, you should have the openllm library installed:
pip install openllm
Learn more at: https://github.com/bentoml/openllm
Example running an LLM locally, managed by OpenLLM:
from langchain.llms import OpenLLM
llm = OpenLLM(
model_name='flan-t5',
model_id='google/flan-t5-large',
)
llm("What is the difference between a duck and a goose?")
For all available models, you can run "openllm models".
If you have an OpenLLM server running, you can also use it remotely:
from langchain.llms import OpenLLM
llm = OpenLLM(server_url='http://localhost:3000')
llm("What is the difference between a duck and a goose?")
Parameters
model_name (Optional[str]) β
model_id (Optional[str]) β
server_url (Optional[str]) β
server_type (Literal['grpc', 'http']) β
embedded (bool) β
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
llm_kwargs (Dict[str, Any]) β
Return type
None
attribute embedded: bool = Trueο
Initialize this LLM instance in the current process by default. Should
only be set to False when used in conjunction with a BentoML Service.
attribute llm_kwargs: Dict[str, Any] [Required]ο
Keyword arguments to be passed to openllm.LLM.
attribute model_id: Optional[str] = Noneο
Model ID to use. If not provided, will use the default model for the model name.
See "openllm models" for all available model variants.
attribute model_name: Optional[str] = Noneο
Model name to use. See "openllm models" for all available models.
attribute server_type: ServerType = 'http'ο
Optional server type. Either "http" or "grpc".
attribute server_url: Optional[str] = Noneο
Optional server URL that currently runs an LLMServer with "openllm start".
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
property runner: openllm.LLMRunnerο
Get the underlying openllm.LLMRunner instance for integration with BentoML.
Example:
.. code-block:: python
llm = OpenLLM(model_name="flan-t5",
model_id="google/flan-t5-large",
embedded=False,
)
tools = load_tools([βserpapiβ, βllm-mathβ], llm=llm)
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
svc = bentoml.Service(βlangchain-openllmβ, runners=[llm.runner])
@svc.api(input=Text(), output=Text())
def chat(input_text: str):
return agent.run(input_text)
class langchain.llms.OpenLM(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='text-davinci-003', temperature=0.7, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, batch_size=20, request_timeout=None, logit_bias=None, max_retries=6, streaming=False, allowed_special={}, disallowed_special='all', tiktoken_model_name=None)[source]ο
Bases: langchain.llms.openai.BaseOpenAI
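A thin wrapper over the OpenAI-style base class that routes completions through the openlm client library. A minimal usage sketch, assuming the openlm python package is installed and the relevant provider API key (e.g. OPENAI_API_KEY) is set in the environment:
from langchain.llms import OpenLM

# Uses the default model from the signature above; any model supported
# by the openlm client could be substituted here.
llm = OpenLM(model="text-davinci-003")
print(llm("What is the difference between a duck and a goose?"))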
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
max_tokens (int) β
top_p (float) β
frequency_penalty (float) β
presence_penalty (float) β
n (int) β
best_of (int) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_organization (Optional[str]) β
openai_proxy (Optional[str]) β
batch_size (int) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
logit_bias (Optional[Dict[str, float]]) β
max_retries (int) β
streaming (bool) β
allowed_special (Union[Literal['all'], typing.AbstractSet[str]]) β
disallowed_special (Union[Literal['all'], typing.Collection[str]]) β
tiktoken_model_name (Optional[str]) β
Return type
None
attribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}ο
Set of special tokens that are allowed.
attribute batch_size: int = 20ο
Batch size to use when passing multiple documents to generate.
attribute best_of: int = 1ο
Generates best_of completions server-side and returns the "best".
attribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'ο
Set of special tokens that are not allowed.
attribute frequency_penalty: float = 0ο
Penalizes repeated tokens according to frequency.
attribute logit_bias: Optional[Dict[str, float]] [Optional]ο
Adjust the probability of specific tokens being generated.
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
attribute max_tokens: int = 256ο
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not explicitly specified.
attribute model_name: str = 'text-davinci-003' (alias 'model')ο
Model name to use.
attribute n: int = 1ο
How many completions to generate for each prompt.
attribute presence_penalty: float = 0ο
Penalizes repeated tokens.
attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout for requests to OpenAI completion API. Default is 600 seconds.
attribute streaming: bool = Falseο
Whether to stream the results or not.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute tiktoken_model_name: Optional[str] = Noneο
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the model name. However, there are some cases
where you may want to use this class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
attribute top_p: float = 1ο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
create_llm_result(choices, prompts, token_usage)ο
Create the LLMResult from the choices and prompts.
Parameters
choices (Any) β
prompts (List[str]) β
token_usage (Dict[str, int]) β
Return type
langchain.schema.LLMResult
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_sub_prompts(params, prompts, stop=None)ο
Get the sub-prompts for the LLM call.
Parameters
params (Dict[str, Any]) β
prompts (List[str]) β
stop (Optional[List[str]]) β
Return type
List[List[str]]
get_token_ids(text)ο
Get the token IDs using the tiktoken package.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
max_tokens_for_prompt(prompt)ο
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt (str) β The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Return type
int
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname)ο
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname (str) β The modelname we want to know the context size for.
Returns
The maximum context size
Return type
int
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
prep_streaming_params(stop=None)ο
Prepare the params for streaming.
Parameters
stop (Optional[List[str]]) β
Return type
Dict[str, Any]
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt, stop=None)ο
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt (str) β The prompt to pass into the model.
stop (Optional[List[str]]) β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Return type
Generator
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
property max_context_size: intο
Get max context size for this model.
class langchain.llms.Petals(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, tokenizer=None, model_name='bigscience/bloom-petals', temperature=0.7, max_new_tokens=256, top_p=0.9, top_k=None, do_sample=True, max_length=None, model_kwargs=None, huggingface_api_key=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Petals Bloom models.
To use, you should have the petals python package installed, and the
environment variable HUGGINGFACE_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import Petals
petals = Petals()
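A hedged sketch of a call with explicit generation parameters (the parameter values are illustrative, not recommendations):
from langchain.llms import Petals

# Assumes the petals package is installed and HUGGINGFACE_API_KEY is set.
petals = Petals(
    model_name="bigscience/bloom-petals",  # default model from the signature above
    max_new_tokens=128,
    temperature=0.7,
)
print(petals("Tell me a joke."))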
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
tokenizer (Any) β
model_name (str) β
temperature (float) β
max_new_tokens (int) β
top_p (float) β
top_k (Optional[int]) β
do_sample (bool) β
max_length (Optional[int]) β
model_kwargs (Dict[str, Any]) β
huggingface_api_key (Optional[str]) β
Return type
None
attribute client: Any = Noneο
The client to use for the API calls.
attribute do_sample: bool = Trueο
Whether or not to use sampling; use greedy decoding otherwise.
attribute max_length: Optional[int] = Noneο
The maximum length of the sequence to be generated.
attribute max_new_tokens: int = 256ο
The maximum number of new tokens to generate in the completion.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call
not explicitly specified.
attribute model_name: str = 'bigscience/bloom-petals'ο
The model to use.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute tokenizer: Any = Noneο
The tokenizer to use for the API calls.
attribute top_k: Optional[int] = Noneο
The number of highest probability vocabulary tokens
to keep for top-k-filtering.
attribute top_p: float = 0.9ο
The cumulative probability for top-p sampling.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.PipelineAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_key='', pipeline_kwargs=None, pipeline_api_key=None)[source]ο
Bases: langchain.llms.base.LLM, pydantic.main.BaseModel
Wrapper around PipelineAI large language models.
To use, you should have the pipeline-ai python package installed,
and the environment variable PIPELINE_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain import PipelineAI
pipeline = PipelineAI(pipeline_key="")
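A hedged usage sketch; the pipeline key below is a hypothetical placeholder for the id or tag of your deployed pipeline, the pipeline_kwargs key is illustrative, and PIPELINE_API_KEY is assumed to be set in the environment:
from langchain import PipelineAI

pipeline = PipelineAI(
    pipeline_key="<your-pipeline-id-or-tag>",  # hypothetical placeholder
    pipeline_kwargs={"max_length": 100},       # illustrative pipeline parameter
)
print(pipeline("Describe LangChain in one sentence."))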
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_key (str) β
pipeline_kwargs (Dict[str, Any]) β
pipeline_api_key (Optional[str]) β
Return type
None
attribute pipeline_key: str = ''ο
The ID or tag of the target pipeline.
attribute pipeline_kwargs: Dict[str, Any] [Optional]ο
Holds any pipeline parameters valid for create call not
explicitly specified.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.PredictionGuard(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='MPT-7B-Instruct', output=None, max_tokens=256, temperature=0.75, token=None, stop=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Prediction Guard large language models.
To use, you should have the predictionguard python package installed, and the
environment variable PREDICTIONGUARD_TOKEN set with your access token, or pass
it as a named parameter to the constructor. To use Prediction Guard's API along
with OpenAI models, set the environment variable OPENAI_API_KEY with your
OpenAI API key as well.
Example
pgllm = PredictionGuard(model="MPT-7B-Instruct",
token="my-access-token",
output={
"type": "boolean"
})
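A hedged sketch of invoking the wrapper configured above; the prompt text is illustrative, and the access token can equally come from the PREDICTIONGUARD_TOKEN environment variable:
# Returns the completion as a string, shaped by the configured
# boolean output type.
response = pgllm(
    "Is the following review positive? "
    "'The food was absolutely wonderful.'"
)
print(response)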
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (Optional[str]) β
output (Optional[Dict[str, Any]]) β
max_tokens (int) β
temperature (float) β
token (Optional[str]) β
stop (Optional[List[str]]) β
Return type
None
attribute max_tokens: int = 256ο
Denotes the number of tokens to predict per generation.
attribute model: Optional[str] = 'MPT-7B-Instruct'ο
Model name to use.
attribute output: Optional[Dict[str, Any]] = Noneο
The output type or structure for controlling the LLM output.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.75ο
A non-negative float that tunes the degree of randomness in generation.
attribute token: Optional[str] = Noneο
Your Prediction Guard access token.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.PromptLayerOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='text-davinci-003', temperature=0.7, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, batch_size=20, request_timeout=None, logit_bias=None, max_retries=6, streaming=False, allowed_special={}, disallowed_special='all', tiktoken_model_name=None, pl_tags=None, return_pl_id=False)[source]ο
Bases: langchain.llms.openai.OpenAI
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python
packages installed, and the environment variables OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your OpenAI API key and
PromptLayer API key respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerOpenAI LLM adds two optional parameters.
Parameters
pl_tags (Optional[List[str]]) β List of strings to tag the request with.
return_pl_id (Optional[bool]) β If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
max_tokens (int) β
top_p (float) β
frequency_penalty (float) β
presence_penalty (float) β
n (int) β
best_of (int) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_organization (Optional[str]) β
openai_proxy (Optional[str]) β
batch_size (int) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
logit_bias (Optional[Dict[str, float]]) β
max_retries (int) β
streaming (bool) β
allowed_special (Union[Literal['all'], typing.AbstractSet[str]]) β
disallowed_special (Union[Literal['all'], typing.Collection[str]]) β
tiktoken_model_name (Optional[str]) β
Return type
None
Example
from langchain.llms import PromptLayerOpenAI
openai = PromptLayerOpenAI(model_name="text-davinci-003")
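A sketch showing the two PromptLayer-specific parameters described above; the tag values are illustrative:
from langchain.llms import PromptLayerOpenAI

openai = PromptLayerOpenAI(
    model_name="text-davinci-003",
    pl_tags=["langchain", "docs-example"],  # illustrative tags
    return_pl_id=True,
)
result = openai.generate(["Tell me a joke."])
# With return_pl_id=True, each Generation in the result carries the
# PromptLayer request ID in its generation_info field.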
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
create_llm_result(choices, prompts, token_usage)ο
Create the LLMResult from the choices and prompts.
Parameters
choices (Any) β
prompts (List[str]) β
token_usage (Dict[str, int]) β
Return type
langchain.schema.LLMResult
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_sub_prompts(params, prompts, stop=None)ο
Get the sub-prompts for the LLM call.
Parameters
params (Dict[str, Any]) β
prompts (List[str]) β
stop (Optional[List[str]]) β
Return type
List[List[str]]
get_token_ids(text)ο
Get the token IDs using the tiktoken package.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
max_tokens_for_prompt(prompt)ο
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt (str) β The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Return type
int
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname)ο
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname (str) β The modelname we want to know the context size for.
Returns
The maximum context size
Return type
int
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
prep_streaming_params(stop=None)ο
Prepare the params for streaming.
Parameters
stop (Optional[List[str]]) β
Return type
Dict[str, Any]
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt, stop=None)ο
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt (str) β The prompt to pass into the model.
stop (Optional[List[str]]) β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Return type
Generator
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
property max_context_size: intο
Get max context size for this model.
class langchain.llms.PromptLayerOpenAIChat(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='gpt-3.5-turbo', model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_proxy=None, max_retries=6, prefix_messages=None, streaming=False, allowed_special={}, disallowed_special='all', pl_tags=None, return_pl_id=False)[source]ο
Bases: langchain.llms.openai.OpenAIChat
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python
packages installed, and the environment variables OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your OpenAI API key and
PromptLayer API key respectively.
All parameters that can be passed to the OpenAIChat LLM can also
be passed here. The PromptLayerOpenAIChat adds two optional parameters.
Parameters
pl_tags (Optional[List[str]]) β List of strings to tag the request with.
return_pl_id (Optional[bool]) β If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model_name (str) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_proxy (Optional[str]) β
max_retries (int) β
prefix_messages (List) β
streaming (bool) β
allowed_special (Union[Literal['all'], typing.AbstractSet[str]]) β
disallowed_special (Union[Literal['all'], typing.Collection[str]]) β
Return type
None
Example
from langchain.llms import PromptLayerOpenAIChat
openaichat = PromptLayerOpenAIChat(model_name="gpt-3.5-turbo")
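A sketch showing the two PromptLayer-specific parameters described above; the tag values are illustrative:
from langchain.llms import PromptLayerOpenAIChat

openaichat = PromptLayerOpenAIChat(
    model_name="gpt-3.5-turbo",
    pl_tags=["langchain", "chat-example"],  # illustrative tags
    return_pl_id=True,
)
result = openaichat.generate(["Tell me a joke."])
# With return_pl_id=True, each Generation in the result carries the
# PromptLayer request ID in its generation_info field.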
attribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}ο
Set of special tokens that are allowed.
attribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'ο
Set of special tokens that are not allowed.
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not explicitly specified.
attribute model_name: str = 'gpt-3.5-turbo'ο
Model name to use.
attribute prefix_messages: List [Optional]ο
Series of messages for Chat input.
attribute streaming: bool = Falseο
Whether to stream the results or not.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs using the tiktoken package.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
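A saved configuration can later be restored with the LLM loader; a minimal sketch, assuming the LLM type is supported by langchain.llms.loading.load_llm:
from langchain.llms.loading import load_llm
restored_llm = load_llm("path/llm.yaml")  # rebuild an LLM of the saved type from the file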
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.RWKV(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, tokens_path, strategy='cpu fp32', rwkv_verbose=True, temperature=1.0, top_p=0.5, penalty_alpha_frequency=0.4, penalty_alpha_presence=0.4, CHUNK_LEN=256, max_tokens_per_generation=256, client=None, tokenizer=None, pipeline=None, model_tokens=None, model_state=None)[source]ο
Bases: langchain.llms.base.LLM, pydantic.main.BaseModel
Wrapper around RWKV language models.
To use, you should have the rwkv python package installed, the
pre-trained model file, and the modelβs config information.
Example
from langchain.llms import RWKV
model = RWKV(model="./models/rwkv-3b-fp16.bin", strategy="cpu fp32")
# Simplest invocation
response = model("Once upon a time, ")
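Since tokens_path is a required field, a fuller construction might look like the following (the file paths are placeholders):
from langchain.llms import RWKV
model = RWKV(
    model="./models/rwkv-3b-fp16.bin",          # path to the pre-trained weights (placeholder)
    tokens_path="./models/20B_tokenizer.json",  # path to the tokenizer file (placeholder)
    strategy="cpu fp32",
    max_tokens_per_generation=128,
)
response = model("Once upon a time, ")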
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model (str) β
tokens_path (str) β
strategy (str) β
rwkv_verbose (bool) β
temperature (float) β
top_p (float) β
penalty_alpha_frequency (float) β
penalty_alpha_presence (float) β
CHUNK_LEN (int) β
max_tokens_per_generation (int) β
client (Any) β
tokenizer (Any) β
pipeline (Any) β
model_tokens (Any) β
model_state (Any) β
Return type
None
attribute CHUNK_LEN: int = 256ο
Batch size for prompt processing.
attribute max_tokens_per_generation: int = 256ο
Maximum number of tokens to generate.
attribute model: str [Required]ο
Path to the pre-trained RWKV model file.
attribute penalty_alpha_frequency: float = 0.4ο
Positive values penalize new tokens based on their existing frequency
in the text so far, decreasing the modelβs likelihood to repeat the same
line verbatim.
attribute penalty_alpha_presence: float = 0.4ο
Positive values penalize new tokens based on whether they appear
in the text so far, increasing the modelβs likelihood to talk about
new topics.
attribute rwkv_verbose: bool = Trueο
Print debug information.
attribute strategy: str = 'cpu fp32'ο
The strategy to use for running the model, e.g. 'cpu fp32'.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 1.0ο
The temperature to use for sampling.
attribute tokens_path: str [Required]ο
Path to the RWKV tokens file.
attribute top_p: float = 0.5ο
The top-p value to use for sampling.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Replicate(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, input=None, model_kwargs=None, replicate_api_token=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Replicate models.
To use, you should have the replicate python package installed,
and the environment variable REPLICATE_API_TOKEN set with your API token.
You can find your token here: https://replicate.com/account
The model param is required, but any other model parameters can also
be passed in with the format input={model_param: value, ...}
Example
from langchain.llms import Replicate
replicate = Replicate(
    model="stability-ai/stable-diffusion:27b93a2413e7f36cd83da926f3656280b2931564ff050bf9575f1fdf9bcd7478",
    input={"image_dimensions": "512x512"})
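The same pattern works for text models; the identifier and the max_length parameter below are placeholders and depend on the specific model deployed on Replicate:
from langchain.llms import Replicate
llm = Replicate(
    model="some-owner/some-text-model:version-hash",  # placeholder identifier
    input={"max_length": 100},                        # assumed model parameter
)
print(llm("What is the capital of France?"))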
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model (str) β
input (Dict[str, Any]) β
model_kwargs (Dict[str, Any]) β
replicate_api_token (Optional[str]) β
Return type
None
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.SagemakerEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, endpoint_name='', region_name='', credentials_profile_name=None, content_handler, model_kwargs=None, endpoint_kwargs=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
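A minimal sketch of wiring up a content handler, assuming a JSON-in/JSON-out endpoint that returns a list of objects with a generated_text field; the exact payload format depends on the deployed model, and the endpoint name, region, and profile below are placeholders:
import json
from typing import Dict
from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        # Serialize the prompt and model kwargs into the payload the endpoint expects.
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Parse the endpoint response back into plain text.
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]

llm = SagemakerEndpoint(
    endpoint_name="my-endpoint-name",        # placeholder endpoint name
    region_name="us-west-2",
    credentials_profile_name="default",
    content_handler=ContentHandler(),
    model_kwargs={"temperature": 0.7},
)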
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
endpoint_name (str) β
region_name (str) β
credentials_profile_name (Optional[str]) β
content_handler (langchain.llms.sagemaker_endpoint.LLMContentHandler) β
model_kwargs (Optional[Dict]) β
endpoint_kwargs (Optional[Dict]) β
Return type
None
attribute content_handler: langchain.llms.sagemaker_endpoint.LLMContentHandler [Required]ο
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
attribute credentials_profile_name: Optional[str] = Noneο
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
attribute endpoint_kwargs: Optional[Dict] = Noneο
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more info:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
attribute endpoint_name: str = ''ο
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
attribute model_kwargs: Optional[Dict] = Noneο
Key word arguments to pass to the model.
attribute region_name: str = ''ο
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.SelfHostedHuggingFaceLLM(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _generate_text>, hardware=None, model_load_fn=<function _load_transformer>, load_fn_kwargs=None, model_reqs=['./', 'transformers', 'torch'], model_id='gpt2', task='text-generation', device=0, model_kwargs=None)[source]ο
Bases: langchain.llms.self_hosted.SelfHostedPipeline
Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Only supports text-generation, text2text-generation and summarization for now.
Example using from_model_id:
from langchain.llms import SelfHostedHuggingFaceLLM
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceLLM(
model_id="google/flan-t5-large", task="text2text-generation",
hardware=gpu
)
Example passing a function that generates a pipeline (because the pipeline is not serializable):
from langchain.llms import SelfHostedHuggingFaceLLM
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
def get_pipeline():
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer
)
return pipe
hf = SelfHostedHuggingFaceLLM(
model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
model_id (str) β
task (str) β
device (int) β
model_kwargs (Optional[dict]) β
Return type
None
attribute device: int = 0ο
Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc.
attribute hardware: Any = Noneο
Remote hardware to send the inference function to.
attribute inference_fn: Callable = <function _generate_text>ο
Inference function to send to the remote hardware.
attribute load_fn_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model load function.
attribute model_id: str = 'gpt2'ο
Hugging Face model_id to load the model.
attribute model_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model.
attribute model_load_fn: Callable = <function _load_transformer>ο
Function to load the model remotely on the server.
attribute model_reqs: List[str] = ['./', 'transformers', 'torch']ο
Requirements to install on hardware to inference the model.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute task: str = 'text-generation'ο
Hugging Face task (βtext-generationβ, βtext2text-generationβ or
βsummarizationβ).
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_pipeline(pipeline, hardware, model_reqs=None, device=0, **kwargs)ο
Init the SelfHostedPipeline from a pipeline object or string.
Parameters
pipeline (Any) β
hardware (Any) β
model_reqs (Optional[List[str]]) β
device (int) β
kwargs (Any) β
Return type
langchain.llms.base.LLM
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.SelfHostedPipeline(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _generate_text>, hardware=None, model_load_fn, load_fn_kwargs=None, model_reqs=['./', 'torch'])[source]ο
Bases: langchain.llms.base.LLM
Run model inference on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example for custom pipeline and inference functions:
from langchain.llms import SelfHostedPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
def load_pipeline():
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
return pipeline(
"text-generation", model=model, tokenizer=tokenizer,
max_new_tokens=10
)
def inference_fn(pipeline, prompt, stop = None):
return pipeline(prompt)[0]["generated_text"]
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
llm = SelfHostedPipeline(
model_load_fn=load_pipeline,
hardware=gpu,
model_reqs=model_reqs, inference_fn=inference_fn
)
Example for <2GB model (can be serialized and sent directly to the server):
from langchain.llms import SelfHostedPipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
my_model = ...
llm = SelfHostedPipeline.from_pipeline(
pipeline=my_model,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing model path for larger models:
from langchain.llms import SelfHostedPipeline
import runhouse as rh
import pickle
from transformers import pipeline
generator = pipeline(model="gpt2")
rh.blob(pickle.dumps(generator), path="models/pipeline.pkl"
).save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
Return type
None
attribute hardware: Any = Noneο
Remote hardware to send the inference function to.
attribute inference_fn: Callable = <function _generate_text>ο
Inference function to send to the remote hardware.
attribute load_fn_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model load function.
attribute model_load_fn: Callable [Required]ο
Function to load the model remotely on the server.
attribute model_reqs: List[str] = ['./', 'torch']ο
Requirements to install on hardware to inference the model.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_pipeline(pipeline, hardware, model_reqs=None, device=0, **kwargs)[source]ο
Init the SelfHostedPipeline from a pipeline object or string.
Parameters
pipeline (Any) β
hardware (Any) β
model_reqs (Optional[List[str]]) β
device (int) β
kwargs (Any) β
Return type
langchain.llms.base.LLM
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.StochasticAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, api_url='', model_kwargs=None, stochasticai_api_key=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around StochasticAI large language models.
To use, you should have the environment variable STOCHASTICAI_API_KEY
set with your API key.
Example
from langchain.llms import StochasticAI
stochasticai = StochasticAI(api_url="")
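A slightly fuller sketch; the URL below is a placeholder for the deployment URL shown in the StochasticAI dashboard, the max_length parameter is an assumed model kwarg, and the API key is taken from STOCHASTICAI_API_KEY:
from langchain.llms import StochasticAI
llm = StochasticAI(
    api_url="https://example.stochastic.ai/your-deployment",  # placeholder deployment URL
    model_kwargs={"max_length": 100},                          # assumed model parameter
)
print(llm("Tell me a joke."))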
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
api_url (str) β
model_kwargs (Dict[str, Any]) β
stochasticai_api_key (Optional[str]) β
Return type
None
attribute api_url: str = ''ο
The URL of the deployed StochasticAI model API.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not
explicitly specified.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.VertexAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='text-bison', temperature=0.0, max_output_tokens=128, top_p=0.95, top_k=40, stop=None, project=None, location='us-central1', credentials=None, tuned_model_name=None)[source]ο
Bases: langchain.llms.vertexai._VertexAICommon, langchain.llms.base.LLM
Wrapper around Google Vertex AI large language models.
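A minimal usage sketch, assuming the google-cloud-aiplatform package is installed and credentials are available from the environment (or passed via project/credentials):
from langchain.llms import VertexAI
llm = VertexAI(
    model_name="text-bison",
    temperature=0.2,
    max_output_tokens=256,
    # project="my-gcp-project",  # optional; otherwise inferred from the environment
)
print(llm("Write a haiku about mountains."))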
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (_LanguageModel) β
model_name (str) β
temperature (float) β
max_output_tokens (int) β
top_p (float) β
top_k (int) β
stop (Optional[List[str]]) β
project (Optional[str]) β
location (str) β
credentials (Any) β
tuned_model_name (Optional[str]) β
Return type
None
attribute credentials: Any = Noneο
The default custom credentials (google.auth.credentials.Credentials) to use when making API calls. If not provided, credentials will be ascertained from the environment.
attribute location: str = 'us-central1'ο
The default location to use when making API calls.
attribute max_output_tokens: int = 128ο
Token limit determines the maximum amount of text output from one prompt.
attribute model_name: str = 'text-bison'ο
The name of the Vertex AI large language model.
attribute project: Optional[str] = Noneο
The default GCP project to use when making Vertex API calls.
attribute stop: Optional[List[str]] = Noneο
Optional list of stop words to use when generating.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.0ο
Sampling temperature, it controls the degree of randomness in token selection.
attribute top_k: int = 40ο
How the model selects tokens for output: the next token is selected from among the top_k most probable tokens.
attribute top_p: float = 0.95ο
Tokens are selected from most probable to least until the sum of their probabilities equals the top_p value.
attribute tuned_model_name: Optional[str] = Noneο
The name of a tuned model. If provided, model_name is ignored.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Writer(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, writer_org_id=None, model_id='palmyra-instruct', min_tokens=None, max_tokens=None, temperature=None, top_p=None, stop=None, presence_penalty=None, repetition_penalty=None, best_of=None, logprobs=False, n=None, writer_api_key=None, base_url=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Writer large language models.
To use, you should have the environment variable WRITER_API_KEY and
WRITER_ORG_ID set with your API key and organization ID respectively.
Example
from langchain import Writer
writer = Writer(model_id="palmyra-base")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
writer_org_id (Optional[str]) β
model_id (str) β
min_tokens (Optional[int]) β
max_tokens (Optional[int]) β
temperature (Optional[float]) β
top_p (Optional[float]) β
stop (Optional[List[str]]) β
presence_penalty (Optional[float]) β
repetition_penalty (Optional[float]) β
best_of (Optional[int]) β
logprobs (bool) β
n (Optional[int]) β
writer_api_key (Optional[str]) β
base_url (Optional[str]) β
Return type
None
attribute base_url: Optional[str] = Noneο
Base url to use, if None decides based on model name.
attribute best_of: Optional[int] = Noneο
Generates this many completions server-side and returns the "best".
attribute logprobs: bool = Falseο
Whether to return log probabilities.
attribute max_tokens: Optional[int] = Noneο
Maximum number of tokens to generate.
attribute min_tokens: Optional[int] = Noneο
Minimum number of tokens to generate.
attribute model_id: str = 'palmyra-instruct'ο
Model name to use.
attribute n: Optional[int] = Noneο
How many completions to generate.
attribute presence_penalty: Optional[float] = Noneο
Penalizes repeated tokens regardless of frequency.
attribute repetition_penalty: Optional[float] = Noneο
Penalizes repeated tokens according to frequency.
attribute stop: Optional[List[str]] = Noneο
Sequences when completion generation will stop.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: Optional[float] = Noneο
What sampling temperature to use.
attribute top_p: Optional[float] = Noneο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
attribute writer_api_key: Optional[str] = Noneο
Writer API key.
attribute writer_org_id: Optional[str] = Noneο
Writer organization ID.
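Illustrative sketch of a generate() call assembled from the methods documented below; the prompt and sampling values are hypothetical, and WRITER_API_KEY / WRITER_ORG_ID are assumed to be set in the environment.
.. code-block:: python
from langchain.llms import Writer

llm = Writer(model_id="palmyra-instruct", max_tokens=64, temperature=0.7)
result = llm.generate(["Write a tagline for a coffee shop."], stop=["\n"])
print(result.generations[0][0].text)  # first generation for the first prompt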
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
Base classesο
Common schema objects.
langchain.schema.get_buffer_string(messages, human_prefix='Human', ai_prefix='AI')[source]ο
Get buffer string of messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
human_prefix (str) β
ai_prefix (str) β
Return type
str
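Illustrative sketch of how this helper renders a message list; the prefixes come from the human_prefix and ai_prefix arguments.
.. code-block:: python
from langchain.schema import AIMessage, HumanMessage, get_buffer_string

messages = [
    HumanMessage(content="Hi there!"),
    AIMessage(content="Hello! How can I help?"),
]
print(get_buffer_string(messages))
# Human: Hi there!
# AI: Hello! How can I help?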
class langchain.schema.AgentAction(tool, tool_input, log)[source]ο
Bases: object
Agent's action to take.
Parameters
tool (str) β
tool_input (Union[str, dict]) β
log (str) β
Return type
None
class langchain.schema.AgentFinish(return_values, log)[source]ο
Bases: NamedTuple
Agent's return value.
Parameters
return_values (dict) β
log (str) β
return_values: dictο
Alias for field number 0
log: strο
Alias for field number 1
count(value, /)ο
Return number of occurrences of value.
index(value, start=0, stop=9223372036854775807, /)ο
Return first index of value.
Raises ValueError if the value is not present.
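Illustrative sketch of how the two agent objects above are typically constructed; the tool name and values are hypothetical.
.. code-block:: python
from langchain.schema import AgentAction, AgentFinish

# Intermediate step: call a hypothetical "search" tool with some input.
action = AgentAction(
    tool="search",
    tool_input="weather in Paris",
    log="Thought: I should look this up.",
)

# Terminal step: the agent returns its final answer.
finish = AgentFinish(
    return_values={"output": "It is sunny in Paris."},
    log="Final answer reached.",
)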
class langchain.schema.Generation(*, text, generation_info=None)[source]ο
Bases: langchain.load.serializable.Serializable
Output of a single generation.
Parameters
text (str) β
generation_info (Optional[Dict[str, Any]]) β
Return type
None
attribute generation_info: Optional[Dict[str, Any]] = Noneο
Raw generation info response from the provider
attribute text: str [Required]ο
Generated text output.
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
This class is LangChain serializable.
class langchain.schema.BaseMessage(*, content, additional_kwargs=None)[source]ο
Bases: langchain.load.serializable.Serializable
Message object.
Parameters
content (str) β
additional_kwargs (dict) β
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
This class is LangChain serializable.
abstract property type: strο
Type of the message, used for serialization.
class langchain.schema.HumanMessage(*, content, additional_kwargs=None, example=False)[source]ο
Bases: langchain.schema.BaseMessage
Type of message that is spoken by the human.
Parameters
content (str) β
additional_kwargs (dict) β
example (bool) β
Return type
None
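Brief illustrative sketch of the message classes in use, building a small transcript (contents are hypothetical):
.. code-block:: python
from langchain.schema import AIMessage, HumanMessage, SystemMessage

conversation = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is the capital of France?"),
    AIMessage(content="The capital of France is Paris."),
]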
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
This class is LangChain serializable.
property type: strο
Type of the message, used for serialization.
class langchain.schema.AIMessage(*, content, additional_kwargs=None, example=False)[source]ο
Bases: langchain.schema.BaseMessage
Type of message that is spoken by the AI.
Parameters
content (str) β
additional_kwargs (dict) β
example (bool) β
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
This class is LangChain serializable.
property type: strο
Type of the message, used for serialization.
class langchain.schema.SystemMessage(*, content, additional_kwargs=None)[source]ο
Bases: langchain.schema.BaseMessage
Type of message that is a system message.
Parameters
content (str) β
additional_kwargs (dict) β
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
This class is LangChain serializable.
property type: strο
Type of the message, used for serialization.
class langchain.schema.FunctionMessage(*, content, additional_kwargs=None, name)[source]ο
Bases: langchain.schema.BaseMessage
Parameters
content (str) β
additional_kwargs (dict) β
name (str) β
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
This class is LangChain serializable.
property type: strο
Type of the message, used for serialization.
class langchain.schema.ChatMessage(*, content, additional_kwargs=None, role)[source]ο
Bases: langchain.schema.BaseMessage
Type of message with arbitrary speaker.
Parameters
content (str) β
additional_kwargs (dict) β
role (str) β
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
This class is LangChain serializable.
property type: strο
Type of the message, used for serialization.
langchain.schema.messages_to_dict(messages)[source]ο
Convert messages to dict.
Parameters
messages (List[langchain.schema.BaseMessage]) β List of messages to convert.
Returns
List of dicts.
Return type
List[dict]
langchain.schema.messages_from_dict(messages)[source]ο
Convert messages from dict.
Parameters
messages (List[dict]) β List of messages (dicts) to convert.
Returns
List of messages (BaseMessages).
Return type
List[langchain.schema.BaseMessage]
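Illustrative round-trip sketch of the two helpers above; the exact dict layout is internal, only the round trip is shown.
.. code-block:: python
from langchain.schema import HumanMessage, messages_from_dict, messages_to_dict

original = [HumanMessage(content="Hello!")]
as_dicts = messages_to_dict(original)     # plain dicts, e.g. for JSON storage
restored = messages_from_dict(as_dicts)   # back to BaseMessage subclasses
assert restored[0].content == "Hello!"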
class langchain.schema.ChatGeneration(*, text='', generation_info=None, message)[source]ο
Bases: langchain.schema.Generation
Output of a single generation.
Parameters
text (str) β
generation_info (Optional[Dict[str, Any]]) β
message (langchain.schema.BaseMessage) β
Return type
None
attribute generation_info: Optional[Dict[str, Any]] = Noneο
Raw generation info response from the provider
attribute text: str = ''ο
Generated text output.
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
This class is LangChain serializable.
class langchain.schema.RunInfo(*, run_id)[source]ο
Bases: pydantic.main.BaseModel
Class that contains all relevant metadata for a Run.
Parameters
run_id (uuid.UUID) β
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
class langchain.schema.ChatResult(*, generations, llm_output=None)[source]ο
Bases: pydantic.main.BaseModel
Class that contains all relevant information for a Chat Result.
Parameters
generations (List[langchain.schema.ChatGeneration]) β
llm_output (Optional[dict]) β
Return type
None
attribute generations: List[langchain.schema.ChatGeneration] [Required]ο
List of the things generated.
attribute llm_output: Optional[dict] = Noneο
For arbitrary LLM provider specific output.
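Illustrative sketch of how a chat model implementation might assemble a ChatResult; the llm_output contents are provider specific and hypothetical here.
.. code-block:: python
from langchain.schema import AIMessage, ChatGeneration, ChatResult

result = ChatResult(
    generations=[ChatGeneration(message=AIMessage(content="Hello!"))],
    llm_output={"model_name": "example-chat-model"},  # hypothetical provider output
)
print(result.generations[0].message.content)  # "Hello!"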
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
class langchain.schema.LLMResult(*, generations, llm_output=None, run=None)[source]ο
Bases: pydantic.main.BaseModel
Class that contains all relevant information for an LLM Result.
Parameters
generations (List[List[langchain.schema.Generation]]) β
llm_output (Optional[dict]) β
run (Optional[List[langchain.schema.RunInfo]]) β
Return type
None
attribute generations: List[List[langchain.schema.Generation]] [Required]ο
List of the things generated. This is List[List[]] because
each input could have multiple generations.
attribute llm_output: Optional[dict] = Noneο
For arbitrary LLM provider specific output.
attribute run: Optional[List[langchain.schema.RunInfo]] = Noneο
Run metadata.
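Illustrative sketch of the nested layout described above: one inner list of Generation objects per input prompt (values are hypothetical).
.. code-block:: python
from langchain.schema import Generation, LLMResult

result = LLMResult(
    generations=[[Generation(text="Paris")], [Generation(text="Berlin")]],
    llm_output={"token_usage": {"total_tokens": 12}},  # provider specific, illustrative
)
print(result.generations[1][0].text)  # "Berlin"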
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
flatten()[source]ο
Flatten generations into a single list.
Return type
List[langchain.schema.LLMResult]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
class langchain.schema.PromptValue[source]ο
Bases: langchain.load.serializable.Serializable, abc.ABC
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
abstract to_messages()[source]ο
Return prompt as messages.
Return type
List[langchain.schema.BaseMessage]
abstract to_string()[source]ο
Return prompt as string.
Return type
str
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
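Since PromptValue is abstract, a concrete subclass only has to provide to_string() and to_messages(). A minimal hypothetical subclass is sketched below; prompt templates normally construct PromptValue objects for you.
.. code-block:: python
from typing import List

from langchain.schema import BaseMessage, HumanMessage, PromptValue


class SimplePromptValue(PromptValue):
    text: str

    def to_string(self) -> str:
        return self.text

    def to_messages(self) -> List[BaseMessage]:
        return [HumanMessage(content=self.text)]


value = SimplePromptValue(text="What is 2 + 2?")
value.to_string()  # "What is 2 + 2?"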
class langchain.schema.BaseMemory[source]ο
Bases: langchain.load.serializable.Serializable, abc.ABC
Base interface for memory in chains.
Return type
None
abstract clear()[source]ο
Clear memory contents.
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
abstract load_memory_variables(inputs)[source]ο
Return key-value pairs given the text input to the chain.
If None, return all memories
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
abstract save_context(inputs, outputs)[source]ο
Save the context of this model run to memory.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
abstract property memory_variables: List[str]ο
Input keys this memory class will load dynamically.
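A minimal illustrative implementation of this interface is sketched below; the class name, the "scratchpad" variable, and the string buffer are placeholders invented for the example, not library objects.
from typing import Any, Dict, List
from langchain.schema import BaseMemory

class ScratchpadMemory(BaseMemory):
    """Toy memory that keeps a single running string buffer (sketch only)."""

    buffer: str = ""  # BaseMemory is a pydantic model, so this declares a field

    @property
    def memory_variables(self) -> List[str]:
        # The chain will receive this key from load_memory_variables.
        return ["scratchpad"]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        return {"scratchpad": self.buffer}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        # Append the latest exchange; a real memory would be more selective.
        self.buffer += f"\nin={inputs} out={outputs}"

    def clear(self) -> None:
        self.buffer = ""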
class langchain.schema.BaseChatMessageHistory[source]ο
Bases: abc.ABC
Base interface for chat message history.
See ChatMessageHistory for a default implementation.
add_user_message(message)[source]ο
Add a user message to the store
Parameters
message (str) β
Return type
None
add_ai_message(message)[source]ο
Add an AI message to the store
Parameters
message (str) β
Return type
None
add_message(message)[source]ο
Add a self-created message to the store
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
abstract clear()[source]ο
Remove all messages from the store
Return type
None
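A minimal, non-persistent implementation is sketched below; it only overrides add_message and clear, and stores messages in a plain Python list (the class name is a placeholder).
from typing import List
from langchain.schema import BaseChatMessageHistory, BaseMessage

class InMemoryHistory(BaseChatMessageHistory):
    """Keeps messages in a plain Python list (sketch only, no persistence)."""

    def __init__(self) -> None:
        self.messages: List[BaseMessage] = []

    def add_message(self, message: BaseMessage) -> None:
        self.messages.append(message)

    def clear(self) -> None:
        self.messages = []

history = InMemoryHistory()
history.add_user_message("hi")          # expected to be routed through add_message
history.add_ai_message("hello there")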
class langchain.schema.Document(*, page_content, metadata=None)[source]ο
Bases: langchain.load.serializable.Serializable
Interface for interacting with a document.
Parameters
page_content (str) β
metadata (dict) β
Return type
None
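For example, a document can be built directly from a string plus arbitrary metadata (the metadata keys below are purely illustrative):
from langchain.schema import Document

doc = Document(
    page_content="LangChain uses Document objects to pass text between components.",
    metadata={"source": "example.txt", "page": 1},
)
print(doc.page_content, doc.metadata)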
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.schema.BaseRetriever[source]ο
Bases: abc.ABC
Base interface for retrievers.
abstract get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
abstract async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
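A custom retriever only needs to implement the two methods above. The keyword-matching logic below is a deliberately naive sketch, not a recommended retrieval strategy, and the class name is a placeholder.
from typing import List
from langchain.schema import BaseRetriever, Document

class KeywordRetriever(BaseRetriever):
    """Returns documents whose text contains the query string (naive sketch)."""

    def __init__(self, docs: List[Document]) -> None:
        self.docs = docs

    def get_relevant_documents(self, query: str) -> List[Document]:
        return [d for d in self.docs if query.lower() in d.page_content.lower()]

    async def aget_relevant_documents(self, query: str) -> List[Document]:
        # No real async work to do here; reuse the synchronous implementation.
        return self.get_relevant_documents(query)

retriever = KeywordRetriever([Document(page_content="LangChain ships many retrievers.")])
retriever.get_relevant_documents("retrievers")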
langchain.schema.Memoryο
alias of langchain.schema.BaseMemory
class langchain.schema.BaseLLMOutputParser[source]ο
Bases: langchain.load.serializable.Serializable, abc.ABC, Generic[langchain.schema.T]
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
abstract parse_result(result)[source]ο
Parse LLM Result.
Parameters
result (List[langchain.schema.Generation]) β
Return type
langchain.schema.T
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.schema.BaseOutputParser[source]ο
Bases: langchain.schema.BaseLLMOutputParser, abc.ABC, Generic[langchain.schema.T]
Class to parse the output of an LLM call.
Output parsers help structure language model responses.
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)[source]ο
Return dictionary representation of output parser.
Parameters
kwargs (Any) β
Return type
Dict
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
abstract parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed to be the output of a language model)
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
langchain.schema.T
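As a sketch of a concrete parser (the class name is hypothetical, not a library class), a completion could be split on commas into a list of strings:
from typing import List
from langchain.schema import BaseOutputParser

class SimpleListOutputParser(BaseOutputParser[List[str]]):
    """Parses a comma-separated completion into a list of strings (sketch only)."""

    def parse(self, text: str) -> List[str]:
        return [part.strip() for part in text.split(",") if part.strip()]

    def get_format_instructions(self) -> str:
        return "Return the answer as a comma-separated list."

SimpleListOutputParser().parse("red, green, blue")  # -> ['red', 'green', 'blue']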
parse_result(result)[source]ο
Parse LLM Result.
Parameters
result (List[langchain.schema.Generation]) β
Return type
langchain.schema.T
parse_with_prompt(completion, prompt)[source]ο
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion (str) β output of language model
prompt (langchain.schema.PromptValue) β prompt value
Returns
structured output
Return type
Any
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.schema.NoOpOutputParser[source]ο
Bases: langchain.schema.BaseOutputParser[str]
Output parser that just returns the text as is.
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return dictionary representation of output parser.
Parameters
kwargs (Any) β
Return type
Dict
get_format_instructions()ο
Instructions on how the LLM output should be formatted.
Return type
str
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed to be the output of a language model)
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
str
parse_result(result)ο
Parse LLM Result.
Parameters
result (List[langchain.schema.Generation]) β
Return type
langchain.schema.T
parse_with_prompt(completion, prompt)ο
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion (str) β output of language model
prompt (langchain.schema.PromptValue) β prompt value
Returns
structured output
Return type
Any
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
exception langchain.schema.OutputParserException(error, observation=None, llm_output=None, send_to_llm=False)[source]ο
Bases: ValueError
Exception that output parsers should raise to signify a parsing error.
This exists to differentiate parsing errors from other code or execution errors
that also may arise inside the output parser. OutputParserExceptions will be
available to catch and handle in ways to fix the parsing error, while other
errors will be raised.
Parameters
error (Any) β
observation (str | None) β
llm_output (str | None) β
send_to_llm (bool) β
add_note()ο
Exception.add_note(note): add a note to the exception.
with_traceback()ο
Exception.with_traceback(tb): set self.__traceback__ to tb and return self.
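The exception is typically raised from inside a parser's parse method; the parser below is a hypothetical example used only to illustrate the error path.
from langchain.schema import BaseOutputParser, OutputParserException

class YesNoParser(BaseOutputParser[bool]):
    """Expects the completion to be exactly YES or NO (hypothetical example)."""

    def parse(self, text: str) -> bool:
        cleaned = text.strip().upper()
        if cleaned not in ("YES", "NO"):
            # Raising OutputParserException lets callers tell a formatting problem
            # apart from an unrelated bug and, for example, retry with a fixing chain.
            raise OutputParserException(f"Expected YES or NO, got: {text!r}")
        return cleaned == "YES"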
class langchain.schema.BaseDocumentTransformer[source]ο
Bases: abc.ABC
Base interface for transforming documents.
abstract transform_documents(documents, **kwargs)[source]ο
Transform a list of documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
kwargs (Any) β
Return type
Sequence[langchain.schema.Document]
abstract async atransform_documents(documents, **kwargs)[source]ο
Asynchronously transform a list of documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
kwargs (Any) β
Return type
Sequence[langchain.schema.Document]
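A minimal transformer sketch that simply drops documents with no usable text (the class name is a placeholder):
from typing import Any, Sequence
from langchain.schema import BaseDocumentTransformer, Document

class DropEmptyDocuments(BaseDocumentTransformer):
    """Filters out documents whose page_content is empty (sketch only)."""

    def transform_documents(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> Sequence[Document]:
        return [doc for doc in documents if doc.page_content.strip()]

    async def atransform_documents(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> Sequence[Document]:
        return self.transform_documents(documents, **kwargs)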
Chainsο
Chains are easily reusable components which can be linked together.
class langchain.chains.APIChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, api_request_chain, api_answer_chain, requests_wrapper, api_docs, question_key='question', output_key='output')[source]ο
Bases: langchain.chains.base.Chain
Chain that makes API calls and summarizes the responses to answer a question.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
api_request_chain (langchain.chains.llm.LLMChain) β
api_answer_chain (langchain.chains.llm.LLMChain) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
api_docs (str) β
question_key (str) β
output_key (str) β
Return type
None
attribute api_answer_chain: LLMChain [Required]ο
attribute api_docs: str [Required]ο
attribute api_request_chain: LLMChain [Required]ο
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute requests_wrapper: TextRequestsWrapper [Required]ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm_and_api_docs(llm, api_docs, headers=None, api_url_prompt=PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url:', template_format='f-string', validate_template=True), api_response_prompt=PromptTemplate(input_variables=['api_docs', 'question', 'api_url', 'api_response'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url: {api_url}\n\nHere is the response from the API:\n\n{api_response}\n\nSummarize this response to answer the original question.\n\nSummary:', template_format='f-string', validate_template=True), **kwargs)[source]ο
Load chain from just an LLM and the api docs.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
api_docs (str) β
headers (Optional[dict]) β
api_url_prompt (langchain.prompts.base.BasePromptTemplate) β
api_response_prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.api.base.APIChain
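A rough usage sketch: the API documentation string and URL below are placeholders, and you would substitute the real docs for the service you want the chain to call.
from langchain.chains import APIChain
from langchain.llms import OpenAI

api_docs = """
BASE URL: https://api.example.com
GET /weather?city=<city name> returns the current weather for the given city.
"""

chain = APIChain.from_llm_and_api_docs(
    llm=OpenAI(temperature=0),
    api_docs=api_docs,
    verbose=True,
)
# chain.run("What is the weather in Paris right now?")  # performs a live HTTP request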
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.AnalyzeDocumentChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_document', text_splitter=None, combine_docs_chain)[source]ο
Bases: langchain.chains.base.Chain
Chain that splits documents, then analyzes them in pieces.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
input_key (str) β
text_splitter (langchain.text_splitter.TextSplitter) β
combine_docs_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) β
Return type
None
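A common pairing, sketched under the assumption that a summarization chain is used as the combine_docs_chain:
from langchain.chains import AnalyzeDocumentChain
from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
summarize_chain = load_summarize_chain(llm, chain_type="map_reduce")

analyze_chain = AnalyzeDocumentChain(combine_docs_chain=summarize_chain)
# summary = analyze_chain.run(long_text)  # long_text is any large string you provide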
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a chain with its use case.
attribute text_splitter: langchain.text_splitter.TextSplitter [Optional]ο
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.ChatVectorDBChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_docs_chain, question_generator, output_key='answer', return_source_documents=False, return_generated_question=False, get_chat_history=None, vectorstore, top_k_docs_for_context=4, search_kwargs=None)[source]ο
Bases: langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain
Chain for chatting with a vector database.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
combine_docs_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) β
question_generator (langchain.chains.llm.LLMChain) β
output_key (str) β
return_source_documents (bool) β
return_generated_question (bool) β
get_chat_history (Optional[Callable[[Union[Tuple[str, str], langchain.schema.BaseMessage]], str]]) β
vectorstore (langchain.vectorstores.base.VectorStore) β
top_k_docs_for_context (int) β
search_kwargs (dict) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute combine_docs_chain: BaseCombineDocumentsChain [Required]ο
attribute get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = Noneο
Optional function to format the chat history into a string; if not provided, a default formatter is used.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute output_key: str = 'answer'ο
attribute question_generator: LLMChain [Required]ο
attribute return_generated_question: bool = Falseο
attribute return_source_documents: bool = Falseο
attribute search_kwargs: dict [Optional]ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a chain with its use case.
attribute top_k_docs_for_context: int = 4ο
attribute vectorstore: VectorStore [Required]ο
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, vectorstore, condense_question_prompt=PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), chain_type='stuff', combine_docs_chain_kwargs=None, callbacks=None, **kwargs)[source]ο
Load chain from LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
vectorstore (langchain.vectorstores.base.VectorStore) β
condense_question_prompt (langchain.prompts.base.BasePromptTemplate) β
chain_type (str) β
combine_docs_chain_kwargs (Optional[Dict]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain
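A rough end-to-end sketch, assuming an OpenAI API key is configured and the FAISS extra is installed; the example texts are placeholders.
from langchain.chains import ChatVectorDBChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(
    ["Our basic plan costs $10 per month.", "Support is available by email."],
    OpenAIEmbeddings(),
)

chain = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore)
result = chain({"question": "How much does the basic plan cost?", "chat_history": []})
# result["answer"] holds the generated reply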
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property input_keys: List[str]ο
Input keys.
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.ConstitutionalChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, chain, constitutional_principles, critique_chain, revision_chain, return_intermediate_steps=False)[source]ο
Bases: langchain.chains.base.Chain
Chain for applying constitutional principles.
Example
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple

llm = OpenAI()

qa_prompt = PromptTemplate(
    template="Q: {question} A:",
    input_variables=["question"],
)
qa_chain = LLMChain(llm=llm, prompt=qa_prompt)

constitutional_chain = ConstitutionalChain.from_llm(
    llm=llm,
    chain=qa_chain,
    constitutional_principles=[
        ConstitutionalPrinciple(
            critique_request="Tell if this answer is good.",
            revision_request="Give a better answer.",
        )
    ],
)

constitutional_chain.run(question="What is the meaning of life?")
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
chain (langchain.chains.llm.LLMChain) β
constitutional_principles (List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple]) β
critique_chain (langchain.chains.llm.LLMChain) β
revision_chain (langchain.chains.llm.LLMChain) β
return_intermediate_steps (bool) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute chain: langchain.chains.llm.LLMChain [Required]ο
attribute constitutional_principles: List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple] [Required]ο
attribute critique_chain: langchain.chains.llm.LLMChain [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute return_intermediate_steps: bool = Falseο
attribute revision_chain: langchain.chains.llm.LLMChain [Required]ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, chain, critique_prompt=FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the modelβs precedings response is harmful. Also point out potential harmfulness in the humanβs questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isnβt strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, itβs always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, donβt be afraid to ask.', 'critique_request': 'Identify specific ways in which the modelβs precedings response is harmful. Also point out potential harmfulness in the humanβs questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the modelβs response could be interpreted as saying that it isnβt strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. Itβs always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, donβt be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600βs. Itβs explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sunβs gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. 
In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, Iβm not sure that the precession measurement was actually made in the 1600βs, but was probably made at least 100 years ago. Iβm also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, Iβm not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although Iβm pretty sure it decays more quickly than Newtonβs law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sunβs gravitational field that is smaller and decays more quickly than Newtonβs law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': "Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': "Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.", 'critique': "The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.", 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique:', example_separator='\n === \n', prefix="Below is a conversation between a human and an AI model. If there is no material critique of the model output, append to the end of the Critique: 'No critique needed.' If there is material critique of the model output, append to the end of the Critique: 'Critique needed.'", template_format='f-string', validate_template=True), revision_prompt=FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. 
Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the modelβs precedings response is harmful. Also point out potential harmfulness in the humanβs questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isnβt strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, itβs always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, donβt be afraid to ask.', 'critique_request': 'Identify specific ways in which the modelβs precedings response is harmful. Also point out potential harmfulness in the humanβs questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the modelβs response could be interpreted as saying that it isnβt strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. Itβs always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, donβt be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600βs. Itβs explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sunβs gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. 
In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, Iβm not sure that the precession measurement was actually made in the 1600βs, but was probably made at least 100 years ago. Iβm also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, Iβm not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although Iβm pretty sure it decays more quickly than Newtonβs law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and youβre unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sunβs gravitational field that is smaller and decays more quickly than Newtonβs law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': "Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': "Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.", 'critique': "The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.", 'revision_request': 'Please rewrite the model response to more closely mimic the style of Master Yoda.', 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nIf the critique does not identify anything worth changing, ignore the Revision Request and do not make any revisions. 
Instead, return "No revisions needed".\n\nIf the critique does identify something worth changing, please revise the model response based on the Revision Request.\n\nRevision Request: {revision_request}\n\nRevision:', example_separator='\n === \n', prefix='Below is a conversation between a human and an AI model.', template_format='f-string', validate_template=True), **kwargs)[source]ο
Create a chain from an LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
chain (langchain.chains.llm.LLMChain) β
critique_prompt (langchain.prompts.base.BasePromptTemplate) β
revision_prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.constitutional_ai.base.ConstitutionalChain
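Example (a minimal sketch of from_llm; the question text and the custom principle below are illustrative only, and an OpenAI API key is assumed to be configured):
from langchain.chains import ConstitutionalChain, LLMChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)

# Base chain whose outputs will be critiqued and revised.
qa_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["question"],
        template="Answer the question.\n\nQuestion: {question}\n\nAnswer:",
    ),
)

# A custom principle; the default critique and revision prompts shown above are used.
ethical_principle = ConstitutionalPrinciple(
    name="Ethical Principle",
    critique_request="The model should only talk about ethical and legal things.",
    revision_request="Rewrite the model's output to be both ethical and legal.",
)

constitutional_chain = ConstitutionalChain.from_llm(
    llm=llm,
    chain=qa_chain,
    constitutional_principles=[ethical_principle],
)

answer = constitutional_chain.run(question="How can I get ahead of my competitors?")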
classmethod get_principles(names=None)[source]ο
Parameters
names (Optional[List[str]]) β
Return type
List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple]
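Example (a small sketch of inspecting the built-in principle registry; the exact set of registered names depends on the installed version):
from langchain.chains import ConstitutionalChain

# With no arguments, every registered ConstitutionalPrinciple is returned.
principles = ConstitutionalChain.get_principles()
for principle in principles[:3]:
    print(principle.name, "-", principle.critique_request)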
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property input_keys: List[str]ο
Defines the input keys.
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
property output_keys: List[str]ο
Defines the output keys.
class langchain.chains.ConversationChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:', template_format='f-string', validate_template=True), llm, output_key='response', output_parser=None, return_final_only=True, llm_kwargs=None, input_key='input')[source]ο
Bases: langchain.chains.llm.LLMChain
Chain to have a conversation and load context from memory.
Example
from langchain import ConversationChain, OpenAI
conversation = ConversationChain(llm=OpenAI())
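Building on the example above, a short sketch of a multi-turn exchange (the replies depend on the model, so none are shown here):
from langchain import ConversationChain, OpenAI
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),  # the default memory, shown explicitly
)

conversation.predict(input="Hi, my name is Ada.")
# The buffer now contains the first exchange, so the follow-up can refer to it.
conversation.predict(input="What is my name?")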
Parameters
memory (langchain.schema.BaseMemory) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
prompt (langchain.prompts.base.BasePromptTemplate) β
llm (langchain.base_language.BaseLanguageModel) β
output_key (str) β
output_parser (langchain.schema.BaseLLMOutputParser) β
return_final_only (bool) β
llm_kwargs (dict) β
input_key (str) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute llm: BaseLanguageModel [Required]ο
Language model to call.
attribute llm_kwargs: dict [Optional]ο
attribute memory: langchain.schema.BaseMemory [Optional]ο
Default memory store.
attribute output_parser: BaseLLMOutputParser [Optional]ο
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
attribute prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:', template_format='f-string', validate_template=True)ο
Default conversation prompt to use.
attribute return_final_only: bool = Trueο
Whether to return only the final parsed result. Defaults to True.
If False, additional information about the generation will be returned as well.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async aapply(input_list, callbacks=None)ο
Utilize the LLM generate method for speed gains.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async aapply_and_parse(input_list, callbacks=None)ο
Call apply and then parse the results.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
Sequence[Union[str, List[str], Dict[str, str]]]
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
async agenerate(input_list, run_manager=None)ο
Generate LLM result from inputs.
Parameters
input_list (List[Dict[str, Any]]) β
run_manager (Optional[langchain.callbacks.manager.AsyncCallbackManagerForChainRun]) β
Return type
langchain.schema.LLMResult
apply(input_list, callbacks=None)ο
Utilize the LLM generate method for speed gains.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
apply_and_parse(input_list, callbacks=None)ο
Call apply and then parse the results.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
Sequence[Union[str, List[str], Dict[str, str]]]
async apredict(callbacks=None, **kwargs)ο
Format prompt with kwargs and pass to LLM.
Parameters
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to pass to LLMChain
**kwargs β Keys to pass to prompt template.
kwargs (Any) β
Returns
Completion from LLM.
Return type
str
Example
completion = await llm.apredict(adjective="funny")
async apredict_and_parse(callbacks=None, **kwargs)ο
Call apredict and then parse the results.
Parameters
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Union[str, List[str], Dict[str, str]]
async aprep_prompts(input_list, run_manager=None)ο
Prepare prompts from inputs.
Parameters
input_list (List[Dict[str, Any]]) β
run_manager (Optional[langchain.callbacks.manager.AsyncCallbackManagerForChainRun]) β
Return type
Tuple[List[langchain.schema.PromptValue], Optional[List[str]]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
create_outputs(llm_result)ο
Create outputs from response.
Parameters
llm_result (langchain.schema.LLMResult) β
Return type
List[Dict[str, Any]]
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_string(llm, template)ο
Create LLMChain from LLM and template.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
template (str) β
Return type
langchain.chains.llm.LLMChain
generate(input_list, run_manager=None)ο
Generate LLM result from inputs.
Parameters
input_list (List[Dict[str, Any]]) β
run_manager (Optional[langchain.callbacks.manager.CallbackManagerForChainRun]) β
Return type
langchain.schema.LLMResult
predict(callbacks=None, **kwargs)ο
Format prompt with kwargs and pass to LLM.
Parameters
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to pass to LLMChain
**kwargs β Keys to pass to prompt template.
kwargs (Any) β
Returns
Completion from LLM.
Return type
str
Example
completion = llm.predict(adjective="funny")
predict_and_parse(callbacks=None, **kwargs)ο
Call predict and then parse the results.
Parameters
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Union[str, List[str], Dict[str, Any]]
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
prep_prompts(input_list, run_manager=None)ο
Prepare prompts from inputs.
Parameters
input_list (List[Dict[str, Any]]) β
run_manager (Optional[langchain.callbacks.manager.CallbackManagerForChainRun]) β
Return type
Tuple[List[langchain.schema.PromptValue], Optional[List[str]]]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property input_keys: List[str]ο
Use this since some prompt variables come from history.
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.ConversationalRetrievalChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_docs_chain, question_generator, output_key='answer', return_source_documents=False, return_generated_question=False, get_chat_history=None, retriever, max_tokens_limit=None)[source]ο
Bases: langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain
Chain for chatting with an index.
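Example (a minimal sketch; the tiny in-memory FAISS index below stands in for a real document store, and an OpenAI API key plus the faiss package are assumed):
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# A throwaway index; in practice, load your own documents here.
vectorstore = FAISS.from_texts(
    ["LangChain is a framework for developing LLM applications."],
    OpenAIEmbeddings(),
)

qa = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
)

chat_history = []
result = qa({"question": "What is LangChain?", "chat_history": chat_history})
chat_history.append(("What is LangChain?", result["answer"]))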
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
combine_docs_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) β
question_generator (langchain.chains.llm.LLMChain) β
output_key (str) β
return_source_documents (bool) β
return_generated_question (bool) β
get_chat_history (Optional[Callable[[Union[Tuple[str, str], langchain.schema.BaseMessage]], str]]) β
retriever (langchain.schema.BaseRetriever) β
max_tokens_limit (Optional[int]) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute combine_docs_chain: BaseCombineDocumentsChain [Required]ο
attribute get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = Noneο
Optional function used to convert the chat history into a string.
attribute max_tokens_limit: Optional[int] = Noneο
If set, restricts the documents returned from the store based on a token limit; enforced only
for StuffDocumentsChain.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute output_key: str = 'answer'ο
attribute question_generator: LLMChain [Required]ο
attribute retriever: BaseRetriever [Required]ο
Index to connect to.
attribute return_generated_question: bool = Falseο
attribute return_source_documents: bool = Falseο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, retriever, condense_question_prompt=PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), chain_type='stuff', verbose=False, condense_question_llm=None, combine_docs_chain_kwargs=None, callbacks=None, **kwargs)[source]ο
Load chain from LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
retriever (langchain.schema.BaseRetriever) β
condense_question_prompt (langchain.prompts.base.BasePromptTemplate) β
chain_type (str) β
verbose (bool) β
condense_question_llm (Optional[langchain.base_language.BaseLanguageModel]) β
combine_docs_chain_kwargs (Optional[Dict]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property input_keys: List[str]ο
Input keys.
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.FlareChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, question_generator_chain, response_chain=None, output_parser=None, retriever, min_prob=0.2, min_token_gap=5, num_pad_tokens=2, max_iter=10, start_with_retrieval=True)[source]ο
Bases: langchain.chains.base.Chain
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
question_generator_chain (langchain.chains.flare.base.QuestionGeneratorChain) β
response_chain (langchain.chains.flare.base._ResponseChain) β
output_parser (langchain.chains.flare.prompts.FinishedOutputParser) β
retriever (langchain.schema.BaseRetriever) β
min_prob (float) β
min_token_gap (int) β
num_pad_tokens (int) β
max_iter (int) β
start_with_retrieval (bool) β
Return type
None
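Example (a brief sketch of from_llm; FLARE relies on token log-probabilities, so an OpenAI completion model is assumed, and the tiny FAISS index stands in for a real retriever):
from langchain.chains import FlareChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(
    ["FLARE interleaves retrieval with generation."],
    OpenAIEmbeddings(),
)

flare = FlareChain.from_llm(
    llm=OpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
    max_generation_len=164,
    min_prob=0.3,
)

flare.run("How does forward-looking active retrieval augmented generation work?")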
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute max_iter: int = 10ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute min_prob: float = 0.2ο
attribute min_token_gap: int = 5ο
attribute num_pad_tokens: int = 2ο
attribute output_parser: FinishedOutputParser [Optional]ο
attribute question_generator_chain: QuestionGeneratorChain [Required]ο
attribute response_chain: _ResponseChain [Optional]ο
attribute retriever: BaseRetriever [Required]ο
attribute start_with_retrieval: bool = Trueο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, max_generation_len=32, **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
max_generation_len (int) β
kwargs (Any) β
Return type
langchain.chains.flare.base.FlareChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property input_keys: List[str]ο
Input keys this chain expects.
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
property output_keys: List[str]ο
Output keys this chain expects.
class langchain.chains.GraphCypherQAChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, graph, cypher_generation_chain, qa_chain, input_key='query', output_key='result', top_k=10, return_intermediate_steps=False, return_direct=False)[source]ο
Bases: langchain.chains.base.Chain
Chain for question-answering against a graph by generating Cypher statements.
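Example (a minimal sketch, assuming a reachable Neo4j instance; the connection URL and credentials are placeholders):
from langchain.chains import GraphCypherQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import Neo4jGraph

graph = Neo4jGraph(
    url="bolt://localhost:7687",  # placeholder connection details
    username="neo4j",
    password="password",
)

chain = GraphCypherQAChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    graph=graph,
    verbose=True,
)

chain.run("Who played in Top Gun?")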
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
graph (langchain.graphs.neo4j_graph.Neo4jGraph) β
cypher_generation_chain (langchain.chains.llm.LLMChain) β
qa_chain (langchain.chains.llm.LLMChain) β
input_key (str) β
output_key (str) β
top_k (int) β
return_intermediate_steps (bool) β
return_direct (bool) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute cypher_generation_chain: LLMChain [Required]ο
attribute graph: Neo4jGraph [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute qa_chain: LLMChain [Required]ο
attribute return_direct: bool = Falseο
Whether or not to return the result of querying the graph directly.
attribute return_intermediate_steps: bool = Falseο
Whether or not to return the intermediate steps along with the final answer.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute top_k: int = 10ο
Number of results to return from the query.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, *, qa_prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="You are an assistant that helps to form nice and human understandable answers.\nThe information part contains the provided information that you must use to construct an answer.\nThe provided information is authorative, you must never doubt it or try to use your internal knowledge to correct it.\nMake the answer sound as a response to the question. Do not mention that you based the result on the given information.\nIf the provided information is empty, say that you don't know the answer.\nInformation:\n{context}\n\nQuestion: {question}\nHelpful Answer:", template_format='f-string', validate_template=True), cypher_prompt=PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template='Task:Generate Cypher statement to query a graph database.\nInstructions:\nUse only the provided relationship types and properties in the schema.\nDo not use any other relationship types or properties that are not provided.\nSchema:\n{schema}\nNote: Do not include any explanations or apologies in your responses.\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\nDo not include any text except the generated Cypher statement.\n\nThe question is:\n{question}', template_format='f-string', validate_template=True), **kwargs)[source]ο
Initialize from LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
qa_prompt (langchain.prompts.base.BasePromptTemplate) β
cypher_prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.graph_qa.cypher.GraphCypherQAChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.GraphQAChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, graph, entity_extraction_chain, qa_chain, input_key='query', output_key='result')[source]ο
Bases: langchain.chains.base.Chain
Chain for question-answering against a graph.
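Example (a short sketch using an in-memory NetworkX entity graph; the triple added below is illustrative only):
from langchain.chains import GraphQAChain
from langchain.graphs import NetworkxEntityGraph
from langchain.graphs.networkx_graph import KnowledgeTriple
from langchain.llms import OpenAI

graph = NetworkxEntityGraph()
graph.add_triple(KnowledgeTriple("Ada Lovelace", "worked with", "Charles Babbage"))

chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=graph, verbose=True)
chain.run("Who did Ada Lovelace work with?")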
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
graph (langchain.graphs.networkx_graph.NetworkxEntityGraph) β
entity_extraction_chain (langchain.chains.llm.LLMChain) β
qa_chain (langchain.chains.llm.LLMChain) β
input_key (str) β
output_key (str) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute entity_extraction_chain: LLMChain [Required]ο
attribute graph: NetworkxEntityGraph [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute qa_chain: LLMChain [Required]ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, qa_prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="Use the following knowledge triplets to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:", template_format='f-string', validate_template=True), entity_prompt=PromptTemplate(input_variables=['input'], output_parser=None, partial_variables={}, template="Extract all entities from the following text. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return.\n\nEXAMPLE\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I'm working with Sam.\nOutput: Langchain, Sam\nEND OF EXAMPLE\n\nBegin!\n\n{input}\nOutput:", template_format='f-string', validate_template=True), **kwargs)[source]ο
Initialize from LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
qa_prompt (langchain.prompts.base.BasePromptTemplate) β
entity_prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.graph_qa.base.GraphQAChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.HypotheticalDocumentEmbedder(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, base_embeddings, llm_chain)[source]ο
Bases: langchain.chains.base.Chain, langchain.embeddings.base.Embeddings
Generate a hypothetical document for the query, and then embed it.
Based on https://arxiv.org/abs/2212.10496
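Example (a minimal sketch of wrapping a base embedding model; "web_search" refers to one of the built-in HyDE prompt keys, so check the installed version for the available keys):
from langchain.chains import HypotheticalDocumentEmbedder
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI

embedder = HypotheticalDocumentEmbedder.from_llm(
    llm=OpenAI(),
    base_embeddings=OpenAIEmbeddings(),
    prompt_key="web_search",
)

# The query is first expanded into a hypothetical document, which is then embedded.
vector = embedder.embed_query("What did the president say about inflation?")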
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
base_embeddings (langchain.embeddings.base.Embeddings) β
llm_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute base_embeddings: Embeddings [Required]ο
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute llm_chain: LLMChain [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
combine_embeddings(embeddings)[source]ο
Combine embeddings into final embeddings.
Parameters
embeddings (List[List[float]]) β
Return type
List[float]
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
embed_documents(texts)[source]ο
Call the base embeddings.
Parameters
texts (List[str]) β
Return type
List[List[float]]
embed_query(text)[source]ο
Generate a hypothetical document and embed it.
Parameters
text (str) β
Return type
List[float]
classmethod from_llm(llm, base_embeddings, prompt_key, **kwargs)[source]ο
Load and use LLMChain for a specific prompt key.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
base_embeddings (langchain.embeddings.base.Embeddings) β
prompt_key (str) β
kwargs (Any) β
Return type
langchain.chains.hyde.base.HypotheticalDocumentEmbedder
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property input_keys: List[str]ο
Input keys for Hydeβs LLM chain.
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
property output_keys: List[str]ο
Output keys for Hydeβs LLM chain.
class langchain.chains.KuzuQAChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, graph, cypher_generation_chain, qa_chain, input_key='query', output_key='result')[source]ο
Bases: langchain.chains.base.Chain
Chain for question-answering against a graph by generating Cypher statements for
Kùzu.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
graph (langchain.graphs.kuzu_graph.KuzuGraph) β
cypher_generation_chain (langchain.chains.llm.LLMChain) β
qa_chain (langchain.chains.llm.LLMChain) β
input_key (str) β
output_key (str) β
Return type
None
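Example (a minimal sketch, assuming the kuzu package is installed and that the database at the placeholder path already contains a schema and data):
import kuzu

from langchain.chains import KuzuQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import KuzuGraph

db = kuzu.Database("test_db")  # placeholder database path
graph = KuzuGraph(db)

chain = KuzuQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
chain.run("Who acted in The Godfather: Part II?")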
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute cypher_generation_chain: LLMChain [Required]ο
attribute graph: KuzuGraph [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute qa_chain: LLMChain [Required]ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, *, qa_prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="You are an assistant that helps to form nice and human understandable answers.\nThe information part contains the provided information that you must use to construct an answer.\nThe provided information is authorative, you must never doubt it or try to use your internal knowledge to correct it.\nMake the answer sound as a response to the question. Do not mention that you based the result on the given information.\nIf the provided information is empty, say that you don't know the answer.\nInformation:\n{context}\n\nQuestion: {question}\nHelpful Answer:", template_format='f-string', validate_template=True), cypher_prompt=PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template='Task:Generate KΓΉzu Cypher statement to query a graph database.\n\nInstructions:\n\nGenerate statement with KΓΉzu Cypher dialect (rather than standard):\n1. do not use `WHERE EXISTS` clause to check the existence of a property because KΓΉzu database has a fixed schema.\n2. do not omit relationship pattern. Always use `()-[]->()` instead of `()->()`.\n3. do not include any notes or comments even if the statement does not produce the expected result.\n```\n\nUse only the provided relationship types and properties in the schema.\nDo not use any other relationship types or properties that are not provided.\nSchema:\n{schema}\nNote: Do not include any explanations or apologies in your responses.\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\nDo not include any text except the generated Cypher statement.\n\nThe question is:\n{question}', template_format='f-string', validate_template=True), **kwargs)[source]ο
Initialize from LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
qa_prompt (langchain.prompts.base.BasePromptTemplate) β
cypher_prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.graph_qa.kuzu.KuzuQAChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.LLMBashChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain, llm=None, input_key='question', output_key='answer', prompt=PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:\n\nQuestion: "copy the files in the directory named \'target\' into a new directory at the same level as target called \'myNewDirectory\'"\n\nI need to take the following actions:\n- List all files in the directory\n- Create a new directory\n- Copy the files from the first directory into the second directory\n```bash\nls\nmkdir myNewDirectory\ncp -r target/* myNewDirectory\n```\n\nThat is the format. Begin!\n\nQuestion: {question}', template_format='f-string', validate_template=True), bash_process=None)[source]ο
Bases: langchain.chains.base.Chain
Chain that interprets a prompt and executes bash code to perform the requested operations.
Example
from langchain import LLMBashChain, OpenAI
llm_bash = LLMBashChain.from_llm(OpenAI())
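A brief illustrative continuation of the example above; the task text is arbitrary and the generated commands depend on the model:
text = "Please write a bash script that prints 'Hello World' to the console."
llm_bash.run(text)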
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
llm_chain (langchain.chains.llm.LLMChain) β
llm (Optional[langchain.base_language.BaseLanguageModel]) β
input_key (str) β
output_key (str) β
prompt (langchain.prompts.base.BasePromptTemplate) β
bash_process (langchain.utilities.bash.BashProcess) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute llm: Optional[BaseLanguageModel] = Noneο
[Deprecated] LLM wrapper to use.
attribute llm_chain: LLMChain [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:\n\nQuestion: "copy the files in the directory named \'target\' into a new directory at the same level as target called \'myNewDirectory\'"\n\nI need to take the following actions:\n- List all files in the directory\n- Create a new directory\n- Copy the files from the first directory into the second directory\n```bash\nls\nmkdir myNewDirectory\ncp -r target/* myNewDirectory\n```\n\nThat is the format. Begin!\n\nQuestion: {question}', template_format='f-string', validate_template=True)ο
[Deprecated]
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, prompt=PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:\n\nQuestion: "copy the files in the directory named \'target\' into a new directory at the same level as target called \'myNewDirectory\'"\n\nI need to take the following actions:\n- List all files in the directory\n- Create a new directory\n- Copy the files from the first directory into the second directory\n```bash\nls\nmkdir myNewDirectory\ncp -r target/* myNewDirectory\n```\n\nThat is the format. Begin!\n\nQuestion: {question}', template_format='f-string', validate_template=True), **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.llm_bash.base.LLMBashChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.LLMChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, prompt, llm, output_key='text', output_parser=None, return_final_only=True, llm_kwargs=None)[source]ο
Bases: langchain.chains.base.Chain
Chain to run queries against LLMs.
Example
from langchain import LLMChain, OpenAI, PromptTemplate
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(
input_variables=["adjective"], template=prompt_template
)
llm = LLMChain(llm=OpenAI(), prompt=prompt)
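Continuing this example, the chain (here named llm) can then be invoked with predict or run, both documented below; illustrative only:
completion = llm.predict(adjective="funny")
# run() gives the same text-out behaviour for a single-output chain:
completion = llm.run(adjective="funny")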
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
prompt (langchain.prompts.base.BasePromptTemplate) β
llm (langchain.base_language.BaseLanguageModel) β
output_key (str) β
output_parser (langchain.schema.BaseLLMOutputParser) β
return_final_only (bool) β
llm_kwargs (dict) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute llm: BaseLanguageModel [Required]ο
Language model to call.
attribute llm_kwargs: dict [Optional]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute output_parser: BaseLLMOutputParser [Optional]ο
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
attribute prompt: BasePromptTemplate [Required]ο
Prompt object to use.
attribute return_final_only: bool = Trueο
Whether to return only the final parsed result. Defaults to True.
If false, additional details about the generation are also returned.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async aapply(input_list, callbacks=None)[source]ο
Utilize the LLM generate method for speed gains.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async aapply_and_parse(input_list, callbacks=None)[source]ο
Call apply and then parse the results.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
Sequence[Union[str, List[str], Dict[str, str]]]
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
async agenerate(input_list, run_manager=None)[source]ο
Generate LLM result from inputs.
Parameters
input_list (List[Dict[str, Any]]) β
run_manager (Optional[langchain.callbacks.manager.AsyncCallbackManagerForChainRun]) β
Return type
langchain.schema.LLMResult
apply(input_list, callbacks=None)[source]ο
Utilize the LLM generate method for speed gains.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
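An illustrative sketch of apply on the chain from the class example above; each input dict supplies the prompt variables for one generation:
results = llm.apply([{"adjective": "funny"}, {"adjective": "sad"}])
# results is a list of dicts, one per input, keyed by the chain's output key ("text" by default)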
apply_and_parse(input_list, callbacks=None)[source]ο
Call apply and then parse the results.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
Sequence[Union[str, List[str], Dict[str, str]]]
async apredict(callbacks=None, **kwargs)[source]ο
Format prompt with kwargs and pass to LLM.
Parameters
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to pass to LLMChain
**kwargs β Keys to pass to prompt template.
kwargs (Any) β
Returns
Completion from LLM.
Return type
str
Example
completion = llm.predict(adjective="funny")
async apredict_and_parse(callbacks=None, **kwargs)[source]ο
Call apredict and then parse the results.
Parameters
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Union[str, List[str], Dict[str, str]]
async aprep_prompts(input_list, run_manager=None)[source]ο
Prepare prompts from inputs.
Parameters
input_list (List[Dict[str, Any]]) β
run_manager (Optional[langchain.callbacks.manager.AsyncCallbackManagerForChainRun]) β
Return type
Tuple[List[langchain.schema.PromptValue], Optional[List[str]]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
create_outputs(llm_result)[source]ο
Create outputs from response.
Parameters
llm_result (langchain.schema.LLMResult) β
Return type
List[Dict[str, Any]]
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_string(llm, template)[source]ο
Create LLMChain from LLM and template.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
template (str) β
Return type
langchain.chains.llm.LLMChain
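A minimal illustrative sketch; from_string builds the PromptTemplate from the template string internally:
from langchain import LLMChain, OpenAI
chain = LLMChain.from_string(llm=OpenAI(), template="Tell me a {adjective} joke")
chain.predict(adjective="funny")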
generate(input_list, run_manager=None)[source]ο
Generate LLM result from inputs.
Parameters
input_list (List[Dict[str, Any]]) β
run_manager (Optional[langchain.callbacks.manager.CallbackManagerForChainRun]) β
Return type
langchain.schema.LLMResult
predict(callbacks=None, **kwargs)[source]ο
Format prompt with kwargs and pass to LLM.
Parameters
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to pass to LLMChain
**kwargs β Keys to pass to prompt template.
kwargs (Any) β
Returns
Completion from LLM.
Return type
str
Example
completion = llm.predict(adjective="funny")
predict_and_parse(callbacks=None, **kwargs)[source]ο
Call predict and then parse the results.
Parameters
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Union[str, List[str], Dict[str, Any]]
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
prep_prompts(input_list, run_manager=None)[source]ο
Prepare prompts from inputs.
Parameters
input_list (List[Dict[str, Any]]) β
run_manager (Optional[langchain.callbacks.manager.CallbackManagerForChainRun]) β
Return type
Tuple[List[langchain.schema.PromptValue], Optional[List[str]]]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.LLMCheckerChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, question_to_checked_assertions_chain, llm=None, create_draft_answer_prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\n\n', template_format='f-string', validate_template=True), list_assertions_prompt=PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\n{statement}\nMake a bullet point list of the assumptions you made when producing the above statement.\n\n', template_format='f-string', validate_template=True), check_assertions_prompt=PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\n{assertions}\nFor each assertion, determine whether it is true or false. If it is false, explain why.\n\n', template_format='f-string', validate_template=True), revised_answer_prompt=PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template="{checked_assertions}\n\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\n\nAnswer:", template_format='f-string', validate_template=True), input_key='query', output_key='result')[source]ο
Bases: langchain.chains.base.Chain
Chain for question-answering with self-verification.
Example
from langchain import OpenAI, LLMCheckerChain
llm = OpenAI(temperature=0.7)
checker_chain = LLMCheckerChain.from_llm(llm)
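An illustrative invocation; the question is arbitrary and the answer depends on the model:
checker_chain.run("What type of mammal lays the biggest eggs?")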
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
question_to_checked_assertions_chain (langchain.chains.sequential.SequentialChain) β
llm (Optional[langchain.base_language.BaseLanguageModel]) β
create_draft_answer_prompt (langchain.prompts.prompt.PromptTemplate) β
list_assertions_prompt (langchain.prompts.prompt.PromptTemplate) β
check_assertions_prompt (langchain.prompts.prompt.PromptTemplate) β
revised_answer_prompt (langchain.prompts.prompt.PromptTemplate) β
input_key (str) β
output_key (str) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\n{assertions}\nFor each assertion, determine whether it is true or false. If it is false, explain why.\n\n', template_format='f-string', validate_template=True)ο
[Deprecated]
attribute create_draft_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\n\n', template_format='f-string', validate_template=True)ο
[Deprecated]
attribute list_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\n{statement}\nMake a bullet point list of the assumptions you made when producing the above statement.\n\n', template_format='f-string', validate_template=True)ο
[Deprecated]
attribute llm: Optional[BaseLanguageModel] = Noneο
[Deprecated] LLM wrapper to use.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute question_to_checked_assertions_chain: SequentialChain [Required]ο
attribute revised_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template="{checked_assertions}\n\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\n\nAnswer:", template_format='f-string', validate_template=True)ο
[Deprecated] Prompt to use when producing the revised answer from the checked assertions.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, create_draft_answer_prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\n\n', template_format='f-string', validate_template=True), list_assertions_prompt=PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\n{statement}\nMake a bullet point list of the assumptions you made when producing the above statement.\n\n', template_format='f-string', validate_template=True), check_assertions_prompt=PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\n{assertions}\nFor each assertion, determine whether it is true or false. If it is false, explain why.\n\n', template_format='f-string', validate_template=True), revised_answer_prompt=PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template="{checked_assertions}\n\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\n\nAnswer:", template_format='f-string', validate_template=True), **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
create_draft_answer_prompt (langchain.prompts.prompt.PromptTemplate) β
list_assertions_prompt (langchain.prompts.prompt.PromptTemplate) β
check_assertions_prompt (langchain.prompts.prompt.PromptTemplate) β
revised_answer_prompt (langchain.prompts.prompt.PromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.llm_checker.base.LLMCheckerChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.LLMMathChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain, llm=None, prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\'s numexpr library. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```text\n${{single line mathematical expression that solves the problem}}\n```\n...numexpr.evaluate(text)...\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n```text\n37593 * 67\n```\n...numexpr.evaluate("37593 * 67")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: 37593^(1/5)\n```text\n37593**(1/5)\n```\n...numexpr.evaluate("37593**(1/5)")...\n```output\n8.222831614237718\n```\nAnswer: 8.222831614237718\n\nQuestion: {question}\n', template_format='f-string', validate_template=True), input_key='question', output_key='answer')[source]ο
Bases: langchain.chains.base.Chain
Chain that interprets a prompt and executes Python code (via the numexpr library) to do math.
Example
from langchain import LLMMathChain, OpenAI
llm_math = LLMMathChain.from_llm(OpenAI())
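An illustrative invocation; the chain translates the question into a numexpr expression and evaluates it:
llm_math.run("What is 13 raised to the .3432 power?")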
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
llm_chain (langchain.chains.llm.LLMChain) β
llm (Optional[langchain.base_language.BaseLanguageModel]) β
prompt (langchain.prompts.base.BasePromptTemplate) β
input_key (str) β
output_key (str) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute llm: Optional[BaseLanguageModel] = Noneο
[Deprecated] LLM wrapper to use.
attribute llm_chain: LLMChain [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\'s numexpr library. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```text\n${{single line mathematical expression that solves the problem}}\n```\n...numexpr.evaluate(text)...\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n```text\n37593 * 67\n```\n...numexpr.evaluate("37593 * 67")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: 37593^(1/5)\n```text\n37593**(1/5)\n```\n...numexpr.evaluate("37593**(1/5)")...\n```output\n8.222831614237718\n```\nAnswer: 8.222831614237718\n\nQuestion: {question}\n', template_format='f-string', validate_template=True)ο
[Deprecated] Prompt to use to translate the question to Python if necessary.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\'s numexpr library. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```text\n${{single line mathematical expression that solves the problem}}\n```\n...numexpr.evaluate(text)...\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n```text\n37593 * 67\n```\n...numexpr.evaluate("37593 * 67")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: 37593^(1/5)\n```text\n37593**(1/5)\n```\n...numexpr.evaluate("37593**(1/5)")...\n```output\n8.222831614237718\n```\nAnswer: 8.222831614237718\n\nQuestion: {question}\n', template_format='f-string', validate_template=True), **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.llm_math.base.LLMMathChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.LLMRequestsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain, requests_wrapper=None, text_length=8000, requests_key='requests_result', input_key='url', output_key='output')[source]ο
Bases: langchain.chains.base.Chain
Chain that fetches the contents of a URL and then uses an LLM to parse the results.
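No example is given here, so the following is only a sketch. It assumes the wrapped LLMChain's prompt consumes the requests_result key (the fetched page text) plus an extra query variable introduced for this sketch; the default keys url, output, and requests_result match the signature above, and the URL is purely illustrative.
from langchain import LLMChain, OpenAI, PromptTemplate
from langchain.chains import LLMRequestsChain
# The prompt must reference {requests_result}; {query} is an extra variable used only here.
template = (
    "Between >>> and <<< is the raw text fetched from the URL.\n"
    "Answer the question '{query}' using only that text.\n"
    ">>> {requests_result} <<<\n"
    "Answer:"
)
prompt = PromptTemplate(input_variables=["query", "requests_result"], template=template)
chain = LLMRequestsChain(llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=prompt))
chain({"query": "What is the largest country by area?",
       "url": "https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_area"})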
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
llm_chain (langchain.chains.llm.LLMChain) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
text_length (int) β
requests_key (str) β
input_key (str) β
output_key (str) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute llm_chain: LLMChain [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute requests_wrapper: TextRequestsWrapper [Optional]ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute text_length: int = 8000ο
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.LLMRouterChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain)[source]ο
Bases: langchain.chains.router.base.RouterChain
A router chain that uses an LLM chain to perform routing.
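A hedged sketch of from_llm (documented below). RouterOutputParser and MULTI_PROMPT_ROUTER_TEMPLATE are assumed to be importable from the router modules and do not appear in the text above; the destinations are placeholders, and the template must elicit the JSON structure the parser expects.
from langchain import OpenAI, PromptTemplate
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE
# Placeholder destinations; each line is "name: description".
destinations = "physics: good for physics questions\nmath: good for math questions"
router_prompt = PromptTemplate(
    template=MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations),
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(OpenAI(), router_prompt)
router_chain.route({"input": "What is black body radiation?"})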
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
llm_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute llm_chain: LLMChain [Required]ο
LLM chain used to perform routing
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async aroute(inputs, callbacks=None)ο
Parameters
inputs (Dict[str, Any]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
langchain.chains.router.base.Route
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, prompt, **kwargs)[source]ο
Convenience constructor.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.router.llm_router.LLMRouterChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
route(inputs, callbacks=None)ο
Parameters
inputs (Dict[str, Any]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
langchain.chains.router.base.Route
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
property output_keys: List[str]ο
Output keys this chain expects.
class langchain.chains.LLMSummarizationCheckerChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, sequential_chain, llm=None, create_assertions_prompt=PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n"""\n{summary}\n"""\n\nFacts:', template_format='f-string', validate_template=True), check_assertions_prompt=PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assertions}\n"""\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".\nIf the fact is false, explain why.\n\n', template_format='f-string', validate_template=True), revised_summary_prompt=PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:', template_format='f-string', validate_template=True), are_all_true_prompt=PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some examples:\n===\n\nChecked Assertions: """\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n"""\nResult: False\n\n===\n\nChecked Assertions: """\n- The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-string', validate_template=True), input_key='query', output_key='result', max_checks=2)[source]ο
Bases: langchain.chains.base.Chain
Chain for checking and revising summaries with self-verification.
Example
from langchain import OpenAI, LLMSummarizationCheckerChain
llm = OpenAI(temperature=0.0)
checker_chain = LLMSummarizationCheckerChain.from_llm(llm)
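An illustrative invocation; the input is an arbitrary summary-like text whose facts the chain will extract, check, and rewrite:
text = "Mammals can lay eggs, birds can lay eggs, therefore birds are mammals."
checker_chain.run(text)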
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
sequential_chain (langchain.chains.sequential.SequentialChain) β
llm (Optional[langchain.base_language.BaseLanguageModel]) β
create_assertions_prompt (langchain.prompts.prompt.PromptTemplate) β
check_assertions_prompt (langchain.prompts.prompt.PromptTemplate) β
revised_summary_prompt (langchain.prompts.prompt.PromptTemplate) β
are_all_true_prompt (langchain.prompts.prompt.PromptTemplate) β
input_key (str) β
output_key (str) β
max_checks (int) β
Return type
None
attribute are_all_true_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some examples:\n===\n\nChecked Assertions: """\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n"""\nResult: False\n\n===\n\nChecked Assertions: """\n- The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-string', validate_template=True)ο
[Deprecated]
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assertions}\n"""\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".\nIf the fact is false, explain why.\n\n', template_format='f-string', validate_template=True)ο
[Deprecated]
attribute create_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n"""\n{summary}\n"""\n\nFacts:', template_format='f-string', validate_template=True)ο
[Deprecated]
attribute llm: Optional[BaseLanguageModel] = Noneο
[Deprecated] LLM wrapper to use.
attribute max_checks: int = 2ο
Maximum number of times to check the assertions. Defaults to double-checking.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute revised_summary_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:', template_format='f-string', validate_template=True)ο
[Deprecated]
attribute sequential_chain: SequentialChain [Required]ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, create_assertions_prompt=PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n"""\n{summary}\n"""\n\nFacts:', template_format='f-string', validate_template=True), check_assertions_prompt=PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assertions}\n"""\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".\nIf the fact is false, explain why.\n\n', template_format='f-string', validate_template=True), revised_summary_prompt=PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:', template_format='f-string', validate_template=True), are_all_true_prompt=PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some examples:\n===\n\nChecked Assertions: """\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n"""\nResult: False\n\n===\n\nChecked Assertions: """\n- The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-string', validate_template=True), verbose=False, **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
create_assertions_prompt (langchain.prompts.prompt.PromptTemplate) β
check_assertions_prompt (langchain.prompts.prompt.PromptTemplate) β
revised_summary_prompt (langchain.prompts.prompt.PromptTemplate) β
are_all_true_prompt (langchain.prompts.prompt.PromptTemplate) β
verbose (bool) β
kwargs (Any) β
Return type
langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain
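A hedged construction sketch (not from the library docs): from_llm builds the underlying sequential chain from the default prompts shown above; max_checks is assumed to be forwarded through **kwargs, and the default prompts can be overridden by passing your own PromptTemplate objects.
from langchain.chains import LLMSummarizationCheckerChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
checker = LLMSummarizationCheckerChain.from_llm(llm, max_checks=2, verbose=True)
# The chain extracts assertions from the summary, fact-checks them, and rewrites the summary.
revised = checker.run("The Greenland shark can live for 500 years and is the fastest fish in the ocean.")
print(revised)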
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.MapReduceChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_documents_chain, text_splitter, input_key='input_text', output_key='output_text')[source]ο
Bases: langchain.chains.base.Chain
Map-reduce chain.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
combine_documents_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) β
text_splitter (langchain.text_splitter.TextSplitter) β
input_key (str) β
output_key (str) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute combine_documents_chain: BaseCombineDocumentsChain [Required]ο
Chain to use to combine documents.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute text_splitter: TextSplitter [Required]ο
Text splitter to use.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_params(llm, prompt, text_splitter, callbacks=None, combine_chain_kwargs=None, reduce_chain_kwargs=None, **kwargs)[source]ο
Construct a map-reduce chain that uses the chain for map and reduce.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
prompt (langchain.prompts.base.BasePromptTemplate) β
text_splitter (langchain.text_splitter.TextSplitter) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
combine_chain_kwargs (Optional[Mapping[str, Any]]) β
reduce_chain_kwargs (Optional[Mapping[str, Any]]) β
kwargs (Any) β
Return type
langchain.chains.mapreduce.MapReduceChain
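A hedged sketch of from_params (not from the library docs): the prompt and LLM are wired into a map-reduce documents chain, and the text splitter chunks the input before the map step. The single-variable prompt below is an assumption that works with the default combine behaviour.
from langchain.chains import MapReduceChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.text_splitter import CharacterTextSplitter

prompt = PromptTemplate(
    input_variables=["text"],
    template="Write a concise summary of the following:\n\n{text}\n\nCONCISE SUMMARY:",
)
chain = MapReduceChain.from_params(
    llm=OpenAI(temperature=0),
    prompt=prompt,
    text_splitter=CharacterTextSplitter(),
)
# Toy "long document" built from repeated paragraphs so the splitter has something to chunk.
long_text = "\n\n".join(
    ["LangChain is a framework for developing applications powered by language models."] * 200
)
print(chain.run(long_text))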
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.MultiPromptChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, router_chain, destination_chains, default_chain, silent_errors=False)[source]ο
Bases: langchain.chains.router.base.MultiRouteChain
A multi-route chain that uses an LLM router chain to choose amongst prompts.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
router_chain (langchain.chains.router.base.RouterChain) β
destination_chains (Mapping[str, langchain.chains.llm.LLMChain]) β
default_chain (langchain.chains.llm.LLMChain) β
silent_errors (bool) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute default_chain: LLMChain [Required]ο
Default chain to use when router doesn't map input to one of the destinations.
attribute destination_chains: Mapping[str, LLMChain] [Required]ο
Map of name to candidate chains that inputs can be routed to.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute router_chain: RouterChain [Required]ο
Chain for deciding a destination chain and the input to it.
attribute silent_errors: bool = Falseο
If True, use default_chain when an invalid destination name is provided.
Defaults to False.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_prompts(llm, prompt_infos, default_chain=None, **kwargs)[source]ο
Convenience constructor for instantiating from destination prompts.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
prompt_infos (List[Dict[str, str]]) β
default_chain (Optional[langchain.chains.llm.LLMChain]) β
kwargs (Any) β
Return type
langchain.chains.router.multi_prompt.MultiPromptChain
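A hedged sketch of from_prompts (not from the library docs): each prompt_infos entry is assumed to carry a name, a description the router uses to pick a destination, and a prompt_template containing an {input} variable; when no destination matches, the default chain answers.
from langchain.chains import MultiPromptChain
from langchain.llms import OpenAI

prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": "You are a physics professor. Answer this question:\n{input}",
    },
    {
        "name": "history",
        "description": "Good for answering questions about history",
        "prompt_template": "You are a historian. Answer this question:\n{input}",
    },
]
chain = MultiPromptChain.from_prompts(OpenAI(temperature=0), prompt_infos, verbose=True)
print(chain.run("What is the speed of light in a vacuum?"))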
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.MultiRetrievalQAChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, router_chain, destination_chains, default_chain, silent_errors=False)[source]ο
Bases: langchain.chains.router.base.MultiRouteChain
A multi-route chain that uses an LLM router chain to choose amongst retrieval
QA chains.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
router_chain (langchain.chains.router.llm_router.LLMRouterChain) β
destination_chains (Mapping[str, langchain.chains.retrieval_qa.base.BaseRetrievalQA]) β
default_chain (langchain.chains.base.Chain) β
silent_errors (bool) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute default_chain: Chain [Required]ο
Default chain to use when router doesn't map input to one of the destinations.
attribute destination_chains: Mapping[str, BaseRetrievalQA] [Required]ο
Map of name to candidate chains that inputs can be routed to.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute router_chain: LLMRouterChain [Required]ο
Chain for deciding a destination chain and the input to it.
attribute silent_errors: bool = Falseο
If True, use default_chain when an invalid destination name is provided.
Defaults to False.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_retrievers(llm, retriever_infos, default_retriever=None, default_prompt=None, default_chain=None, **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
retriever_infos (List[Dict[str, Any]]) β
default_retriever (Optional[langchain.schema.BaseRetriever]) β
default_prompt (Optional[langchain.prompts.prompt.PromptTemplate]) β
default_chain (Optional[langchain.chains.base.Chain]) β
kwargs (Any) β
Return type
langchain.chains.router.multi_retrieval_qa.MultiRetrievalQAChain
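A hedged sketch of from_retrievers (not from the library docs): each retriever_infos entry is assumed to carry a name, a description used for routing, and a retriever (any BaseRetriever). The small in-memory FAISS indexes below are stand-ins for real document stores.
from langchain.chains import MultiRetrievalQAChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
# Two toy indexes standing in for real document collections.
sotu_retriever = FAISS.from_texts(["The president spoke about the economy and jobs."], embeddings).as_retriever()
essay_retriever = FAISS.from_texts(["Paul Graham wrote essays about startups."], embeddings).as_retriever()

retriever_infos = [
    {
        "name": "state of the union",
        "description": "Good for questions about the State of the Union address",
        "retriever": sotu_retriever,
    },
    {
        "name": "pg essays",
        "description": "Good for questions about Paul Graham's essays",
        "retriever": essay_retriever,
    },
]
chain = MultiRetrievalQAChain.from_retrievers(ChatOpenAI(temperature=0), retriever_infos, verbose=True)
print(chain.run("What did the president say about the economy?"))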
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.MultiRouteChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, router_chain, destination_chains, default_chain, silent_errors=False)[source]ο
Bases: langchain.chains.base.Chain
Use a single chain to route an input to one of multiple candidate chains.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
router_chain (langchain.chains.router.base.RouterChain) β
destination_chains (Mapping[str, langchain.chains.base.Chain]) β
default_chain (langchain.chains.base.Chain) β
silent_errors (bool) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute default_chain: Chain [Required]ο
Default chain to use when none of the destination chains are suitable.
attribute destination_chains: Mapping[str, Chain] [Required]ο
Chains that return final answer to inputs.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute router_chain: RouterChain [Required]ο
Chain that routes inputs to destination chains.
attribute silent_errors: bool = Falseο
If True, use default_chain when an invalid destination name is provided.
Defaults to False.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path=βpath/chain.yamlβ)
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.NatBotChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain, objective, llm=None, input_url_key='url', input_browser_content_key='browser_content', previous_command='', output_key='command')[source]ο
Bases: langchain.chains.base.Chain
Implement an LLM-driven browser.
Example
from langchain import NatBotChain
natbot = NatBotChain.from_default("Buy me a new hat.")
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
llm_chain (langchain.chains.llm.LLMChain) β
objective (str) β
llm (Optional[langchain.base_language.BaseLanguageModel]) β
input_url_key (str) β
input_browser_content_key (str) β
previous_command (str) β
output_key (str) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute llm: Optional[BaseLanguageModel] = Noneο
[Deprecated] LLM wrapper to use.
attribute llm_chain: LLMChain [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute objective: str [Required]ο
Objective that NatBot is tasked with completing.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
execute(url, browser_content)[source]ο
Figure out next browser command to run.
Parameters
url (str) β URL of the site currently on.
browser_content (str) β Content of the page as currently displayed by the browser.
Returns
Next browser command to run.
Return type
str
Example
browser_content = "...."
llm_command = natbot.execute("www.google.com", browser_content)
classmethod from_default(objective, **kwargs)[source]ο
Load with default LLMChain.
Parameters
objective (str) β
kwargs (Any) β
Return type
langchain.chains.natbot.base.NatBotChain
classmethod from_llm(llm, objective, **kwargs)[source]ο
Load from LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
objective (str) β
kwargs (Any) β
Return type
langchain.chains.natbot.base.NatBotChain
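A hedged sketch of driving NatBotChain (not from the library docs): the URL and browser content below are placeholders; in practice the browser content would be the simplified DOM that a controlled browser reports for the current page, and execute would be called in a loop after each action.
from langchain.chains import NatBotChain

natbot = NatBotChain.from_default("Find the price of a plain white t-shirt.")
url = "https://www.example.com"  # placeholder URL
browser_content = "<button id=1>Search</button> <input id=2>query</input>"  # placeholder page content
command = natbot.execute(url, browser_content)
# `command` is the next browser action suggested by the LLM (e.g. a TYPE or CLICK instruction).
print(command)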
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.NebulaGraphQAChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, graph, ngql_generation_chain, qa_chain, input_key='query', output_key='result')[source]ο
Bases: langchain.chains.base.Chain
Chain for question-answering against a graph by generating nGQL statements.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
graph (langchain.graphs.nebula_graph.NebulaGraph) β
ngql_generation_chain (langchain.chains.llm.LLMChain) β
qa_chain (langchain.chains.llm.LLMChain) β
input_key (str) β
output_key (str) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute graph: NebulaGraph [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute ngql_generation_chain: LLMChain [Required]ο
attribute qa_chain: LLMChain [Required]ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, *, qa_prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="You are an assistant that helps to form nice and human understandable answers.\nThe information part contains the provided information that you must use to construct an answer.\nThe provided information is authorative, you must never doubt it or try to use your internal knowledge to correct it.\nMake the answer sound as a response to the question. Do not mention that you based the result on the given information.\nIf the provided information is empty, say that you don't know the answer.\nInformation:\n{context}\n\nQuestion: {question}\nHelpful Answer:", template_format='f-string', validate_template=True), ngql_prompt=PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template="Task:Generate NebulaGraph Cypher statement to query a graph database.\n\nInstructions:\n\nFirst, generate cypher then convert it to NebulaGraph Cypher dialect(rather than standard):\n1. it requires explicit label specification when referring to node properties: v.`Foo`.name\n2. it uses double equals sign for comparison: `==` rather than `=`\nFor instance:\n```diff\n< MATCH (p:person)-[:directed]->(m:movie) WHERE m.name = 'The Godfather II'\n< RETURN p.name;\n---\n> MATCH (p:`person`)-[:directed]->(m:`movie`) WHERE m.`movie`.`name` == 'The Godfather II'\n> RETURN p.`person`.`name`;\n```\n\nUse only the provided relationship types and properties in the schema.\nDo not use any other relationship types or properties that are not provided.\nSchema:\n{schema}\nNote: Do not include any explanations or apologies in your responses.\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\nDo not include any text except the generated Cypher statement.\n\nThe question is:\n{question}", template_format='f-string', validate_template=True), **kwargs)[source]ο
Initialize from LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
qa_prompt (langchain.prompts.base.BasePromptTemplate) β
ngql_prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain
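A hedged sketch of from_llm (not from the library docs): the graph is assumed to be forwarded through **kwargs, and the NebulaGraph connection details (space name, address, credentials) below are placeholders for a running NebulaGraph instance.
from langchain.chains import NebulaGraphQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import NebulaGraph

# Placeholder connection details for a local NebulaGraph deployment.
graph = NebulaGraph(
    space="basketballplayer",
    username="root",
    password="nebula",
    address="127.0.0.1",
    port=9669,
)
chain = NebulaGraphQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
print(chain.run("Which players played for the Spurs?"))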
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.OpenAIModerationChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, client=None, model_name=None, error=False, input_key='input', output_key='output', openai_api_key=None, openai_organization=None)[source]ο
Bases: langchain.chains.base.Chain
Pass input through a moderation endpoint.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chains import OpenAIModerationChain
moderation = OpenAIModerationChain()
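A hedged continuation of the example above: running the chain on a piece of text returns it unchanged when it passes moderation; if the text is flagged, the output is replaced with a policy message, or an error is raised when error=True (behaviour inferred from the error attribute documented below).
checked = moderation.run("some user supplied text")
print(checked)
strict_moderation = OpenAIModerationChain(error=True)  # raises instead of returning a policy message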
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
client (Any) β
model_name (Optional[str]) β
error (bool) β
input_key (str) β
output_key (str) β
openai_api_key (Optional[str]) β
openai_organization (Optional[str]) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute error: bool = Falseο
Whether or not to error if bad content was found.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute model_name: Optional[str] = Noneο
Moderation model name to use.
attribute openai_api_key: Optional[str] = Noneο
attribute openai_organization: Optional[str] = Noneο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.OpenAPIEndpointChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, api_request_chain, api_response_chain=None, api_operation, requests=None, param_mapping, return_intermediate_steps=False, instructions_key='instructions', output_key='output', max_text_length=None)[source]ο
Bases: langchain.chains.base.Chain, pydantic.main.BaseModel
Chain that interacts with an OpenAPI endpoint using natural language.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
api_request_chain (langchain.chains.llm.LLMChain) β
api_response_chain (Optional[langchain.chains.llm.LLMChain]) β
api_operation (langchain.tools.openapi.utils.api_models.APIOperation) β
requests (langchain.requests.Requests) β
param_mapping (langchain.chains.api.openapi.chain._ParamMapping) β
return_intermediate_steps (bool) β
instructions_key (str) β
output_key (str) β
max_text_length (Optional[int]) β
Return type
None
attribute api_operation: APIOperation [Required]ο
attribute api_request_chain: LLMChain [Required]ο
attribute api_response_chain: Optional[LLMChain] = Noneο
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute param_mapping: _ParamMapping [Required]ο
attribute requests: Requests [Optional]ο
attribute return_intermediate_steps: bool = Falseο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
deserialize_json_input(serialized_args)[source]ο
Use the serialized TypeScript dictionary.
Resolve the path, query params dict, and optional requestBody dict.
Parameters
serialized_args (str) β
Return type
dict
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_api_operation(operation, llm, requests=None, verbose=False, return_intermediate_steps=False, raw_response=False, callbacks=None, **kwargs)[source]ο
Create an OpenAPIEndpointChain from an operation and a spec.
Parameters
operation (langchain.tools.openapi.utils.api_models.APIOperation) β
llm (langchain.base_language.BaseLanguageModel) β
requests (Optional[langchain.requests.Requests]) β
verbose (bool) β
return_intermediate_steps (bool) β
raw_response (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.chains.api.openapi.chain.OpenAPIEndpointChain
classmethod from_url_and_method(spec_url, path, method, llm, requests=None, return_intermediate_steps=False, **kwargs)[source]ο
Create an OpenAPIEndpoint from a spec at the specified url.
Parameters
spec_url (str) β
path (str) β
method (str) β
llm (langchain.base_language.BaseLanguageModel) β
requests (Optional[langchain.requests.Requests]) β
return_intermediate_steps (bool) β
kwargs (Any) β
Return type
langchain.chains.api.openapi.chain.OpenAPIEndpointChain
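A hedged sketch of from_url_and_method (not from the library docs): the spec URL, path, and method below are hypothetical; substitute the location of a real OpenAPI spec and one of its operations.
from langchain.chains import OpenAPIEndpointChain
from langchain.llms import OpenAI

chain = OpenAPIEndpointChain.from_url_and_method(
    spec_url="https://example.com/openapi.yaml",  # hypothetical spec location
    path="/search",                               # hypothetical operation path
    method="get",
    llm=OpenAI(temperature=0),
)
print(chain.run("Find items matching 'langchain'"))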
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.PALChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain, llm=None, prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """Olivia has $23. She bought five bagels for $3 each. How much money does she have left?"""\nΒ Β Β money_initial = 23\nΒ Β Β bagels = 5\nΒ Β Β bagel_cost = 3\nΒ Β Β money_spent = bagels * bagel_cost\nΒ Β Β money_left = money_initial - money_spent\nΒ Β Β result = money_left\nΒ Β Β return result\n\n\n\n\n\nQ: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?"""\nΒ Β Β golf_balls_initial = 58\nΒ Β Β golf_balls_lost_tuesday = 23\nΒ Β Β golf_balls_lost_wednesday = 2\nΒ Β Β golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\nΒ Β Β result = golf_balls_left\nΒ Β Β return result\n\n\n\n\n\nQ: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?"""\nΒ Β Β computers_initial = 9\nΒ Β Β computers_per_day = 5\nΒ Β Β num_days = 4Β # 4 days between monday and thursday\nΒ Β Β computers_added = computers_per_day * num_days\nΒ Β Β computers_total = computers_initial + computers_added\nΒ Β Β result = computers_total\nΒ Β Β return result\n\n\n\n\n\nQ: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?"""\nΒ Β Β toys_initial = 5\nΒ Β Β mom_toys = 2\nΒ Β Β dad_toys = 2\nΒ Β Β total_received = mom_toys + dad_toys\nΒ Β Β total_toys = toys_initial + total_received\nΒ Β Β result = total_toys\nΒ Β Β return result\n\n\n\n\n\nQ: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?"""\nΒ Β Β jason_lollipops_initial = 20\nΒ Β Β jason_lollipops_after = 12\nΒ Β Β denny_lollipops = jason_lollipops_initial - jason_lollipops_after\nΒ Β Β result = denny_lollipops\nΒ Β Β return result\n\n\n\n\n\nQ: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """Leah had 32 chocolates and her sister had 42. 
If they ate 35, how many pieces do they have left in total?"""\nΒ Β Β leah_chocolates = 32\nΒ Β Β sister_chocolates = 42\nΒ Β Β total_chocolates = leah_chocolates + sister_chocolates\nΒ Β Β chocolates_eaten = 35\nΒ Β Β chocolates_left = total_chocolates - chocolates_eaten\nΒ Β Β result = chocolates_left\nΒ Β Β return result\n\n\n\n\n\nQ: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?"""\nΒ Β Β cars_initial = 3\nΒ Β Β cars_arrived = 2\nΒ Β Β total_cars = cars_initial + cars_arrived\nΒ Β Β result = total_cars\nΒ Β Β return result\n\n\n\n\n\nQ: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?"""\nΒ Β Β trees_initial = 15\nΒ Β Β trees_after = 21\nΒ Β Β trees_added = trees_after - trees_initial\nΒ Β Β result = trees_added\nΒ Β Β return result\n\n\n\n\n\nQ: {question}\n\n# solution in Python:\n\n\n', template_format='f-string', validate_template=True), stop='\n\n', get_answer_expr='print(solution())', python_globals=None, python_locals=None, output_key='result', return_intermediate_steps=False)[source]ο
Bases: langchain.chains.base.Chain
Implements Program-Aided Language Models.
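Example (a minimal sketch; assumes an OpenAI API key is configured in the environment):
from langchain.llms import OpenAI
from langchain.chains import PALChain
llm = OpenAI(temperature=0, max_tokens=512)
pal_chain = PALChain.from_math_prompt(llm)
question = "The cafeteria had 23 apples. They used 20 and bought 6 more. How many apples do they have?"
answer = pal_chain.run(question)  # the generated Python program is executed and its printed result is returned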
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
llm_chain (langchain.chains.llm.LLMChain) β
llm (Optional[langchain.base_language.BaseLanguageModel]) β
prompt (langchain.prompts.base.BasePromptTemplate) β
stop (str) β
get_answer_expr (str) β
python_globals (Optional[Dict[str, Any]]) β
python_locals (Optional[Dict[str, Any]]) β
output_key (str) β
return_intermediate_steps (bool) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute get_answer_expr: str = 'print(solution())'ο
attribute llm: Optional[BaseLanguageModel] = Noneο
[Deprecated]
attribute llm_chain: LLMChain [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """Olivia has $23. She bought five bagels for $3 each. How much money does she have left?"""\nΒ Β Β money_initial = 23\nΒ Β Β bagels = 5\nΒ Β Β bagel_cost = 3\nΒ Β Β money_spent = bagels * bagel_cost\nΒ Β Β money_left = money_initial - money_spent\nΒ Β Β result = money_left\nΒ Β Β return result\n\n\n\n\n\nQ: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?"""\nΒ Β Β golf_balls_initial = 58\nΒ Β Β golf_balls_lost_tuesday = 23\nΒ Β Β golf_balls_lost_wednesday = 2\nΒ Β Β golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\nΒ Β Β result = golf_balls_left\nΒ Β Β return result\n\n\n\n\n\nQ: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?"""\nΒ Β Β computers_initial = 9\nΒ Β Β computers_per_day = 5\nΒ Β Β num_days = 4Β # 4 days between monday and thursday\nΒ Β Β computers_added = computers_per_day * num_days\nΒ Β Β computers_total = computers_initial + computers_added\nΒ Β Β result = computers_total\nΒ Β Β return result\n\n\n\n\n\nQ: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?"""\nΒ Β Β toys_initial = 5\nΒ Β Β mom_toys = 2\nΒ Β Β dad_toys = 2\nΒ Β Β total_received = mom_toys + dad_toys\nΒ Β Β total_toys = toys_initial + total_received\nΒ Β Β result = total_toys\nΒ Β Β return result\n\n\n\n\n\nQ: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?"""\nΒ Β Β jason_lollipops_initial = 20\nΒ Β Β jason_lollipops_after = 12\nΒ Β Β denny_lollipops = jason_lollipops_initial - jason_lollipops_after\nΒ Β Β result = denny_lollipops\nΒ Β Β return result\n\n\n\n\n\nQ: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """Leah had 32 chocolates and her sister had 42. 
If they ate 35, how many pieces do they have left in total?"""\nΒ Β Β leah_chocolates = 32\nΒ Β Β sister_chocolates = 42\nΒ Β Β total_chocolates = leah_chocolates + sister_chocolates\nΒ Β Β chocolates_eaten = 35\nΒ Β Β chocolates_left = total_chocolates - chocolates_eaten\nΒ Β Β result = chocolates_left\nΒ Β Β return result\n\n\n\n\n\nQ: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?"""\nΒ Β Β cars_initial = 3\nΒ Β Β cars_arrived = 2\nΒ Β Β total_cars = cars_initial + cars_arrived\nΒ Β Β result = total_cars\nΒ Β Β return result\n\n\n\n\n\nQ: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\n\n# solution in Python:\n\n\ndef solution():\nΒ Β Β """There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?"""\nΒ Β Β trees_initial = 15\nΒ Β Β trees_after = 21\nΒ Β Β trees_added = trees_after - trees_initial\nΒ Β Β result = trees_added\nΒ Β Β return result\n\n\n\n\n\nQ: {question}\n\n# solution in Python:\n\n\n', template_format='f-string', validate_template=True)ο
[Deprecated]
attribute python_globals: Optional[Dict[str, Any]] = Noneο
attribute python_locals: Optional[Dict[str, Any]] = Noneο
attribute return_intermediate_steps: bool = Falseο
attribute stop: str = '\n\n'ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_colored_object_prompt(llm, **kwargs)[source]ο
Load PAL from colored object prompt.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
kwargs (Any) β
Return type
langchain.chains.pal.base.PALChain
classmethod from_math_prompt(llm, **kwargs)[source]ο
Load PAL from math prompt.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
kwargs (Any) β
Return type
langchain.chains.pal.base.PALChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.QAGenerationChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain, text_splitter=<langchain.text_splitter.RecursiveCharacterTextSplitter object>, input_key='text', output_key='questions', k=None)[source]ο
Bases: langchain.chains.base.Chain
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
llm_chain (langchain.chains.llm.LLMChain) β
text_splitter (langchain.text_splitter.TextSplitter) β
input_key (str) β
output_key (str) β
k (Optional[int]) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute input_key: str = 'text'ο
attribute k: Optional[int] = Noneο
attribute llm_chain: LLMChain [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute output_key: str = 'questions'ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute text_splitter: TextSplitter = <langchain.text_splitter.RecursiveCharacterTextSplitter object>ο
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, prompt=None, **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
prompt (Optional[langchain.prompts.base.BasePromptTemplate]) β
kwargs (Any) β
Return type
langchain.chains.qa_generation.base.QAGenerationChain
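For instance (a minimal sketch; the input text is illustrative):
from langchain.chat_models import ChatOpenAI
from langchain.chains import QAGenerationChain
chain = QAGenerationChain.from_llm(ChatOpenAI(temperature=0))
qa_pairs = chain.run("LangChain is a framework for building applications with language models.")
# qa_pairs is a list with one {"question": ..., "answer": ...} dict per text chunk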
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property input_keys: List[str]ο
Input keys this chain expects.
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
property output_keys: List[str]ο
Output keys this chain expects.
class langchain.chains.QAWithSourcesChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_documents_chain, question_key='question', input_docs_key='docs', answer_key='answer', sources_answer_key='sources', return_source_documents=False)[source]ο
Bases: langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain
Question answering with sources over documents.
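Example (a minimal sketch; the documents and their source metadata are illustrative):
from langchain.llms import OpenAI
from langchain.chains import QAWithSourcesChain
from langchain.docstore.document import Document
docs = [
    Document(page_content="The service is deployed with Docker.", metadata={"source": "docs/deploy.md"}),
    Document(page_content="Unit tests are run with pytest.", metadata={"source": "docs/testing.md"}),
]
chain = QAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff")
result = chain({"docs": docs, "question": "How is the service deployed?"})
# result["answer"] holds the answer; result["sources"] lists the cited source ids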
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
combine_documents_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) β
question_key (str) β
input_docs_key (str) β
answer_key (str) β
sources_answer_key (str) β
return_source_documents (bool) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute combine_documents_chain: BaseCombineDocumentsChain [Required]ο
Chain to use to combine documents.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute return_source_documents: bool = Falseο
Return the source documents.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_chain_type(llm, chain_type='stuff', chain_type_kwargs=None, **kwargs)ο
Load chain from chain type.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
chain_type (str) β
chain_type_kwargs (Optional[dict]) β
kwargs (Any) β
Return type
langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain
classmethod from_llm(llm, document_prompt=PromptTemplate(input_variables=['page_content', 'source'], output_parser=None, partial_variables={}, template='Content: {page_content}\nSource: {source}', template_format='f-string', validate_template=True), question_prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template='Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n{context}\nQuestion: {question}\nRelevant text, if any:', template_format='f-string', validate_template=True), combine_prompt=PromptTemplate(input_variables=['summaries', 'question'], output_parser=None, partial_variables={}, template='Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). \nIf you don\'t know the answer, just say that you don\'t know. Don\'t try to make up an answer.\nALWAYS return a "SOURCES" part in your answer.\n\nQUESTION: Which state/country\'s law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts inΒ relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for anΒ injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other)Β right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuationΒ in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of anyΒ kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (asΒ defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.Β \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russiaβs Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. 
Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we wonβt stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLetβs use this moment to reset. Letβs stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease.Β \n\nLetβs stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans.Β \n\nWe canβt change how divided weβve been. But we can change how we move forwardβon COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans whoβd grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 yearsΒ of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards.Β \n\nTo all Americans, I will be honest with you, as Iβve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd Iβm taking robust action to make sure the pain of our sanctionsΒ is targeted at Russiaβs economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.Β \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies.Β \n\nThese steps will help blunt gas prices here at home. And I know the news about whatβs happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nItβs based on DARPAβthe Defense Department project that led to the Internet, GPS, and so much more.Β \n\nARPA-H will have a singular purposeβto drive breakthroughs in cancer, Alzheimerβs, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americansβtonight , we have gathered in a sacred spaceβthe citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. 
\n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:', template_format='f-string', validate_template=True), **kwargs)ο
Construct the chain from an LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
document_prompt (langchain.prompts.base.BasePromptTemplate) β
question_prompt (langchain.prompts.base.BasePromptTemplate) β
combine_prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.RetrievalQA(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_documents_chain, input_key='query', output_key='result', return_source_documents=False, retriever)[source]ο
Bases: langchain.chains.retrieval_qa.base.BaseRetrievalQA
Chain for question-answering against an index.
Example
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS
from langchain.vectorstores.base import VectorStoreRetriever
retriever = VectorStoreRetriever(vectorstore=FAISS(...))
retrievalQA = RetrievalQA.from_llm(llm=OpenAI(), retriever=retriever)
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
combine_documents_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) β
input_key (str) β
output_key (str) β
return_source_documents (bool) β
retriever (langchain.schema.BaseRetriever) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute combine_documents_chain: BaseCombineDocumentsChain [Required]ο
Chain to use to combine the documents.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute retriever: BaseRetriever [Required]ο
attribute return_source_documents: bool = Falseο
Return the source documents.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_chain_type(llm, chain_type='stuff', chain_type_kwargs=None, **kwargs)ο
Load chain from chain type.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
chain_type (str) β
chain_type_kwargs (Optional[dict]) β
kwargs (Any) β
Return type
langchain.chains.retrieval_qa.base.BaseRetrievalQA
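A common way to construct the chain is via from_chain_type (a minimal sketch; vectorstore is assumed to be an existing vector store already populated with documents):
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),  # vectorstore is assumed to exist
    return_source_documents=True,
)
result = qa({"query": "What does the warranty cover?"})
# result["result"] is the answer; result["source_documents"] contains the retrieved docs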
classmethod from_llm(llm, prompt=None, **kwargs)ο
Initialize from LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
prompt (Optional[langchain.prompts.prompt.PromptTemplate]) β
kwargs (Any) β
Return type
langchain.chains.retrieval_qa.base.BaseRetrievalQA
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.RetrievalQAWithSourcesChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_documents_chain, question_key='question', input_docs_key='docs', answer_key='answer', sources_answer_key='sources', return_source_documents=False, retriever, reduce_k_below_max_tokens=False, max_tokens_limit=3375)[source]ο
Bases: langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain
Question-answering with sources over an index.
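Example (a minimal sketch; vectorstore is assumed to be an existing, already populated vector store):
from langchain.llms import OpenAI
from langchain.chains import RetrievalQAWithSourcesChain
chain = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),  # vectorstore is assumed to exist
)
result = chain({"question": "What does the warranty cover?"})
# result["answer"] holds the answer; result["sources"] lists the source identifiers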
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
combine_documents_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) β
question_key (str) β
input_docs_key (str) β
answer_key (str) β
sources_answer_key (str) β
return_source_documents (bool) β
retriever (langchain.schema.BaseRetriever) β
reduce_k_below_max_tokens (bool) β
max_tokens_limit (int) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute combine_documents_chain: BaseCombineDocumentsChain [Required]ο
Chain to use to combine documents.
attribute max_tokens_limit: int = 3375ο
Restrict the docs to return from the store based on tokens;
enforced only for StuffDocumentsChain, and only if reduce_k_below_max_tokens is set to True.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute reduce_k_below_max_tokens: bool = Falseο
Reduce the number of results returned from the store based on the token limit.
attribute retriever: langchain.schema.BaseRetriever [Required]ο
Index to connect to.
attribute return_source_documents: bool = Falseο
Return the source documents.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_chain_type(llm, chain_type='stuff', chain_type_kwargs=None, **kwargs)ο
Load chain from chain type.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
chain_type (str) β
chain_type_kwargs (Optional[dict]) β
kwargs (Any) β
Return type
langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain
classmethod from_llm(llm, document_prompt=PromptTemplate(input_variables=['page_content', 'source'], output_parser=None, partial_variables={}, template='Content: {page_content}\nSource: {source}', template_format='f-string', validate_template=True), question_prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template='Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n{context}\nQuestion: {question}\nRelevant text, if any:', template_format='f-string', validate_template=True), combine_prompt=PromptTemplate(input_variables=['summaries', 'question'], output_parser=None, partial_variables={}, template='Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). \nIf you don\'t know the answer, just say that you don\'t know. Don\'t try to make up an answer.\nALWAYS return a "SOURCES" part in your answer.\n\nQUESTION: Which state/country\'s law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts inΒ relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for anΒ injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other)Β right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuationΒ in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of anyΒ kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (asΒ defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.Β \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russiaβs Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. 
Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we wonβt stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLetβs use this moment to reset. Letβs stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease.Β \n\nLetβs stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans.Β \n\nWe canβt change how divided weβve been. But we can change how we move forwardβon COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans whoβd grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 yearsΒ of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards.Β \n\nTo all Americans, I will be honest with you, as Iβve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd Iβm taking robust action to make sure the pain of our sanctionsΒ is targeted at Russiaβs economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.Β \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies.Β \n\nThese steps will help blunt gas prices here at home. And I know the news about whatβs happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nItβs based on DARPAβthe Defense Department project that led to the Internet, GPS, and so much more.Β \n\nARPA-H will have a singular purposeβto drive breakthroughs in cancer, Alzheimerβs, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americansβtonight , we have gathered in a sacred spaceβthe citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. 
\n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:', template_format='f-string', validate_template=True), **kwargs)ο
Construct the chain from an LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
document_prompt (langchain.prompts.base.BasePromptTemplate) β
question_prompt (langchain.prompts.base.BasePromptTemplate) β
combine_prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.RouterChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None)[source]ο
Bases: langchain.chains.base.Chain, abc.ABC
Chain that outputs the name of a destination chain and the inputs to it.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async aroute(inputs, callbacks=None)[source]ο
Parameters
inputs (Dict[str, Any]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
langchain.chains.router.base.Route
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
route(inputs, callbacks=None)[source]ο
Parameters
inputs (Dict[str, Any]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
langchain.chains.router.base.Route
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
abstract property input_keys: List[str]ο
Input keys this chain expects.
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
property output_keys: List[str]ο
Output keys this chain expects.
class langchain.chains.SQLDatabaseChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain, llm=None, database, prompt=None, top_k=5, input_key='query', output_key='result', return_intermediate_steps=False, return_direct=False, use_query_checker=False, query_checker_prompt=None)[source]ο
Bases: langchain.chains.base.Chain
Chain for interacting with SQL Database.
Example
from langchain import SQLDatabaseChain, OpenAI, SQLDatabase
db = SQLDatabase(...)
db_chain = SQLDatabaseChain.from_llm(OpenAI(), db)
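For a runnable variant, a minimal sketch; the SQLite URI, flags, and question below are illustrative assumptions rather than documented defaults:
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

# Hypothetical database; any SQLAlchemy-compatible URI works.
db = SQLDatabase.from_uri("sqlite:///example.db")
db_chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True)

# run() takes a natural-language question and returns the answer as a string.
db_chain.run("How many rows are in the users table?")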
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
llm_chain (langchain.chains.llm.LLMChain) β
llm (Optional[langchain.base_language.BaseLanguageModel]) β
database (langchain.sql_database.SQLDatabase) β
prompt (Optional[langchain.prompts.base.BasePromptTemplate]) β
top_k (int) β
input_key (str) β
output_key (str) β
return_intermediate_steps (bool) β
return_direct (bool) β
use_query_checker (bool) β
query_checker_prompt (Optional[langchain.prompts.base.BasePromptTemplate]) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute database: SQLDatabase [Required]ο
SQL Database to connect to.
attribute llm: Optional[BaseLanguageModel] = Noneο
[Deprecated] LLM wrapper to use.
attribute llm_chain: LLMChain [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute prompt: Optional[BasePromptTemplate] = Noneο
[Deprecated] Prompt to use to translate natural language to SQL.
attribute query_checker_prompt: Optional[BasePromptTemplate] = Noneο
The prompt template that should be used by the query checker
attribute return_direct: bool = Falseο
Whether or not to return the result of querying the SQL table directly.
attribute return_intermediate_steps: bool = Falseο
Whether or not to return the intermediate steps along with the final answer.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute top_k: int = 5ο
Number of results to return from the query
attribute use_query_checker: bool = Falseο
Whether or not the query checker tool should be used to attempt
to fix the initial SQL from the LLM.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, db, prompt=None, **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
db (langchain.sql_database.SQLDatabase) β
prompt (Optional[langchain.prompts.base.BasePromptTemplate]) β
kwargs (Any) β
Return type
langchain.chains.sql_database.base.SQLDatabaseChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.SQLDatabaseSequentialChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, decider_chain, sql_chain, input_key='query', output_key='result', return_intermediate_steps=False)[source]ο
Bases: langchain.chains.base.Chain
Sequential chain for querying a SQL database.
The chain is as follows:
1. Based on the query, determine which tables to use.
2. Based on those tables, call the normal SQL database chain.
This is useful in cases where the number of tables in the database is large.
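As a rough sketch (the database URI and question are assumptions for illustration), construction mirrors SQLDatabaseChain:
from langchain import OpenAI, SQLDatabase
from langchain.chains import SQLDatabaseSequentialChain

db = SQLDatabase.from_uri("sqlite:///example.db")  # hypothetical database
chain = SQLDatabaseSequentialChain.from_llm(OpenAI(temperature=0), db)
# Step 1 picks the relevant tables; step 2 runs the normal SQL database chain.
chain.run("Which customers placed an order last month?")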
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
decider_chain (langchain.chains.llm.LLMChain) β
sql_chain (langchain.chains.sql_database.base.SQLDatabaseChain) β
input_key (str) β
output_key (str) β
return_intermediate_steps (bool) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute decider_chain: LLMChain [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute return_intermediate_steps: bool = Falseο
attribute sql_chain: SQLDatabaseChain [Required]ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, database, query_prompt=PromptTemplate(input_variables=['input', 'table_info', 'dialect', 'top_k'], output_parser=None, partial_variables={}, template='Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n\nNever query for all the columns from a specific table, only ask for a the few relevant columns given the question.\n\nPay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n\nUse the following format:\n\nQuestion: Question here\nSQLQuery: SQL Query to run\nSQLResult: Result of the SQLQuery\nAnswer: Final answer here\n\nOnly use the following tables:\n{table_info}\n\nQuestion: {input}', template_format='f-string', validate_template=True), decider_prompt=PromptTemplate(input_variables=['query', 'table_names'], output_parser=CommaSeparatedListOutputParser(), partial_variables={}, template='Given the below input question and list of potential tables, output a comma separated list of the table names that may be necessary to answer this question.\n\nQuestion: {query}\n\nTable Names: {table_names}\n\nRelevant Table Names:', template_format='f-string', validate_template=True), **kwargs)[source]ο
Load the necessary chains.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
database (langchain.sql_database.SQLDatabase) β
query_prompt (langchain.prompts.base.BasePromptTemplate) β
decider_prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.sql_database.base.SQLDatabaseSequentialChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.SequentialChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, chains, input_variables, output_variables, return_all=False)[source]ο
Bases: langchain.chains.base.Chain
Chain where the outputs of one chain feed directly into the next.
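A minimal sketch of wiring two LLMChains together; the prompts, keys, and inputs below are illustrative assumptions:
from langchain import OpenAI, LLMChain, PromptTemplate
from langchain.chains import SequentialChain

llm = OpenAI(temperature=0.7)
synopsis_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write a synopsis for a play titled {title}."),
    output_key="synopsis",
)
review_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write a short review of this synopsis:\n{synopsis}"),
    output_key="review",
)
overall_chain = SequentialChain(
    chains=[synopsis_chain, review_chain],
    input_variables=["title"],
    output_variables=["synopsis", "review"],
)
# Multiple output keys, so call the chain with a dict rather than run().
overall_chain({"title": "Tragedy at sunset on the beach"})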
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
chains (List[langchain.chains.base.Chain]) β
input_variables (List[str]) β
output_variables (List[str]) β
return_all (bool) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute chains: List[langchain.chains.base.Chain] [Required]ο
attribute input_variables: List[str] [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute return_all: bool = Falseο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.SimpleSequentialChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, chains, strip_outputs=False, input_key='input', output_key='output')[source]ο
Bases: langchain.chains.base.Chain
Simple chain where the outputs of one step feed directly into the next.
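A minimal sketch, assuming two single-input, single-output LLMChains (prompts are illustrative):
from langchain import OpenAI, LLMChain, PromptTemplate
from langchain.chains import SimpleSequentialChain

llm = OpenAI(temperature=0.7)
first = LLMChain(llm=llm, prompt=PromptTemplate.from_template("Name a company that makes {product}."))
second = LLMChain(llm=llm, prompt=PromptTemplate.from_template("Write a slogan for {company}."))

# Each step takes a single string in and produces a single string out.
chain = SimpleSequentialChain(chains=[first, second], verbose=True)
chain.run("colorful socks")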
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
chains (List[langchain.chains.base.Chain]) β
strip_outputs (bool) β
input_key (str) β
output_key (str) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute chains: List[langchain.chains.base.Chain] [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute strip_outputs: bool = Falseο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.TransformChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_variables, output_variables, transform)[source]ο
Bases: langchain.chains.base.Chain
Chain that transforms chain output by applying an arbitrary function.
Example
from langchain import TransformChain
transform_chain = TransformChain(
    input_variables=["text"],
    output_variables=["entities"],
    transform=func,
)
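For illustration, a sketch with a concrete transform function; the truncation logic and variable names are assumptions, not documented behavior:
from langchain.chains import TransformChain

def shorten(inputs: dict) -> dict:
    # Keep only the first 500 characters of the incoming text.
    return {"short_text": inputs["text"][:500]}

transform_chain = TransformChain(
    input_variables=["text"],
    output_variables=["short_text"],
    transform=shorten,
)
transform_chain.run("some very long document ...")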
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
input_variables (List[str]) β
output_variables (List[str]) β
transform (Callable[[Dict[str, str]], Dict[str, str]]) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute input_variables: List[str] [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute output_variables: List[str] [Required]ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute transform: Callable[[Dict[str, str]], Dict[str, str]] [Required]ο
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.VectorDBQA(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_documents_chain, input_key='query', output_key='result', return_source_documents=False, vectorstore, k=4, search_type='similarity', search_kwargs=None)[source]ο
Bases: langchain.chains.retrieval_qa.base.BaseRetrievalQA
Chain for question-answering against a vector database.
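A minimal usage sketch via from_chain_type; the corpus, embeddings, and vector store choice are assumptions:
from langchain import OpenAI
from langchain.chains import VectorDBQA
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Hypothetical corpus; any VectorStore implementation works.
vectorstore = FAISS.from_texts(["LangChain supports many vector stores."], OpenAIEmbeddings())
qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type="stuff", vectorstore=vectorstore)
qa.run("Which vector stores does LangChain support?")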
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
combine_documents_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) β
input_key (str) β
output_key (str) β
return_source_documents (bool) β
vectorstore (langchain.vectorstores.base.VectorStore) β
k (int) β
search_type (str) β
search_kwargs (Dict[str, Any]) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute combine_documents_chain: BaseCombineDocumentsChain [Required]ο
Chain to use to combine the documents.
attribute k: int = 4ο
Number of documents to query for.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute return_source_documents: bool = Falseο
Return the source documents.
attribute search_kwargs: Dict[str, Any] [Optional]ο
Extra search args.
attribute search_type: str = 'similarity'ο
Search type to use over the vectorstore: "similarity" or "mmr".
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute vectorstore: VectorStore [Required]ο
Vector Database to connect to.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_chain_type(llm, chain_type='stuff', chain_type_kwargs=None, **kwargs)ο
Load chain from chain type.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
chain_type (str) β
chain_type_kwargs (Optional[dict]) β
kwargs (Any) β
Return type
langchain.chains.retrieval_qa.base.BaseRetrievalQA
classmethod from_llm(llm, prompt=None, **kwargs)ο
Initialize from LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
prompt (Optional[langchain.prompts.prompt.PromptTemplate]) β
kwargs (Any) β
Return type
langchain.chains.retrieval_qa.base.BaseRetrievalQA
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.VectorDBQAWithSourcesChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_documents_chain, question_key='question', input_docs_key='docs', answer_key='answer', sources_answer_key='sources', return_source_documents=False, vectorstore, k=4, reduce_k_below_max_tokens=False, max_tokens_limit=3375, search_kwargs=None)[source]ο
Bases: langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain
Question-answering with sources over a vector database.
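A rough sketch (corpus and metadata are illustrative); each document should carry a "source" metadata key so the sources can be reported:
from langchain import OpenAI
from langchain.chains import VectorDBQAWithSourcesChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(
    ["The chain answers questions and cites its sources."],  # hypothetical corpus
    OpenAIEmbeddings(),
    metadatas=[{"source": "0-pl"}],
)
chain = VectorDBQAWithSourcesChain.from_chain_type(OpenAI(), chain_type="stuff", vectorstore=vectorstore)
# Multiple output keys (answer and sources), so call with a dict.
chain({"question": "What does the chain do?"})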
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
combine_documents_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) β
question_key (str) β
input_docs_key (str) β
answer_key (str) β
sources_answer_key (str) β
return_source_documents (bool) β
vectorstore (langchain.vectorstores.base.VectorStore) β
k (int) β
reduce_k_below_max_tokens (bool) β
max_tokens_limit (int) β
search_kwargs (Dict[str, Any]) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute combine_documents_chain: BaseCombineDocumentsChain [Required]ο
Chain to use to combine documents.
attribute k: int = 4ο
Number of results to return from store
attribute max_tokens_limit: int = 3375ο
Restrict the docs to return from store based on tokens;
enforced only for StuffDocumentsChain and only if reduce_k_below_max_tokens is set to true.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute reduce_k_below_max_tokens: bool = Falseο
Reduce the number of results to return from store based on tokens limit
attribute return_source_documents: bool = Falseο
Return the source documents.
attribute search_kwargs: Dict[str, Any] [Optional]ο
Extra search args.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute vectorstore: langchain.vectorstores.base.VectorStore [Required]ο
Vector Database to connect to.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_chain_type(llm, chain_type='stuff', chain_type_kwargs=None, **kwargs)ο
Load chain from chain type.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
chain_type (str) β
chain_type_kwargs (Optional[dict]) β
kwargs (Any) β
Return type
langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain
classmethod from_llm(llm, document_prompt=PromptTemplate(input_variables=['page_content', 'source'], output_parser=None, partial_variables={}, template='Content: {page_content}\nSource: {source}', template_format='f-string', validate_template=True), question_prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template='Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n{context}\nQuestion: {question}\nRelevant text, if any:', template_format='f-string', validate_template=True), combine_prompt=PromptTemplate(input_variables=['summaries', 'question'], output_parser=None, partial_variables={}, template='Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). \nIf you don\'t know the answer, just say that you don\'t know. Don\'t try to make up an answer.\nALWAYS return a "SOURCES" part in your answer.\n\nQUESTION: Which state/country\'s law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts inΒ relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for anΒ injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other)Β right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuationΒ in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of anyΒ kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (asΒ defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.Β \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russiaβs Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. 
Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we wonβt stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLetβs use this moment to reset. Letβs stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease.Β \n\nLetβs stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans.Β \n\nWe canβt change how divided weβve been. But we can change how we move forwardβon COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans whoβd grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 yearsΒ of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards.Β \n\nTo all Americans, I will be honest with you, as Iβve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd Iβm taking robust action to make sure the pain of our sanctionsΒ is targeted at Russiaβs economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.Β \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies.Β \n\nThese steps will help blunt gas prices here at home. And I know the news about whatβs happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nItβs based on DARPAβthe Defense Department project that led to the Internet, GPS, and so much more.Β \n\nARPA-H will have a singular purposeβto drive breakthroughs in cancer, Alzheimerβs, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americansβtonight , we have gathered in a sacred spaceβthe citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. 
\n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:', template_format='f-string', validate_template=True), **kwargs)ο
Construct the chain from an LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
document_prompt (langchain.prompts.base.BasePromptTemplate) β
question_prompt (langchain.prompts.base.BasePromptTemplate) β
combine_prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
langchain.chains.create_extraction_chain(schema, llm)[source]ο
Creates a chain that extracts information from a passage.
Parameters
schema (dict) β The schema of the entities to extract.
llm (langchain.base_language.BaseLanguageModel) β The language model to use.
Returns
Chain that can be used to extract information from a passage.
Return type
langchain.chains.base.Chain
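A minimal usage sketch (the schema, input text, and model name are illustrative assumptions; a function-calling capable chat model is assumed):
from langchain.chains import create_extraction_chain
from langchain.chat_models import ChatOpenAI
schema = {
    "properties": {
        "name": {"type": "string"},
        "height_in_feet": {"type": "integer"},
    },
    "required": ["name"],
}
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")  # assumed function-calling model
chain = create_extraction_chain(schema, llm)
chain.run("Alex is 5 feet tall. Claudia is one foot taller than Alex.")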
langchain.chains.create_extraction_chain_pydantic(pydantic_schema, llm)[source]ο
Creates a chain that extracts information from a passage using pydantic schema.
Parameters
pydantic_schema (Any) β The pydantic schema of the entities to extract.
llm (langchain.base_language.BaseLanguageModel) β The language model to use.
Returns
Chain that can be used to extract information from a passage.
Return type
langchain.chains.base.Chain
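A hedged sketch of the pydantic variant (the Person class, input text, and model name are illustrative assumptions):
from typing import Optional
from pydantic import BaseModel
from langchain.chains import create_extraction_chain_pydantic
from langchain.chat_models import ChatOpenAI
class Person(BaseModel):
    name: str
    height_in_feet: Optional[int]
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
chain = create_extraction_chain_pydantic(pydantic_schema=Person, llm=llm)
chain.run("Alex is 5 feet tall.")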
langchain.chains.create_tagging_chain(schema, llm)[source]ο
Creates a chain that tags (classifies) a passage according to the provided schema.
Parameters
schema (dict) β The schema of the entities to extract.
llm (langchain.base_language.BaseLanguageModel) β The language model to use.
Returns
Chain (LLMChain) that can be used to tag a passage.
Return type
langchain.chains.base.Chain
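A minimal sketch of tagging a passage (the schema, input text, and model name are illustrative assumptions):
from langchain.chains import create_tagging_chain
from langchain.chat_models import ChatOpenAI
schema = {
    "properties": {
        "sentiment": {"type": "string"},
        "language": {"type": "string"},
    }
}
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
chain = create_tagging_chain(schema, llm)
chain.run("Estoy increiblemente contento de haberte conocido!")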
langchain.chains.create_tagging_chain_pydantic(pydantic_schema, llm)[source]ο
Creates a chain that tags (classifies) a passage using a pydantic schema.
Parameters
pydantic_schema (Any) β The pydantic schema of the entities to extract.
llm (langchain.base_language.BaseLanguageModel) β The language model to use.
Returns
Chain (LLMChain) that can be used to tag a passage.
Return type
langchain.chains.base.Chain
langchain.chains.load_chain(path, **kwargs)[source]ο
Unified method for loading a chain from LangChainHub or local fs.
Parameters
path (Union[str, pathlib.Path]) β
kwargs (Any) β
Return type
langchain.chains.base.Chain
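A short sketch of both loading modes (the file path and hub path are illustrative assumptions):
from langchain.chains import load_chain
# Load a chain previously saved with chain.save(...) from the local filesystem
chain = load_chain("path/chain.yaml")                    # hypothetical local path
# Or load a published chain from LangChainHub using an lc:// path
math_chain = load_chain("lc://chains/llm-math/chain.json")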
langchain.chains.create_citation_fuzzy_match_chain(llm)[source]ο
Create a citation fuzzy match chain.
Parameters
llm (langchain.base_language.BaseLanguageModel) β Language model to use for the chain.
Returns
Chain (LLMChain) that can be used to answer questions with citations.
Return type
langchain.chains.llm.LLMChain
langchain.chains.create_qa_with_structure_chain(llm, schema, output_parser='base', prompt=None)[source]ο
Create a question answering chain that returns an answer structured according to the given schema.
Parameters
llm (langchain.base_language.BaseLanguageModel) β Language model to use for the chain.
schema (Union[dict, Type[pydantic.main.BaseModel]]) β Pydantic schema to use for the output.
output_parser (str) β Output parser to use. Should be one of pydantic or base.
Default to base.
prompt (Optional[Union[langchain.prompts.prompt.PromptTemplate, langchain.prompts.chat.ChatPromptTemplate]]) β Optional prompt to use for the chain.
Returns
Chain (LLMChain) that can be used to answer questions with output matching the schema.
Return type
langchain.chains.llm.LLMChain
langchain.chains.create_qa_with_sources_chain(llm, **kwargs)[source]ο
Create a question answering chain that returns an answer with sources.
Parameters
llm (langchain.base_language.BaseLanguageModel) β Language model to use for the chain.
**kwargs β Keyword arguments to pass to create_qa_with_structure_chain.
kwargs (Any) β
Returns
Chain (LLMChain) that can be used to answer questions with citations.
Return type
langchain.chains.llm.LLMChain
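A hedged sketch of how the returned LLMChain is typically combined with documents (the document prompt, variable names, and model name are assumptions based on common usage):
from langchain.chains import StuffDocumentsChain, create_qa_with_sources_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
qa_chain = create_qa_with_sources_chain(llm)
doc_prompt = PromptTemplate(
    template="Content: {page_content}\nSource: {source}",
    input_variables=["page_content", "source"],
)
final_qa_chain = StuffDocumentsChain(
    llm_chain=qa_chain,
    document_variable_name="context",
    document_prompt=doc_prompt,
)
# final_qa_chain.run(input_documents=docs, question="...") where each doc carries a "source" metadata key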
class langchain.chains.StuffDocumentsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_documents', output_key='output_text', llm_chain, document_prompt=None, document_variable_name, document_separator='\n\n')[source]ο
Bases: langchain.chains.combine_documents.base.BaseCombineDocumentsChain
Chain that combines documents by stuffing into context.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
input_key (str) β
output_key (str) β
llm_chain (langchain.chains.llm.LLMChain) β
document_prompt (langchain.prompts.base.BasePromptTemplate) β
document_variable_name (str) β
document_separator (str) β
Return type
None
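A minimal construction sketch (the prompt and document content are illustrative assumptions):
from langchain.chains import LLMChain, StuffDocumentsChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.schema import Document
prompt = PromptTemplate.from_template("Summarize the following text:\n\n{context}")
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
chain = StuffDocumentsChain(llm_chain=llm_chain, document_variable_name="context")
docs = [Document(page_content="LangChain provides chains for combining documents.")]
print(chain.run(input_documents=docs))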
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute document_prompt: langchain.prompts.base.BasePromptTemplate [Optional]ο
Prompt to use to format each document.
attribute document_separator: str = '\n\n'ο
The string with which to join the formatted documents
attribute document_variable_name: str [Required]ο
The variable name in the llm_chain to put the documents in.
If only one variable in the llm_chain, this need not be provided.
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
LLM wrapper to use after formatting documents.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
async acombine_docs(docs, callbacks=None, **kwargs)[source]ο
Stuff all documents into one prompt and pass to LLM.
Parameters
docs (List[langchain.schema.Document]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
combine_docs(docs, callbacks=None, **kwargs)[source]ο
Stuff all documents into one prompt and pass to LLM.
Parameters
docs (List[langchain.schema.Document]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
prompt_length(docs, **kwargs)[source]ο
Get the prompt length by formatting the prompt.
Parameters
docs (List[langchain.schema.Document]) β
kwargs (Any) β
Return type
Optional[int]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.MapRerankDocumentsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_documents', output_key='output_text', llm_chain, document_variable_name, rank_key, answer_key, metadata_keys=None, return_intermediate_steps=False)[source]ο
Bases: langchain.chains.combine_documents.base.BaseCombineDocumentsChain
Combining documents by mapping a chain over them, then reranking results.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
input_key (str) β
output_key (str) β
llm_chain (langchain.chains.llm.LLMChain) β
document_variable_name (str) β
rank_key (str) β
answer_key (str) β
metadata_keys (Optional[List[str]]) β
return_intermediate_steps (bool) β
Return type
None
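This chain is usually built indirectly, since its llm_chain must emit both an answer and a score. A hedged sketch using the prebuilt question-answering loader, which constructs a map-rerank chain with a score-emitting prompt (the document content and question are illustrative):
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.schema import Document
docs = [Document(page_content="The capital of France is Paris.")]
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank")
result = chain({"input_documents": docs, "question": "What is the capital of France?"})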
attribute answer_key: str [Required]ο
Key in output of llm_chain to return as answer.
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute document_variable_name: str [Required]ο
The variable name in the llm_chain to put the documents in.
If only one variable in the llm_chain, this need not be provided.
attribute llm_chain: LLMChain [Required]ο
Chain to apply to each document individually.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute metadata_keys: Optional[List[str]] = Noneο
attribute rank_key: str [Required]ο
Key in output of llm_chain to rank on.
attribute return_intermediate_steps: bool = Falseο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
async acombine_docs(docs, callbacks=None, **kwargs)[source]ο
Combine documents in a map rerank manner.
Combine by mapping first chain over all documents, then reranking the results.
Parameters
docs (List[langchain.schema.Document]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
combine_docs(docs, callbacks=None, **kwargs)[source]ο
Combine documents in a map rerank manner.
Combine by mapping first chain over all documents, then reranking the results.
Parameters
docs (List[langchain.schema.Document]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
prompt_length(docs, **kwargs)ο
Return the prompt length given the documents passed in.
Returns None if the method does not depend on the prompt length.
Parameters
docs (List[langchain.schema.Document]) β
kwargs (Any) β
Return type
Optional[int]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.MapReduceDocumentsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_documents', output_key='output_text', llm_chain, combine_document_chain, collapse_document_chain=None, document_variable_name, return_intermediate_steps=False)[source]ο
Bases: langchain.chains.combine_documents.base.BaseCombineDocumentsChain
Combining documents by mapping a chain over them, then combining results.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
input_key (str) β
output_key (str) β
llm_chain (langchain.chains.llm.LLMChain) β
combine_document_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) β
collapse_document_chain (Optional[langchain.chains.combine_documents.base.BaseCombineDocumentsChain]) β
document_variable_name (str) β
return_intermediate_steps (bool) β
Return type
None
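A hedged construction sketch showing how the map and combine steps compose (the prompts and variable names are illustrative assumptions):
from langchain.chains import LLMChain, MapReduceDocumentsChain, StuffDocumentsChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
llm = OpenAI(temperature=0)
map_chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template("Summarize this chunk:\n\n{context}"))
reduce_chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template("Combine these summaries into one:\n\n{context}"))
combine_chain = StuffDocumentsChain(llm_chain=reduce_chain, document_variable_name="context")
chain = MapReduceDocumentsChain(
    llm_chain=map_chain,
    combine_document_chain=combine_chain,
    document_variable_name="context",
)
# chain.run(input_documents=docs) where docs is a List[Document] to summarize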
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute collapse_document_chain: Optional[BaseCombineDocumentsChain] = Noneο
Chain to use to collapse intermediary results if needed.
If None, will use the combine_document_chain.
attribute combine_document_chain: BaseCombineDocumentsChain [Required]ο
Chain to use to combine results of applying llm_chain to documents.
attribute document_variable_name: str [Required]ο
The variable name in the llm_chain to put the documents in.
If only one variable in the llm_chain, this need not be provided.
attribute llm_chain: LLMChain [Required]ο
Chain to apply to each document individually.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute return_intermediate_steps: bool = Falseο
Return the results of the map steps in the output.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
async acombine_docs(docs, callbacks=None, **kwargs)[source]ο
Combine documents in a map reduce manner.
Combine by mapping first chain over all documents, then reducing the results.
This reducing can be done recursively if needed (if there are many documents).
Parameters
docs (List[langchain.schema.Document]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
combine_docs(docs, token_max=3000, callbacks=None, **kwargs)[source]ο
Combine documents in a map reduce manner.
Combine by mapping first chain over all documents, then reducing the results.
This reducing can be done recursively if needed (if there are many documents).
Parameters
docs (List[langchain.schema.Document]) β
token_max (int) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
prompt_length(docs, **kwargs)ο
Return the prompt length given the documents passed in.
Returns None if the method does not depend on the prompt length.
Parameters
docs (List[langchain.schema.Document]) β
kwargs (Any) β
Return type
Optional[int]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.RefineDocumentsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_documents', output_key='output_text', initial_llm_chain, refine_llm_chain, document_variable_name, initial_response_name, document_prompt=None, return_intermediate_steps=False)[source]ο
Bases: langchain.chains.combine_documents.base.BaseCombineDocumentsChain
Combine documents by doing a first pass and then refining on more documents.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
input_key (str) β
output_key (str) β
initial_llm_chain (langchain.chains.llm.LLMChain) β
refine_llm_chain (langchain.chains.llm.LLMChain) β
document_variable_name (str) β
initial_response_name (str) β
document_prompt (langchain.prompts.base.BasePromptTemplate) β
return_intermediate_steps (bool) β
Return type
None
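A hedged construction sketch (the prompts and variable names are illustrative assumptions; the key point is that initial_response_name must match the variable the refine prompt expects):
from langchain.chains import LLMChain, RefineDocumentsChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
llm = OpenAI(temperature=0)
initial_prompt = PromptTemplate.from_template("Summarize the following text:\n\n{context}")
refine_prompt = PromptTemplate.from_template(
    "Here is an existing summary:\n{existing_answer}\n\n"
    "Refine it with this additional context:\n\n{context}"
)
chain = RefineDocumentsChain(
    initial_llm_chain=LLMChain(llm=llm, prompt=initial_prompt),
    refine_llm_chain=LLMChain(llm=llm, prompt=refine_prompt),
    document_variable_name="context",
    initial_response_name="existing_answer",
)
# chain.run(input_documents=docs) refines the summary one document at a time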
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute document_prompt: BasePromptTemplate [Optional]ο
Prompt to use to format each document.
attribute document_variable_name: str [Required]ο
The variable name in the initial_llm_chain to put the documents in.
If only one variable in the initial_llm_chain, this need not be provided.
attribute initial_llm_chain: LLMChain [Required]ο
LLM chain to use on initial document.
attribute initial_response_name: str [Required]ο
The variable name to format the initial response in when refining.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute refine_llm_chain: LLMChain [Required]ο
LLM chain to use when refining.
attribute return_intermediate_steps: bool = Falseο
Return the results of the refine steps in the output.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
async acombine_docs(docs, callbacks=None, **kwargs)[source]ο
Combine by running the initial chain on the first document, then refining the output over each remaining document.
Parameters
docs (List[langchain.schema.Document]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
combine_docs(docs, callbacks=None, **kwargs)[source]ο
Combine by running the initial chain on the first document, then refining the output over each remaining document.
Parameters
docs (List[langchain.schema.Document]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
prompt_length(docs, **kwargs)ο
Return the prompt length given the documents passed in.
Returns None if the method does not depend on the prompt length.
Parameters
docs (List[langchain.schema.Document]) β
kwargs (Any) β
Return type
Optional[int]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable. | https://api.python.langchain.com/en/latest/modules/chains.html |
c08e8bd6-f501-4cbf-bda0-e556a386e559 | Agent Toolkitsο
Agent toolkits.
langchain.agents.agent_toolkits.create_json_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix='Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a JSON agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.json.toolkit.JsonToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
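A minimal sketch of building the toolkit and agent (the local spec file name and question are illustrative assumptions):
import json
from langchain.agents.agent_toolkits import JsonToolkit, create_json_agent
from langchain.llms import OpenAI
from langchain.tools.json.tool import JsonSpec
with open("openapi_spec.json") as f:          # hypothetical local JSON file
    data = json.load(f)
toolkit = JsonToolkit(spec=JsonSpec(dict_=data, max_value_length=4000))
agent = create_json_agent(llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True)
agent.run("What keys exist at the top level of the JSON?")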
langchain.agents.agent_toolkits.create_sql_agent(llm, toolkit, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix='You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix=None, format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a SQL agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit) β
agent_type (langchain.agents.agent_types.AgentType) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (Optional[str]) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
top_k (int) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
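A minimal sketch (the SQLite URI and question are illustrative assumptions):
from langchain.agents.agent_toolkits import SQLDatabaseToolkit, create_sql_agent
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
db = SQLDatabase.from_uri("sqlite:///Chinook.db")   # hypothetical local database
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent.run("How many tables does the database contain?")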
langchain.agents.agent_toolkits.create_openapi_agent(llm, toolkit, callback_manager=None, prefix="You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix='Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, return_intermediate_steps=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct an OpenAPI agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
verbose (bool) β
return_intermediate_steps (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.agent_toolkits.create_pbi_agent(llm, toolkit, powerbi=None, callback_manager=None, prefix='You are an agent designed to help users interact with a PowerBI Dataset.\n\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix='Begin!\n\nQuestion: {input}\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples=None, input_variables=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a Power BI agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) β
powerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
examples (Optional[str]) β
input_variables (Optional[List[str]]) β
top_k (int) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.agent_toolkits.create_pbi_chat_agent(llm, toolkit, powerbi=None, callback_manager=None, output_parser=None, prefix='Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix="TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples=None, input_variables=None, memory=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a Power BI agent from a chat LLM and tools.
If you supply only a toolkit and no Power BI dataset, the same LLM is used for both.
Parameters
llm (langchain.chat_models.base.BaseChatModel) β
toolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) β
powerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
output_parser (Optional[langchain.agents.agent.AgentOutputParser]) β
prefix (str) β
suffix (str) β
examples (Optional[str]) β
input_variables (Optional[List[str]]) β
memory (Optional[langchain.memory.chat_memory.BaseChatMemory]) β
top_k (int) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.agent_toolkits.create_python_agent(llm, tool, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, verbose=False, prefix='You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return "I don\'t know" as the answer.\n', agent_executor_kwargs=None, **kwargs)[source]ο
Construct a Python agent from an LLM and tool.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
tool (langchain.tools.python.tool.PythonREPLTool) β
agent_type (langchain.agents.agent_types.AgentType) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
prefix (str) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
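A minimal sketch (the question is an illustrative assumption):
from langchain.agents.agent_toolkits import create_python_agent
from langchain.llms import OpenAI
from langchain.tools.python.tool import PythonREPLTool
agent = create_python_agent(
    llm=OpenAI(temperature=0),
    tool=PythonREPLTool(),
    verbose=True,
)
agent.run("What is the 10th Fibonacci number?")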
langchain.agents.agent_toolkits.create_vectorstore_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a vectorstore agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
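A hedged sketch (the variable `store` is assumed to be an existing VectorStore, e.g. built with FAISS or Chroma; the name, description, and question are illustrative):
from langchain.agents.agent_toolkits import (
    VectorStoreInfo,
    VectorStoreToolkit,
    create_vectorstore_agent,
)
from langchain.llms import OpenAI
vectorstore_info = VectorStoreInfo(
    name="state_of_union",
    description="the most recent State of the Union address",
    vectorstore=store,   # assumed pre-built vector store
)
toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)
agent = create_vectorstore_agent(llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True)
agent.run("What did the president say about semiconductors?")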
class langchain.agents.agent_toolkits.JsonToolkit(*, spec)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with a JSON spec.
Parameters
spec (langchain.tools.json.tool.JsonSpec) β
Return type
None
attribute spec: langchain.tools.json.tool.JsonSpec [Required]ο
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
class langchain.agents.agent_toolkits.SQLDatabaseToolkit(*, db, llm)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with SQL databases.
Parameters
db (langchain.sql_database.SQLDatabase) β
llm (langchain.base_language.BaseLanguageModel) β
Return type
None
attribute db: langchain.sql_database.SQLDatabase [Required]ο
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
property dialect: strο
Return string representation of dialect to use.
class langchain.agents.agent_toolkits.SparkSQLToolkit(*, db, llm)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with Spark SQL.
Parameters
db (langchain.utilities.spark_sql.SparkSQL) β
llm (langchain.base_language.BaseLanguageModel) β
Return type
None
attribute db: langchain.utilities.spark_sql.SparkSQL [Required]ο
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
class langchain.agents.agent_toolkits.NLAToolkit(*, nla_tools)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Natural Language API Toolkit Definition.
Parameters
nla_tools (Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool]) β
Return type
None
attribute nla_tools: Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool] [Required]ο
List of API Endpoint Tools.
classmethod from_llm_and_ai_plugin(llm, ai_plugin, requests=None, verbose=False, **kwargs)[source]ο
Instantiate the toolkit from an AIPlugin object.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
ai_plugin (langchain.tools.plugin.AIPlugin) β
requests (Optional[langchain.requests.Requests]) β
verbose (bool) β
kwargs (Any) β
Return type
langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit
classmethod from_llm_and_ai_plugin_url(llm, ai_plugin_url, requests=None, verbose=False, **kwargs)[source]ο
Instantiate the toolkit from an AI plugin URL.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
ai_plugin_url (str) β
requests (Optional[langchain.requests.Requests]) β
verbose (bool) β
kwargs (Any) β
Return type
langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit
classmethod from_llm_and_spec(llm, spec, requests=None, verbose=False, **kwargs)[source]ο
Instantiate the toolkit by creating tools for each operation.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
spec (langchain.utilities.openapi.OpenAPISpec) β
requests (Optional[langchain.requests.Requests]) β
verbose (bool) β
kwargs (Any) β
Return type
langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit
classmethod from_llm_and_url(llm, open_api_url, requests=None, verbose=False, **kwargs)[source]ο
Instantiate the toolkit from an OpenAPI spec URL.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
open_api_url (str) β
requests (Optional[langchain.requests.Requests]) β
verbose (bool) β
kwargs (Any) β
Return type
langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit
get_tools()[source]ο
Get the tools for all the API operations.
Return type
List[langchain.tools.base.BaseTool]
class langchain.agents.agent_toolkits.PowerBIToolkit(*, powerbi, llm, examples=None, max_iterations=5, callback_manager=None)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with PowerBI dataset.
Parameters
powerbi (langchain.utilities.powerbi.PowerBIDataset) β
llm (langchain.base_language.BaseLanguageModel) β
examples (Optional[str]) β
max_iterations (int) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
Return type
None
attribute callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = Noneο
attribute examples: Optional[str] = Noneο
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
attribute max_iterations: int = 5ο
attribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]ο
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
class langchain.agents.agent_toolkits.OpenAPIToolkit(*, json_agent, requests_wrapper)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with an OpenAPI API.
Parameters
json_agent (langchain.agents.agent.AgentExecutor) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
Return type
None
attribute json_agent: langchain.agents.agent.AgentExecutor [Required]ο
attribute requests_wrapper: langchain.requests.TextRequestsWrapper [Required]ο
classmethod from_llm(llm, json_spec, requests_wrapper, **kwargs)[source]ο
Create a JSON agent from the LLM, then initialize the toolkit.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
json_spec (langchain.tools.json.tool.JsonSpec) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
kwargs (Any) β
Return type
langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
class langchain.agents.agent_toolkits.VectorStoreToolkit(*, vectorstore_info, llm=None)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with a vector store.
Parameters
vectorstore_info (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo) β
llm (langchain.base_language.BaseLanguageModel) β
Return type
None
attribute llm: langchain.base_language.BaseLanguageModel [Optional]ο
attribute vectorstore_info: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo [Required]ο
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
langchain.agents.agent_toolkits.create_vectorstore_router_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a vectorstore router agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
class langchain.agents.agent_toolkits.VectorStoreInfo(*, vectorstore, name, description)[source]ο
Bases: pydantic.main.BaseModel
Information about a vectorstore.
Parameters
vectorstore (langchain.vectorstores.base.VectorStore) β
name (str) β
description (str) β
Return type
None
attribute description: str [Required]ο
attribute name: str [Required]ο
attribute vectorstore: langchain.vectorstores.base.VectorStore [Required]ο
class langchain.agents.agent_toolkits.VectorStoreRouterToolkit(*, vectorstores, llm=None)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for routing between vector stores.
Parameters
vectorstores (List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo]) β
llm (langchain.base_language.BaseLanguageModel) β
Return type
None
attribute llm: langchain.base_language.BaseLanguageModel [Optional]ο
attribute vectorstores: List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo] [Required]ο
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
langchain.agents.agent_toolkits.create_pandas_dataframe_agent(llm, df, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix=None, suffix=None, input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, include_df_in_prompt=True, **kwargs)[source]ο
Construct a pandas agent from an LLM and dataframe.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
df (Any) β
agent_type (langchain.agents.agent_types.AgentType) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (Optional[str]) β
suffix (Optional[str]) β
input_variables (Optional[List[str]]) β
verbose (bool) β
return_intermediate_steps (bool) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
include_df_in_prompt (Optional[bool]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
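A minimal sketch (the CSV file name and question are illustrative assumptions):
import pandas as pd
from langchain.agents.agent_toolkits import create_pandas_dataframe_agent
from langchain.llms import OpenAI
df = pd.read_csv("titanic.csv")   # hypothetical local CSV file
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("How many rows are there?")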
langchain.agents.agent_toolkits.create_spark_dataframe_agent(llm, df, callback_manager=None, prefix='\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix='\nThis is the result of `print(df.first())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, **kwargs)[source]ο
Construct a Spark agent from an LLM and dataframe.
Parameters
llm (langchain.llms.base.BaseLLM) β
df (Any) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
input_variables (Optional[List[str]]) β
verbose (bool) β
return_intermediate_steps (bool) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.agent_toolkits.create_spark_sql_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with Spark SQL.\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix='Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a Spark SQL agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
top_k (int) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.agent_toolkits.create_csv_agent(llm, path, pandas_kwargs=None, **kwargs)[source]ο
Create a CSV agent by loading the file into a pandas dataframe and using the pandas agent.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
path (Union[str, List[str]]) β
pandas_kwargs (Optional[dict]) β
kwargs (Any) β
Return type
langchain.agents.agent.AgentExecutor
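A hedged usage sketch (assumes OPENAI_API_KEY is set; the file path is illustrative):
from langchain.llms import OpenAI
from langchain.agents import create_csv_agent

# Extra kwargs are forwarded to the underlying pandas dataframe agent.
agent = create_csv_agent(OpenAI(temperature=0), "titanic.csv", verbose=True)
agent.run("How many rows does the file contain?")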
class langchain.agents.agent_toolkits.ZapierToolkit(*, tools=[])[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Zapier Toolkit.
Parameters
tools (List[langchain.tools.base.BaseTool]) β
Return type
None
attribute tools: List[langchain.tools.base.BaseTool] = []ο
classmethod from_zapier_nla_wrapper(zapier_nla_wrapper)[source]ο
Create a toolkit from a ZapierNLAWrapper.
Parameters
zapier_nla_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) β
Return type
langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
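A minimal construction sketch (assumes the ZAPIER_NLA_API_KEY environment variable is set):
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.utilities.zapier import ZapierNLAWrapper

# The wrapper reads ZAPIER_NLA_API_KEY from the environment.
zapier = ZapierNLAWrapper()
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
tools = toolkit.get_tools()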
class langchain.agents.agent_toolkits.GmailToolkit(*, api_resource=None)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with Gmail.
Parameters
api_resource (Resource) β
Return type
None
attribute api_resource: Resource [Optional]ο
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
class langchain.agents.agent_toolkits.JiraToolkit(*, tools=[])[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Jira Toolkit.
Parameters
tools (List[langchain.tools.base.BaseTool]) β
Return type
None
attribute tools: List[langchain.tools.base.BaseTool] = []ο
classmethod from_jira_api_wrapper(jira_api_wrapper)[source]ο
Parameters
jira_api_wrapper (langchain.utilities.jira.JiraAPIWrapper) β
Return type
langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
class langchain.agents.agent_toolkits.FileManagementToolkit(*, root_dir=None, selected_tools=None)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with local files.
Parameters
root_dir (Optional[str]) β
selected_tools (Optional[List[str]]) β
Return type
None
attribute root_dir: Optional[str] = Noneο
If specified, all file operations are made relative to root_dir.
attribute selected_tools: Optional[List[str]] = Noneο
If provided, only the selected tools are exposed. Defaults to all tools.
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
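A minimal sketch (the root directory and tool names below are illustrative; selecting tools is optional):
from langchain.agents.agent_toolkits import FileManagementToolkit

# Restrict all file operations to a sandbox directory and expose two tools.
toolkit = FileManagementToolkit(
    root_dir="/tmp/sandbox",
    selected_tools=["read_file", "write_file"],
)
tools = toolkit.get_tools()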
class langchain.agents.agent_toolkits.PlayWrightBrowserToolkit(*, sync_browser=None, async_browser=None)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for web browser tools.
Parameters
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
Return type
None
attribute async_browser: Optional['AsyncBrowser'] = Noneο
attribute sync_browser: Optional['SyncBrowser'] = Noneο
classmethod from_browser(sync_browser=None, async_browser=None)[source]ο
Instantiate the toolkit.
Parameters
sync_browser (Optional[SyncBrowser]) β
async_browser (Optional[AsyncBrowser]) β
Return type
PlayWrightBrowserToolkit
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
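A minimal sketch, assuming the playwright package and browsers are installed (the helper used here lives in langchain.tools.playwright.utils in this version):
from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.tools.playwright.utils import create_async_playwright_browser

# Launch an async browser and build the toolkit around it.
async_browser = create_async_playwright_browser()
toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
tools = toolkit.get_tools()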
class langchain.agents.agent_toolkits.AzureCognitiveServicesToolkit[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for Azure Cognitive Services.
Return type
None
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
Retrieversο
class langchain.retrievers.AmazonKendraRetriever(index_id, region_name=None, credentials_profile_name=None, top_k=3, attribute_filter=None, client=None)[source]ο
Bases: langchain.schema.BaseRetriever
Retriever class to query documents from Amazon Kendra Index.
Parameters
index_id (str) β Kendra index id
region_name (Optional[str]) β The aws region e.g., us-west-2.
Falls back to the AWS_DEFAULT_REGION env variable
or region specified in ~/.aws/config.
credentials_profile_name (Optional[str]) β The name of the profile in the ~/.aws/credentials
or ~/.aws/config files, which has either access keys or role information
specified. If not specified, the default credential profile or, if on an
EC2 instance, credentials from IMDS will be used.
top_k (int) β Number of results to return
attribute_filter (Optional[Dict]) β Additional filtering of results based on metadata
See: https://docs.aws.amazon.com/kendra/latest/APIReference
client (Optional[Any]) β boto3 client for Kendra
Example
retriever = AmazonKendraRetriever(
index_id="c0806df7-e76b-4bce-9b5c-d5582f6b1a03"
)
get_relevant_documents(query)[source]ο
Run search on Kendra index and get top k documents.
Example
docs = retriever.get_relevant_documents("This is my query")
Parameters
query (str) β
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.ArxivRetriever(*, arxiv_search=None, arxiv_exceptions=None, top_k_results=3, load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000, ARXIV_MAX_QUERY_LENGTH=300)[source]ο
Bases: langchain.schema.BaseRetriever, langchain.utilities.arxiv.ArxivAPIWrapper
It is effectively a wrapper for ArxivAPIWrapper.
It wraps load() to get_relevant_documents().
It uses all ArxivAPIWrapper arguments without any change.
Parameters
arxiv_search (Any) β
arxiv_exceptions (Any) β
top_k_results (int) β
load_max_docs (int) β
load_all_available_meta (bool) β
doc_content_chars_max (Optional[int]) β
ARXIV_MAX_QUERY_LENGTH (int) β
Return type
None
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
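A hedged usage sketch (assumes the arxiv package is installed; the query is illustrative):
from langchain.retrievers import ArxivRetriever

retriever = ArxivRetriever(load_max_docs=2)
# The query can be an arXiv identifier or free-text search terms.
docs = retriever.get_relevant_documents("1605.08386")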
class langchain.retrievers.AzureCognitiveSearchRetriever(*, service_name='', index_name='', api_key='', api_version='2020-06-30', aiosession=None, content_key='content')[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Wrapper around Azure Cognitive Search.
Parameters
service_name (str) β
index_name (str) β
api_key (str) β
api_version (str) β
aiosession (Optional[aiohttp.client.ClientSession]) β
content_key (str) β
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = Noneο
ClientSession, in case we want to reuse connection for better performance.
attribute api_key: str = ''ο
API Key. Both Admin and Query keys work, but for reading data it's
recommended to use a Query key.
attribute api_version: str = '2020-06-30'ο
API version
attribute content_key: str = 'content'ο
Key in a retrieved result to set as the Document page_content.
attribute index_name: str = ''ο
Name of Index inside Azure Cognitive Search service
attribute service_name: str = ''ο
Name of Azure Cognitive Search service
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
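A minimal sketch (the service name, index name, and key are placeholders; they may also be supplied via environment variables):
from langchain.retrievers import AzureCognitiveSearchRetriever

retriever = AzureCognitiveSearchRetriever(
    service_name="my-search-service",   # placeholder
    index_name="my-index",              # placeholder
    api_key="<your-api-key>",
)
docs = retriever.get_relevant_documents("what is langchain?")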
class langchain.retrievers.ChatGPTPluginRetriever(*, url, bearer_token, top_k=3, filter=None, aiosession=None)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Parameters
url (str) β
bearer_token (str) β
top_k (int) β
filter (Optional[dict]) β
aiosession (Optional[aiohttp.client.ClientSession]) β
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = Noneο
attribute bearer_token: str [Required]ο
attribute filter: Optional[dict] = Noneο
attribute top_k: int = 3ο
attribute url: str [Required]ο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.ContextualCompressionRetriever(*, base_compressor, base_retriever)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Retriever that wraps a base retriever and compresses the results.
Parameters
base_compressor (langchain.retrievers.document_compressors.base.BaseDocumentCompressor) β
base_retriever (langchain.schema.BaseRetriever) β
Return type
None
attribute base_compressor: langchain.retrievers.document_compressors.base.BaseDocumentCompressor [Required]ο
Compressor for compressing retrieved documents.
attribute base_retriever: langchain.schema.BaseRetriever [Required]ο
Base Retriever to use for getting relevant documents.
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
Sequence of relevant documents
Return type
List[langchain.schema.Document]
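A minimal sketch showing how a compressor is layered over an existing retriever (base_retriever stands in for any retriever you already have; assumes OPENAI_API_KEY is set):
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=base_retriever,  # any existing BaseRetriever
)
docs = compression_retriever.get_relevant_documents("What did the president say?")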
class langchain.retrievers.DataberryRetriever(datastore_url, top_k=None, api_key=None)[source]ο
Bases: langchain.schema.BaseRetriever
Retriever that uses the Databerry API.
Parameters
datastore_url (str) β
top_k (Optional[int]) β
api_key (Optional[str]) β
datastore_url: strο
api_key: Optional[str]ο
top_k: Optional[int]ο
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.ElasticSearchBM25Retriever(client, index_name)[source]ο
Bases: langchain.schema.BaseRetriever
Wrapper around Elasticsearch using BM25 as a retrieval method.
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticSearchBM25Retriever.create() classmethod as the named
argument elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the "Deployments" page.
To obtain your Elastic Cloud password for the default "elastic" user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to "Security" > "Users"
Locate the "elastic" user and click "Edit"
Click "Reset password"
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Parameters
client (Any) β
index_name (str) β
classmethod create(elasticsearch_url, index_name, k1=2.0, b=0.75)[source]ο
Parameters
elasticsearch_url (str) β
index_name (str) β
k1 (float) β
b (float) β
Return type
langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever
add_texts(texts, refresh_indices=True)[source]ο
Run more texts through the embeddings and add to the retriever.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the retriever.
refresh_indices (bool) β bool to refresh ElasticSearch indices
Returns
List of ids from adding the texts into the retriever.
Return type
List[str]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
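A minimal sketch, assuming an Elasticsearch instance is reachable at the given URL (the index name is illustrative):
from langchain.retrievers import ElasticSearchBM25Retriever

retriever = ElasticSearchBM25Retriever.create(
    elasticsearch_url="http://localhost:9200",
    index_name="langchain-bm25-index",  # created on the cluster
)
retriever.add_texts(["foo", "foo bar", "hello world"])
docs = retriever.get_relevant_documents("foo")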
class langchain.retrievers.KNNRetriever(*, embeddings, index=None, texts, k=4, relevancy_threshold=None)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
KNN Retriever.
Parameters
embeddings (langchain.embeddings.base.Embeddings) β
index (Any) β
texts (List[str]) β
k (int) β
relevancy_threshold (Optional[float]) β
Return type
None
attribute embeddings: langchain.embeddings.base.Embeddings [Required]ο
attribute index: Any = Noneο
attribute k: int = 4ο
attribute relevancy_threshold: Optional[float] = Noneο
attribute texts: List[str] [Required]ο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embeddings, **kwargs)[source]ο
Parameters
texts (List[str]) β
embeddings (langchain.embeddings.base.Embeddings) β
kwargs (Any) β
Return type
langchain.retrievers.knn.KNNRetriever
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
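A minimal sketch (assumes OPENAI_API_KEY is set; any Embeddings implementation can be substituted):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import KNNRetriever

retriever = KNNRetriever.from_texts(["foo", "bar", "world"], OpenAIEmbeddings())
docs = retriever.get_relevant_documents("foo")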
class langchain.retrievers.LlamaIndexGraphRetriever(*, graph=None, query_configs=None)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Question-answering with sources over a LlamaIndex graph data structure.
Parameters
graph (Any) β
query_configs (List[Dict]) β
Return type
None
attribute graph: Any = Noneο
attribute query_configs: List[Dict] [Optional]ο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β
Return type
List[langchain.schema.Document]
class langchain.retrievers.LlamaIndexRetriever(*, index=None, query_kwargs=None)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Question-answering with sources over a LlamaIndex data structure.
Parameters
index (Any) β
query_kwargs (Dict) β
Return type
None
attribute index: Any = Noneο
attribute query_kwargs: Dict [Optional]ο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β
Return type
List[langchain.schema.Document]
class langchain.retrievers.MergerRetriever(retrievers)[source]ο
Bases: langchain.schema.BaseRetriever
This class merges the results of multiple retrievers.
Parameters
retrievers (List[langchain.schema.BaseRetriever]) β A list of retrievers to merge.
get_relevant_documents(query)[source]ο
Get the relevant documents for a given query.
Parameters
query (str) β The query to search for.
Returns
A list of relevant documents.
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Asynchronously get the relevant documents for a given query.
Parameters
query (str) β The query to search for.
Returns
A list of relevant documents.
Return type
List[langchain.schema.Document]
merge_documents(query)[source]ο
Merge the results of the retrievers.
Parameters
query (str) β The query to search for.
Returns
A list of merged documents.
Return type
List[langchain.schema.Document]
async amerge_documents(query)[source]ο
Asynchronously merge the results of the retrievers.
Parameters
query (str) β The query to search for.
Returns
A list of merged documents.
Return type
List[langchain.schema.Document]
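A minimal sketch (retriever_a and retriever_b stand in for any two existing retrievers):
from langchain.retrievers import MergerRetriever

# Results from each retriever are combined into a single list.
merger = MergerRetriever(retrievers=[retriever_a, retriever_b])
docs = merger.get_relevant_documents("my query")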
class langchain.retrievers.MetalRetriever(client, params=None)[source]ο
Bases: langchain.schema.BaseRetriever
Retriever that uses the Metal API.
Parameters
client (Any) β
params (Optional[dict]) β
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.MilvusRetriever(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', search_params=None)[source]ο
Bases: langchain.schema.BaseRetriever
Retriever that uses the Milvus API.
Parameters
embedding_function (langchain.embeddings.base.Embeddings) β
collection_name (str) β
connection_args (Optional[Dict[str, Any]]) β
consistency_level (str) β
search_params (Optional[dict]) β
add_texts(texts, metadatas=None)[source]ο
Add text to the Milvus store
Parameters
texts (List[str]) β The text
metadatas (List[dict]) β Metadata dicts, must line up with existing store
Return type
None
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.PineconeHybridSearchRetriever(*, embeddings, sparse_encoder=None, index=None, top_k=4, alpha=0.5)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Parameters
embeddings (langchain.embeddings.base.Embeddings) β
sparse_encoder (Any) β
index (Any) β
top_k (int) β
alpha (float) β
Return type
None
attribute alpha: float = 0.5ο
attribute embeddings: langchain.embeddings.base.Embeddings [Required]ο
attribute index: Any = Noneο
attribute sparse_encoder: Any = Noneο
attribute top_k: int = 4ο
add_texts(texts, ids=None, metadatas=None)[source]ο
Parameters
texts (List[str]) β
ids (Optional[List[str]]) β
metadatas (Optional[List[dict]]) β
Return type
None
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
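A hedged sketch, assuming the pinecone-client and pinecone-text packages are installed and index is an existing pinecone.Index:
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import PineconeHybridSearchRetriever
from pinecone_text.sparse import BM25Encoder

retriever = PineconeHybridSearchRetriever(
    embeddings=OpenAIEmbeddings(),
    sparse_encoder=BM25Encoder().default(),  # default BM25 parameters
    index=index,  # an existing pinecone.Index (assumption)
)
retriever.add_texts(["foo", "bar"])
docs = retriever.get_relevant_documents("foo")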
class langchain.retrievers.PubMedRetriever(*, top_k_results=3, load_max_docs=25, doc_content_chars_max=2000, load_all_available_meta=False, email='your_email@example.com', base_url_esearch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?', base_url_efetch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?', max_retry=5, sleep_time=0.2, ARXIV_MAX_QUERY_LENGTH=300)[source]ο
Bases: langchain.schema.BaseRetriever, langchain.utilities.pupmed.PubMedAPIWrapper
It is effectively a wrapper for PubMedAPIWrapper.
It wraps load() to get_relevant_documents().
It uses all PubMedAPIWrapper arguments without any change.
Parameters
top_k_results (int) β
load_max_docs (int) β
doc_content_chars_max (int) β
load_all_available_meta (bool) β
email (str) β
base_url_esearch (str) β
base_url_efetch (str) β
max_retry (int) β
sleep_time (float) β
ARXIV_MAX_QUERY_LENGTH (int) β
Return type
None
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.RemoteLangChainRetriever(*, url, headers=None, input_key='message', response_key='response', page_content_key='page_content', metadata_key='metadata')[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Parameters
url (str) β
headers (Optional[dict]) β
input_key (str) β
response_key (str) β
page_content_key (str) β
metadata_key (str) β
Return type
None
attribute headers: Optional[dict] = Noneο
attribute input_key: str = 'message'ο
attribute metadata_key: str = 'metadata'ο
attribute page_content_key: str = 'page_content'ο
attribute response_key: str = 'response'ο
attribute url: str [Required]ο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.SVMRetriever(*, embeddings, index=None, texts, k=4, relevancy_threshold=None)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
SVM Retriever.
Parameters
embeddings (langchain.embeddings.base.Embeddings) β
index (Any) β
texts (List[str]) β
k (int) β
relevancy_threshold (Optional[float]) β
Return type
None
attribute embeddings: langchain.embeddings.base.Embeddings [Required]ο
attribute index: Any = Noneο
attribute k: int = 4ο
attribute relevancy_threshold: Optional[float] = Noneο
attribute texts: List[str] [Required]ο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embeddings, **kwargs)[source]ο
Parameters
texts (List[str]) β
embeddings (langchain.embeddings.base.Embeddings) β
kwargs (Any) β
Return type
langchain.retrievers.svm.SVMRetriever
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
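A minimal sketch mirroring the KNN retriever above (requires scikit-learn; assumes OPENAI_API_KEY is set):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import SVMRetriever

retriever = SVMRetriever.from_texts(["foo", "bar", "world"], OpenAIEmbeddings())
docs = retriever.get_relevant_documents("foo")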
class langchain.retrievers.SelfQueryRetriever(*, vectorstore, llm_chain, search_type='similarity', search_kwargs=None, structured_query_translator, verbose=False, use_original_query=False)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Retriever that wraps around a vector store and uses an LLM to generate
the vector store queries.
Parameters
vectorstore (langchain.vectorstores.base.VectorStore) β
llm_chain (langchain.chains.llm.LLMChain) β
search_type (str) β
search_kwargs (dict) β
structured_query_translator (langchain.chains.query_constructor.ir.Visitor) β
verbose (bool) β
use_original_query (bool) β
Return type
None
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
The LLMChain for generating the vector store queries.
attribute search_kwargs: dict [Optional]ο
Keyword arguments to pass in to the vector store search.
attribute search_type: str = 'similarity'ο
The search type to perform on the vector store.
attribute structured_query_translator: langchain.chains.query_constructor.ir.Visitor [Required]ο
Translator for turning internal query language into vectorstore search params.
attribute use_original_query: bool = Falseο
Use the original query instead of the revised query generated by the LLM.
attribute vectorstore: langchain.vectorstores.base.VectorStore [Required]ο
The underlying vector store from which documents will be retrieved.
attribute verbose: bool = Falseο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
classmethod from_llm(llm, vectorstore, document_contents, metadata_field_info, structured_query_translator=None, chain_kwargs=None, enable_limit=False, use_original_query=False, **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
vectorstore (langchain.vectorstores.base.VectorStore) β
document_contents (str) β
metadata_field_info (List[langchain.chains.query_constructor.schema.AttributeInfo]) β
structured_query_translator (Optional[langchain.chains.query_constructor.ir.Visitor]) β
chain_kwargs (Optional[Dict]) β
enable_limit (bool) β
use_original_query (bool) β
kwargs (Any) β
Return type
langchain.retrievers.self_query.base.SelfQueryRetriever
get_relevant_documents(query, callbacks=None)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
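A hedged sketch of from_llm (requires the lark package; vectorstore stands in for a self-query-compatible store such as Chroma or Pinecone, and the metadata schema is illustrative):
from langchain.llms import OpenAI
from langchain.retrievers import SelfQueryRetriever
from langchain.chains.query_constructor.schema import AttributeInfo

metadata_field_info = [
    AttributeInfo(name="year", description="Year the movie was released", type="integer"),
]
retriever = SelfQueryRetriever.from_llm(
    llm=OpenAI(temperature=0),
    vectorstore=vectorstore,  # existing, self-query-compatible vector store
    document_contents="Brief summary of a movie",
    metadata_field_info=metadata_field_info,
)
docs = retriever.get_relevant_documents("dinosaur movies released after 1990")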
class langchain.retrievers.TFIDFRetriever(*, vectorizer=None, docs, tfidf_array=None, k=4)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Parameters
vectorizer (Any) β
docs (List[langchain.schema.Document]) β
tfidf_array (Any) β
k (int) β
Return type
None
attribute docs: List[langchain.schema.Document] [Required]ο
attribute k: int = 4ο
attribute tfidf_array: Any = Noneο
attribute vectorizer: Any = Noneο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
classmethod from_documents(documents, *, tfidf_params=None, **kwargs)[source]ο
Parameters
documents (Iterable[langchain.schema.Document]) β
tfidf_params (Optional[Dict[str, Any]]) β
kwargs (Any) β
Return type
langchain.retrievers.tfidf.TFIDFRetriever
classmethod from_texts(texts, metadatas=None, tfidf_params=None, **kwargs)[source]ο
Parameters
texts (Iterable[str]) β
metadatas (Optional[Iterable[dict]]) β
tfidf_params (Optional[Dict[str, Any]]) β
kwargs (Any) β
Return type
langchain.retrievers.tfidf.TFIDFRetriever
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
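A minimal sketch (requires scikit-learn):
from langchain.retrievers import TFIDFRetriever

retriever = TFIDFRetriever.from_texts(["foo", "bar", "world"])
docs = retriever.get_relevant_documents("foo")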
class langchain.retrievers.TimeWeightedVectorStoreRetriever(*, vectorstore, search_kwargs=None, memory_stream=None, decay_rate=0.01, k=4, other_score_keys=[], default_salience=None)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Retriever combining embedding similarity with recency.
Parameters
vectorstore (langchain.vectorstores.base.VectorStore) β
search_kwargs (dict) β
memory_stream (List[langchain.schema.Document]) β
decay_rate (float) β
k (int) β
other_score_keys (List[str]) β
default_salience (Optional[float]) β
Return type
None
attribute decay_rate: float = 0.01ο
The exponential decay factor used as (1.0-decay_rate)**(hrs_passed).
attribute default_salience: Optional[float] = Noneο
The salience to assign memories not retrieved from the vector store.
None assigns no salience to documents not fetched from the vector store.
attribute k: int = 4ο
The maximum number of documents to retrieve in a given call.
attribute memory_stream: List[langchain.schema.Document] [Optional]ο
The memory_stream of documents to search through.
attribute other_score_keys: List[str] = []ο
Other keys in the metadata to factor into the score, e.g. "importance".
attribute search_kwargs: dict [Optional]ο
Keyword arguments to pass to the vectorstore similarity search.
attribute vectorstore: langchain.vectorstores.base.VectorStore [Required]ο
The vectorstore to store documents and determine salience.
async aadd_documents(documents, **kwargs)[source]ο
Add documents to vectorstore.
Parameters
documents (List[langchain.schema.Document]) β
kwargs (Any) β
Return type
List[str]
add_documents(documents, **kwargs)[source]ο
Add documents to vectorstore.
Parameters
documents (List[langchain.schema.Document]) β
kwargs (Any) β
Return type
List[str]
async aget_relevant_documents(query)[source]ο
Return documents that are relevant to the query.
Parameters
query (str) β
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Return documents that are relevant to the query.
Parameters
query (str) β
Return type
List[langchain.schema.Document]
get_salient_docs(query)[source]ο
Return documents that are salient to the query.
Parameters
query (str) β
Return type
Dict[int, Tuple[langchain.schema.Document, float]]
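A hedged construction sketch, assuming the faiss package is installed and OPENAI_API_KEY is set (1536 is the OpenAI embedding dimension):
import faiss
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(1536)
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})
retriever = TimeWeightedVectorStoreRetriever(
    vectorstore=vectorstore, decay_rate=0.01, k=4
)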
class langchain.retrievers.VespaRetriever(app, body, content_field, metadata_fields=None)[source]ο
Bases: langchain.schema.BaseRetriever
Retriever that uses Vespa.
Parameters
app (Vespa) β
body (Dict) β
content_field (str) β
metadata_fields (Optional[Sequence[str]]) β
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents_with_filter(query, *, _filter=None)[source]ο
Parameters
query (str) β
_filter (Optional[str]) β
Return type
List[langchain.schema.Document]
classmethod from_params(url, content_field, *, k=None, metadata_fields=(), sources=None, _filter=None, yql=None, **kwargs)[source]ο
Instantiate retriever from params.
Parameters
url (str) β Vespa app URL.
content_field (str) β Field in results to return as Document page_content.
k (Optional[int]) β Number of Documents to return. Defaults to None.
metadata_fields (Sequence[str] or "*") β Fields in results to include in
document metadata. Defaults to empty tuple ().
sources (Sequence[str] or "*" or None) β Sources to retrieve
from. Defaults to None.
_filter (Optional[str]) β Document filter condition expressed in YQL.
Defaults to None.
yql (Optional[str]) β Full YQL query to be used. Should not be specified
if _filter or sources are specified. Defaults to None.
kwargs (Any) β Keyword arguments added to query body.
Return type
langchain.retrievers.vespa_retriever.VespaRetriever
class langchain.retrievers.WeaviateHybridSearchRetriever(client, index_name, text_key, alpha=0.5, k=4, attributes=None, create_schema_if_missing=True)[source]ο
Bases: langchain.schema.BaseRetriever
Parameters
client (Any) β
index_name (str) β
text_key (str) β
alpha (float) β
k (int) β
attributes (Optional[List[str]]) β
create_schema_if_missing (bool) β
class Config[source]ο
Bases: object
Configuration for this pydantic object.
extra = 'forbid'ο
arbitrary_types_allowed = Trueο
add_documents(docs, **kwargs)[source]ο
Upload documents to Weaviate.
Parameters
docs (List[langchain.schema.Document]) β
kwargs (Any) β
Return type
List[str]
get_relevant_documents(query, where_filter=None)[source]ο
Look up similar documents in Weaviate.
Parameters
query (str) β
where_filter (Optional[Dict[str, object]]) β
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query, where_filter=None)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
where_filter (Optional[Dict[str, object]]) β
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.WikipediaRetriever(*, wiki_client=None, top_k_results=3, lang='en', load_all_available_meta=False, doc_content_chars_max=4000)[source]ο
Bases: langchain.schema.BaseRetriever, langchain.utilities.wikipedia.WikipediaAPIWrapper
It is effectively a wrapper for WikipediaAPIWrapper.
It wraps load() to get_relevant_documents().
It uses all WikipediaAPIWrapper arguments without any change.
Parameters
wiki_client (Any) β
top_k_results (int) β
lang (str) β
load_all_available_meta (bool) β
doc_content_chars_max (int) β
Return type
None
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
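A minimal sketch (requires the wikipedia package; the query is illustrative):
from langchain.retrievers import WikipediaRetriever

retriever = WikipediaRetriever(lang="en", top_k_results=3)
docs = retriever.get_relevant_documents("Alan Turing")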
class langchain.retrievers.ZepRetriever(session_id, url, top_k=None)[source]ο
Bases: langchain.schema.BaseRetriever
A Retriever implementation for the Zep long-term memory store. Search your
user's long-term chat history with Zep.
Note: You will need to provide the user's session_id to use this retriever.
More on Zep:
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions, see:
https://getzep.github.io/deployment/quickstart/
Parameters
session_id (str) β
url (str) β
top_k (Optional[int]) β
get_relevant_documents(query, metadata=None)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
metadata (Optional[Dict]) β
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query, metadata=None)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
metadata (Optional[Dict]) β
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
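A minimal sketch (the session id and server URL are placeholders for your own Zep deployment):
from langchain.retrievers import ZepRetriever

retriever = ZepRetriever(
    session_id="user-session-id",   # placeholder
    url="http://localhost:8000",    # placeholder Zep server URL
    top_k=5,
)
docs = retriever.get_relevant_documents("Where did we plan to travel?")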
class langchain.retrievers.ZillizRetriever(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', search_params=None)[source]ο
Bases: langchain.schema.BaseRetriever
Retriever that uses the Zilliz API.
Parameters
embedding_function (langchain.embeddings.base.Embeddings) β
collection_name (str) β
connection_args (Optional[Dict[str, Any]]) β
consistency_level (str) β
search_params (Optional[dict]) β
add_texts(texts, metadatas=None)[source]ο
Add text to the Zilliz store
Parameters
texts (List[str]) β The text
metadatas (List[dict]) β Metadata dicts, must line up with existing store
Return type
None
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.DocArrayRetriever(*, index=None, embeddings, search_field, content_field, search_type=SearchType.similarity, top_k=1, filters=None)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Retriever class for DocArray Document Indices.
Currently supports 5 backends:
InMemoryExactNNIndex, HnswDocumentIndex, QdrantDocumentIndex,
ElasticDocIndex, and WeaviateDocumentIndex.
Parameters
index (Any) β
embeddings (langchain.embeddings.base.Embeddings) β
search_field (str) β
content_field (str) β
search_type (langchain.retrievers.docarray.SearchType) β
top_k (int) β
filters (Optional[Any]) β
Return type
None
indexο
One of the above-mentioned index instances
embeddingsο
Embedding model to represent text as vectors
search_fieldο
Field to consider for searching in the documents.
Should be an embedding/vector/tensor.
content_fieldο
Field that represents the main content in your document schema.
Will be used as a page_content. Everything else will go into metadata.
search_typeο
Type of search to perform (similarity / mmr)
filtersο
Filters applied for document retrieval.
top_kο
Number of documents to return
attribute content_field: str [Required]ο
attribute embeddings: langchain.embeddings.base.Embeddings [Required]ο
attribute filters: Optional[Any] = Noneο
attribute index: Any = Noneο
attribute search_field: str [Required]ο
attribute search_type: langchain.retrievers.docarray.SearchType = SearchType.similarityο
attribute top_k: int = 1ο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
Document compressorsο
class langchain.retrievers.document_compressors.DocumentCompressorPipeline(*, transformers)[source]ο
Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor
Document compressor that uses a pipeline of transformers.
Parameters
transformers (List[Union[langchain.schema.BaseDocumentTransformer, langchain.retrievers.document_compressors.base.BaseDocumentCompressor]]) β
Return type
None
attribute transformers: List[Union[langchain.schema.BaseDocumentTransformer, langchain.retrievers.document_compressors.base.BaseDocumentCompressor]] [Required]ο
List of document filters that are chained together and run in sequence.
async acompress_documents(documents, query)[source]ο
Compress retrieved documents given the query context.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
compress_documents(documents, query)[source]ο
Transform a list of documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
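A hedged sketch of a typical pipeline that splits, de-duplicates, and filters documents (assumes OPENAI_API_KEY is set; the chunk size and threshold are illustrative):
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.document_compressors import (
    DocumentCompressorPipeline,
    EmbeddingsFilter,
)
from langchain.text_splitter import CharacterTextSplitter

embeddings = OpenAIEmbeddings()
splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". ")
redundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings)
relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)
pipeline = DocumentCompressorPipeline(
    transformers=[splitter, redundant_filter, relevant_filter]
)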
class langchain.retrievers.document_compressors.EmbeddingsFilter(*, embeddings, similarity_fn=<function cosine_similarity>, k=20, similarity_threshold=None)[source]ο
Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor
Parameters
embeddings (langchain.embeddings.base.Embeddings) β
similarity_fn (Callable) β
k (Optional[int]) β
similarity_threshold (Optional[float]) β
Return type
None
attribute embeddings: langchain.embeddings.base.Embeddings [Required]ο
Embeddings to use for embedding document contents and queries.
attribute k: Optional[int] = 20ο
The number of relevant documents to return. Can be set to None, in which case
similarity_threshold must be specified. Defaults to 20.
attribute similarity_fn: Callable = <function cosine_similarity>ο
Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity.
attribute similarity_threshold: Optional[float] = Noneο
Threshold for determining when two documents are similar enough
to be considered redundant. Defaults to None, must be specified if k is set
to None.
async acompress_documents(documents, query)[source]ο
Filter down documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
compress_documents(documents, query)[source]ο
Filter documents based on similarity of their embeddings to the query.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
class langchain.retrievers.document_compressors.LLMChainExtractor(*, llm_chain, get_input=<function default_get_input>)[source]ο
Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
get_input (Callable[[str, langchain.schema.Document], dict]) β
Return type
None
attribute get_input: Callable[[str, langchain.schema.Document], dict] = <function default_get_input>ο
Callable for constructing the chain input from the query and a Document.
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
LLM wrapper to use for compressing documents.
async acompress_documents(documents, query)[source]ο
Compress page content of raw documents asynchronously.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
compress_documents(documents, query)[source]ο
Compress page content of raw documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
classmethod from_llm(llm, prompt=None, get_input=None, llm_chain_kwargs=None)[source]ο
Initialize from LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
prompt (Optional[langchain.prompts.prompt.PromptTemplate]) β
get_input (Optional[Callable[[str, langchain.schema.Document], str]]) β
llm_chain_kwargs (Optional[dict]) β
Return type
langchain.retrievers.document_compressors.chain_extract.LLMChainExtractor
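A minimal sketch (docs stands in for documents returned by a retriever; assumes OPENAI_API_KEY is set):
from langchain.llms import OpenAI
from langchain.retrievers.document_compressors import LLMChainExtractor

compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))
# Extracts only the passages of each document relevant to the query.
compressed = compressor.compress_documents(docs, "What did the author study?")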
class langchain.retrievers.document_compressors.LLMChainFilter(*, llm_chain, get_input=<function default_get_input>)[source]ο
Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor
Filter that drops documents that aren't relevant to the query.
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
get_input (Callable[[str, langchain.schema.Document], dict]) β
Return type
None
attribute get_input: Callable[[str, langchain.schema.Document], dict] = <function default_get_input>ο
Callable for constructing the chain input from the query and a Document.
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
LLM wrapper to use for filtering documents.
The chain prompt is expected to have a BooleanOutputParser.
async acompress_documents(documents, query)[source]ο
Filter down documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
compress_documents(documents, query)[source]ο
Filter down documents based on their relevance to the query.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
classmethod from_llm(llm, prompt=None, **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
prompt (Optional[langchain.prompts.base.BasePromptTemplate]) β
kwargs (Any) β
Return type
langchain.retrievers.document_compressors.chain_filter.LLMChainFilter
class langchain.retrievers.document_compressors.CohereRerank(*, client, top_n=3, model='rerank-english-v2.0')[source]ο
Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor
Parameters
client (Client) β
top_n (int) β
model (str) β
Return type
None
attribute client: Client [Required]ο
attribute model: str = 'rerank-english-v2.0'ο
attribute top_n: int = 3ο
async acompress_documents(documents, query)[source]ο
Compress retrieved documents given the query context.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
compress_documents(documents, query)[source]ο
Compress retrieved documents given the query context.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
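A hedged sketch (assumes the cohere package is installed and COHERE_API_KEY is set; docs stands in for retrieved documents):
from langchain.retrievers.document_compressors import CohereRerank

compressor = CohereRerank(top_n=3)  # client is built from COHERE_API_KEY
compressed = compressor.compress_documents(docs, "What did the president say?")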
Chat Modelsο
class langchain.chat_models.ChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None)[source]ο
Bases: langchain.chat_models.base.BaseChatModel
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="gpt-3.5-turbo")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_organization (Optional[str]) β
openai_proxy (Optional[str]) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
max_retries (int) β
streaming (bool) β
n (int) β
max_tokens (Optional[int]) β
tiktoken_model_name (Optional[str]) β
Return type
None
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
attribute max_tokens: Optional[int] = Noneο
Maximum number of tokens to generate.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not explicitly specified.
attribute model_name: str = 'gpt-3.5-turbo' (alias 'model')ο
Model name to use.
attribute n: int = 1ο
Number of chat completions to generate for each prompt.
attribute openai_api_base: Optional[str] = Noneο
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
attribute openai_api_key: Optional[str] = Noneο
attribute openai_organization: Optional[str] = Noneο
attribute openai_proxy: Optional[str] = Noneο
attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout for requests to OpenAI completion API. Default is 600 seconds.
attribute streaming: bool = Falseο
Whether to stream the results or not.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute tiktoken_model_name: Optional[str] = Noneο
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the model name. However, there are some cases
where you may want to use this class with a model name not
supported by tiktoken. This can include when using Azure OpenAI or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
completion_with_retry(**kwargs)[source]ο
Use tenacity to retry the completion call.
Parameters
kwargs (Any) β
Return type
Any
get_num_tokens_from_messages(messages)[source]ο
Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
Official documentation:
https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)[source]ο
Get the tokens present in the text with tiktoken package.
Parameters
text (str) β
Return type
List[int]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chat_models.AzureChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key='', openai_api_base='', openai_organization='', openai_proxy='', request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None, deployment_name='', openai_api_type='azure', openai_api_version='')[source]ο
Bases: langchain.chat_models.openai.ChatOpenAI
Wrapper around Azure OpenAI Chat Completion API. To use this class you
must have a deployed model on Azure OpenAI. Use deployment_name in the
constructor to refer to the βModel deployment nameβ in the Azure portal.
In addition, you should have the openai python package installed, and the
following environment variables set or passed in constructor in lower case:
- OPENAI_API_TYPE (default: azure)
- OPENAI_API_KEY
- OPENAI_API_BASE
- OPENAI_API_VERSION
- OPENAI_PROXY
For example, if you have gpt-35-turbo deployed, with the deployment name
35-turbo-dev, the constructor should look like:
AzureChatOpenAI(
deployment_name="35-turbo-dev",
openai_api_version="2023-03-15-preview",
)
Be aware the API version may change.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
model_kwargs (Dict[str, Any]) β
openai_api_key (str) β
openai_api_base (str) β
openai_organization (str) β
openai_proxy (str) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
max_retries (int) β
streaming (bool) β
n (int) β
max_tokens (Optional[int]) β
tiktoken_model_name (Optional[str]) β
deployment_name (str) β
openai_api_type (str) β
openai_api_version (str) β
Return type
None
attribute deployment_name: str = ''ο
attribute openai_api_base: str = ''ο
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
attribute openai_api_key: str = ''ο
attribute openai_api_type: str = 'azure'ο
attribute openai_api_version: str = ''ο
attribute openai_organization: str = ''ο
attribute openai_proxy: str = ''ο
class langchain.chat_models.FakeListChatModel(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, responses, i=0)[source]ο
Bases: langchain.chat_models.base.SimpleChatModel
Fake ChatModel for testing purposes.
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
responses (List) β
i (int) β
Return type
None
attribute i: int = 0ο
attribute responses: List [Required]ο
class langchain.chat_models.PromptLayerChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None, pl_tags=None, return_pl_id=False)[source]ο
Bases: langchain.chat_models.openai.ChatOpenAI
Wrapper around OpenAI Chat large language models and PromptLayer.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your openAI API key and
promptlayer key respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerChatOpenAI adds two optional parameters:
Parameters
pl_tags (Optional[List[str]]) β List of strings to tag the request with.
return_pl_id (Optional[bool]) β If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_organization (Optional[str]) β
openai_proxy (Optional[str]) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
max_retries (int) β
streaming (bool) β
n (int) β
max_tokens (Optional[int]) β
tiktoken_model_name (Optional[str]) β
Return type
None
Example
from langchain.chat_models import PromptLayerChatOpenAI
openai = PromptLayerChatOpenAI(model_name="gpt-3.5-turbo")
attribute pl_tags: Optional[List[str]] = Noneο
attribute return_pl_id: Optional[bool] = Falseο
class langchain.chat_models.ChatAnthropic(*, client=None, model='claude-v1', max_tokens_to_sample=256, temperature=None, top_k=None, top_p=None, streaming=False, default_request_timeout=None, anthropic_api_url=None, anthropic_api_key=None, HUMAN_PROMPT=None, AI_PROMPT=None, count_tokens=None, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None)[source]ο
Bases: langchain.chat_models.base.BaseChatModel, langchain.llms.anthropic._AnthropicCommon
Wrapper around Anthropic's large language model.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
from langchain.chat_models import ChatAnthropic
model = ChatAnthropic(model="<model_name>", anthropic_api_key="my-api-key")
Parameters
client (Any) β
model (str) β
max_tokens_to_sample (int) β
temperature (Optional[float]) β
top_k (Optional[int]) β
top_p (Optional[float]) β
streaming (bool) β
default_request_timeout (Optional[Union[float, Tuple[float, float]]]) β
anthropic_api_url (Optional[str]) β
anthropic_api_key (Optional[str]) β
HUMAN_PROMPT (Optional[str]) β
AI_PROMPT (Optional[str]) β
count_tokens (Optional[Callable[[str], int]]) β
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
Return type
None
get_num_tokens(text)[source]ο
Calculate number of tokens.
Parameters
text (str) β
Return type
int
property lc_serializable: boolο
Return whether or not the class is serializable.
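Example (illustrative sketch of sending messages; assumes the anthropic package is installed and ANTHROPIC_API_KEY is set):
from langchain.chat_models import ChatAnthropic
from langchain.schema import HumanMessage
chat = ChatAnthropic(temperature=0)
# Calling the model with a list of messages returns a single AIMessage
response = chat([HumanMessage(content="Say hello in French.")])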
class langchain.chat_models.ChatGooglePalm(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='models/chat-bison-001', google_api_key=None, temperature=None, top_p=None, top_k=None, n=1)[source]ο
Bases: langchain.chat_models.base.BaseChatModel, pydantic.main.BaseModel
Wrapper around Google's PaLM Chat API.
To use, you must have the google.generativeai Python package installed and
either:
The GOOGLE_API_KEY environment variable set with your API key, or
Pass your API key using the google_api_key kwarg to the ChatGooglePalm
constructor.
Example
from langchain.chat_models import ChatGooglePalm
chat = ChatGooglePalm()
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model_name (str) β
google_api_key (Optional[str]) β
temperature (Optional[float]) β
top_p (Optional[float]) β
top_k (Optional[int]) β
n (int) β
Return type
None
attribute google_api_key: Optional[str] = Noneο
attribute model_name: str = 'models/chat-bison-001'ο
Model name to use.
attribute n: int = 1ο
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
attribute temperature: Optional[float] = Noneο
Run inference with this temperature. Must be in the closed
interval [0.0, 1.0].
attribute top_k: Optional[int] = Noneο
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
attribute top_p: Optional[float] = Noneο
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
class langchain.chat_models.ChatVertexAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='chat-bison', temperature=0.0, max_output_tokens=128, top_p=0.95, top_k=40, stop=None, project=None, location='us-central1', credentials=None)[source]ο
Bases: langchain.llms.vertexai._VertexAICommon, langchain.chat_models.base.BaseChatModel
Wrapper around Vertex AI large language models.
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (_LanguageModel) β
model_name (str) β
temperature (float) β
max_output_tokens (int) β
top_p (float) β
top_k (int) β
stop (Optional[List[str]]) β
project (Optional[str]) β
location (str) β
credentials (Any) β
Return type
None
attribute model_name: str = 'chat-bison'ο
Model name to use.
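Example (illustrative sketch; assumes Google Cloud credentials and the Vertex AI SDK are already configured in your environment):
from langchain.chat_models import ChatVertexAI
from langchain.schema import HumanMessage
chat = ChatVertexAI(model_name="chat-bison", temperature=0.0)
# Returns an AIMessage with the model's reply
response = chat([HumanMessage(content="Summarize Vertex AI in one sentence.")])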
Prompt Templatesο
Prompt template classes.
class langchain.prompts.AIMessagePromptTemplate(*, prompt, additional_kwargs=None)[source]ο
Bases: langchain.prompts.chat.BaseStringMessagePromptTemplate
Parameters
prompt (langchain.prompts.base.StringPromptTemplate) β
additional_kwargs (dict) β
Return type
None
format(**kwargs)[source]ο
Format the prompt into a BaseMessage.
Parameters
kwargs (Any) β
Return type
langchain.schema.BaseMessage
class langchain.prompts.BaseChatPromptTemplate(*, input_variables, output_parser=None, partial_variables=None)[source]ο
Bases: langchain.prompts.base.BasePromptTemplate, abc.ABC
Parameters
input_variables (List[str]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
Return type
None
format(**kwargs)[source]ο
Format the prompt with the inputs.
Parameters
kwargs (Any) β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Return type
str
Example:
prompt.format(variable1="foo")
abstract format_messages(**kwargs)[source]ο
Format kwargs into a list of messages.
Parameters
kwargs (Any) β
Return type
List[langchain.schema.BaseMessage]
format_prompt(**kwargs)[source]ο
Create Chat Messages.
Parameters
kwargs (Any) β
Return type
langchain.schema.PromptValue
class langchain.prompts.BasePromptTemplate(*, input_variables, output_parser=None, partial_variables=None)[source]ο
Bases: langchain.load.serializable.Serializable, abc.ABC
Base class for all prompt templates, returning a prompt.
Parameters
input_variables (List[str]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
Return type
None
attribute input_variables: List[str] [Required]ο
A list of the names of the variables the prompt template expects.
attribute output_parser: Optional[langchain.schema.BaseOutputParser] = Noneο
How to parse the output of calling an LLM on this formatted prompt.
attribute partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]ο
dict(**kwargs)[source]ο
Return dictionary representation of prompt.
Parameters
kwargs (Any) β
Return type
Dict
abstract format(**kwargs)[source]ο
Format the prompt with the inputs.
Parameters
kwargs (Any) β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Return type
str
Example:
prompt.format(variable1="foo")
abstract format_prompt(**kwargs)[source]ο
Create a PromptValue from the inputs.
Parameters
kwargs (Any) β
Return type
langchain.schema.PromptValue
partial(**kwargs)[source]ο
Return a partial of the prompt template.
Parameters
kwargs (Union[str, Callable[[], str]]) β
Return type
langchain.prompts.base.BasePromptTemplate
save(file_path)[source]ο
Save the prompt.
Parameters
file_path (Union[pathlib.Path, str]) β Path to directory to save prompt to.
Return type
None
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.prompts.ChatMessagePromptTemplate(*, prompt, additional_kwargs=None, role)[source]ο
Bases: langchain.prompts.chat.BaseStringMessagePromptTemplate
Parameters
prompt (langchain.prompts.base.StringPromptTemplate) β
additional_kwargs (dict) β
role (str) β
Return type
None
attribute role: str [Required]ο
format(**kwargs)[source]ο
Format the prompt into a BaseMessage.
Parameters
kwargs (Any) β
Return type
langchain.schema.BaseMessage
class langchain.prompts.ChatPromptTemplate(*, input_variables, output_parser=None, partial_variables=None, messages)[source]ο
Bases: langchain.prompts.chat.BaseChatPromptTemplate, abc.ABC
Parameters
input_variables (List[str]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
messages (List[Union[langchain.prompts.chat.BaseMessagePromptTemplate, langchain.schema.BaseMessage]]) β
Return type
None
attribute input_variables: List[str] [Required]ο
A list of the names of the variables the prompt template expects.
attribute messages: List[Union[BaseMessagePromptTemplate, BaseMessage]] [Required]ο
format(**kwargs)[source]ο
Format the prompt with the inputs.
Parameters
kwargs (Any) β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Return type
str
Example:
prompt.format(variable1="foo")
format_messages(**kwargs)[source]ο
Format kwargs into a list of messages.
Parameters
kwargs (Any) β
Return type
List[langchain.schema.BaseMessage]
classmethod from_messages(messages)[source]ο
Parameters
messages (Sequence[Union[langchain.prompts.chat.BaseMessagePromptTemplate, langchain.schema.BaseMessage]]) β
Return type
langchain.prompts.chat.ChatPromptTemplate
classmethod from_role_strings(string_messages)[source]ο
Parameters
string_messages (List[Tuple[str, str]]) β
Return type
langchain.prompts.chat.ChatPromptTemplate
classmethod from_strings(string_messages)[source]ο
Parameters
string_messages (List[Tuple[Type[langchain.prompts.chat.BaseMessagePromptTemplate], str]]) β
Return type
langchain.prompts.chat.ChatPromptTemplate
classmethod from_template(template, **kwargs)[source]ο
Parameters
template (str) β
kwargs (Any) β
Return type
langchain.prompts.chat.ChatPromptTemplate
partial(**kwargs)[source]ο
Return a partial of the prompt template.
Parameters
kwargs (Union[str, Callable[[], str]]) β
Return type
langchain.prompts.base.BasePromptTemplate
save(file_path)[source]ο
Save the prompt.
Parameters
file_path (Union[pathlib.Path, str]) β Path to directory to save prompt to.
Return type
None
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
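Example (illustrative sketch of composing a chat prompt from message templates):
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate
system = SystemMessagePromptTemplate.from_template("You are a helpful assistant that translates {input_language} to {output_language}.")
human = HumanMessagePromptTemplate.from_template("{text}")
chat_prompt = ChatPromptTemplate.from_messages([system, human])
# Produces a list of BaseMessages ready to pass to a chat model
messages = chat_prompt.format_messages(input_language="English", output_language="French", text="I love programming.")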
class langchain.prompts.FewShotPromptTemplate(*, input_variables, output_parser=None, partial_variables=None, examples=None, example_selector=None, example_prompt, suffix, example_separator='\n\n', prefix='', template_format='f-string', validate_template=True)[source]ο
Bases: langchain.prompts.base.StringPromptTemplate
Prompt template that contains few shot examples.
Parameters
input_variables (List[str]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
examples (Optional[List[dict]]) β
example_selector (Optional[langchain.prompts.example_selector.base.BaseExampleSelector]) β
example_prompt (langchain.prompts.prompt.PromptTemplate) β
suffix (str) β
example_separator (str) β
prefix (str) β
template_format (str) β
validate_template (bool) β
Return type
None
attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]ο
PromptTemplate used to format an individual example.
attribute example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = Noneο
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
attribute example_separator: str = '\n\n'ο
String separator used to join the prefix, the examples, and suffix.
attribute examples: Optional[List[dict]] = Noneο
Examples to format into the prompt.
Either this or example_selector should be provided.
attribute input_variables: List[str] [Required]ο
A list of the names of the variables the prompt template expects.
attribute prefix: str = ''ο
A prompt template string to put before the examples.
attribute suffix: str [Required]ο
A prompt template string to put after the examples.
attribute template_format: str = 'f-string'ο
The format of the prompt template. Options are: 'f-string', 'jinja2'.
attribute validate_template: bool = Trueο
Whether or not to try validating the template.
dict(**kwargs)[source]ο
Return a dictionary of the prompt.
Parameters
kwargs (Any) β
Return type
Dict
format(**kwargs)[source]ο
Format the prompt with the inputs.
Parameters
kwargs (Any) β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Return type
str
Example:
prompt.format(variable1="foo")
property lc_serializable: boolο
Return whether or not the class is serializable.
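Example (illustrative sketch; the example data below is made up):
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(input_variables=["word", "antonym"], template="Word: {word}\nAntonym: {antonym}")
examples = [{"word": "happy", "antonym": "sad"}, {"word": "tall", "antonym": "short"}]
few_shot = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
)
# Renders the prefix, the formatted examples, and the suffix as one string
print(few_shot.format(input="big"))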
class langchain.prompts.FewShotPromptWithTemplates(*, input_variables, output_parser=None, partial_variables=None, examples=None, example_selector=None, example_prompt, suffix, example_separator='\n\n', prefix=None, template_format='f-string', validate_template=True)[source]ο
Bases: langchain.prompts.base.StringPromptTemplate
Prompt template that contains few shot examples.
Parameters
input_variables (List[str]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
examples (Optional[List[dict]]) β
example_selector (Optional[langchain.prompts.example_selector.base.BaseExampleSelector]) β
example_prompt (langchain.prompts.prompt.PromptTemplate) β
suffix (langchain.prompts.base.StringPromptTemplate) β
example_separator (str) β
prefix (Optional[langchain.prompts.base.StringPromptTemplate]) β
template_format (str) β
validate_template (bool) β
Return type
None
attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]ο
PromptTemplate used to format an individual example.
attribute example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = Noneο
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
attribute example_separator: str = '\n\n'ο
String separator used to join the prefix, the examples, and suffix.
attribute examples: Optional[List[dict]] = Noneο
Examples to format into the prompt.
Either this or example_selector should be provided.
attribute input_variables: List[str] [Required]ο
A list of the names of the variables the prompt template expects.
attribute prefix: Optional[langchain.prompts.base.StringPromptTemplate] = Noneο
A PromptTemplate to put before the examples.
attribute suffix: langchain.prompts.base.StringPromptTemplate [Required]ο
A PromptTemplate to put after the examples.
attribute template_format: str = 'f-string'ο
The format of the prompt template. Options are: 'f-string', 'jinja2'.
attribute validate_template: bool = Trueο
Whether or not to try validating the template.
dict(**kwargs)[source]ο
Return a dictionary of the prompt.
Parameters
kwargs (Any) β
Return type
Dict
format(**kwargs)[source]ο
Format the prompt with the inputs.
Parameters
kwargs (Any) β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Return type
str
Example:
prompt.format(variable1="foo")
class langchain.prompts.HumanMessagePromptTemplate(*, prompt, additional_kwargs=None)[source]ο
Bases: langchain.prompts.chat.BaseStringMessagePromptTemplate
Parameters
prompt (langchain.prompts.base.StringPromptTemplate) β
additional_kwargs (dict) β
Return type
None
format(**kwargs)[source]ο
Format the prompt into a BaseMessage.
Parameters
kwargs (Any) β
Return type
langchain.schema.BaseMessage
class langchain.prompts.LengthBasedExampleSelector(*, examples, example_prompt, get_text_length=<function _get_length_based>, max_length=2048, example_text_lengths=[])[source]ο
Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel
Select examples based on length.
Parameters
examples (List[dict]) β
example_prompt (langchain.prompts.prompt.PromptTemplate) β
get_text_length (Callable[[str], int]) β
max_length (int) β
example_text_lengths (List[int]) β
Return type
None
attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]ο
Prompt template used to format the examples.
attribute examples: List[dict] [Required]ο
A list of the examples that the prompt template expects.
attribute get_text_length: Callable[[str], int] = <function _get_length_based>ο
Function to measure prompt length. Defaults to word count.
attribute max_length: int = 2048ο
Max length for the prompt, beyond which examples are cut.
add_example(example)[source]ο
Add new example to list.
Parameters
example (Dict[str, str]) β
Return type
None
select_examples(input_variables)[source]ο
Select which examples to use based on the input lengths.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict]
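Example (illustrative sketch; max_length is counted in words by the default length function):
from langchain.prompts import LengthBasedExampleSelector, PromptTemplate
example_prompt = PromptTemplate(input_variables=["word", "antonym"], template="Word: {word}\nAntonym: {antonym}")
examples = [{"word": "happy", "antonym": "sad"}, {"word": "tall", "antonym": "short"}]
selector = LengthBasedExampleSelector(examples=examples, example_prompt=example_prompt, max_length=25)
# Returns as many examples as fit within max_length, given the input
selector.select_examples({"input": "big"})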
class langchain.prompts.MaxMarginalRelevanceExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None, fetch_k=20)[source]ο
Bases: langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector
ExampleSelector that selects examples based on Max Marginal Relevance.
This was shown to improve performance in this paper:
https://arxiv.org/pdf/2211.13892.pdf
Parameters
vectorstore (langchain.vectorstores.base.VectorStore) β
k (int) β
example_keys (Optional[List[str]]) β
input_keys (Optional[List[str]]) β
fetch_k (int) β
Return type
None
attribute example_keys: Optional[List[str]] = Noneο
Optional keys to filter examples to.
attribute fetch_k: int = 20ο
Number of examples to fetch to rerank.
attribute input_keys: Optional[List[str]] = Noneο
Optional keys to filter input to. If provided, the search is based on
the input variables instead of all variables.
attribute k: int = 4ο
Number of examples to select.
attribute vectorstore: langchain.vectorstores.base.VectorStore [Required]ο
VectorStore that contains information about examples.
classmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, fetch_k=20, **vectorstore_cls_kwargs)[source]ο
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples (List[dict]) β List of examples to use in the prompt.
embeddings (langchain.embeddings.base.Embeddings) β An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) β A vector store DB interface class, e.g. FAISS.
k (int) β Number of examples to select
input_keys (Optional[List[str]]) β If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs (Any) β optional kwargs containing url for vector store
fetch_k (int) β
Returns
The ExampleSelector instantiated, backed by a vector store.
Return type
langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector
select_examples(input_variables)[source]ο
Select which examples to use based on semantic similarity.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict]
class langchain.prompts.MessagesPlaceholder(*, variable_name)[source]ο
Bases: langchain.prompts.chat.BaseMessagePromptTemplate
Prompt template that assumes the variable is already a list of messages.
Parameters
variable_name (str) β
Return type
None
attribute variable_name: str [Required]ο
format_messages(**kwargs)[source]ο
Format kwargs into a list of BaseMessages.
Parameters
kwargs (Any) β
Return type
List[langchain.schema.BaseMessage]
property input_variables: List[str]ο
Input variables for this prompt template.
class langchain.prompts.NGramOverlapExampleSelector(*, examples, example_prompt, threshold=-1.0)[source]ο
Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel
Select and order examples based on ngram overlap score (sentence_bleu score).
https://www.nltk.org/_modules/nltk/translate/bleu_score.html
https://aclanthology.org/P02-1040.pdf
Parameters
examples (List[dict]) β
example_prompt (langchain.prompts.prompt.PromptTemplate) β
threshold (float) β
Return type
None
attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]ο
Prompt template used to format the examples.
attribute examples: List[dict] [Required]ο
A list of the examples that the prompt template expects.
attribute threshold: float = -1.0ο
Threshold at which algorithm stops. Set to -1.0 by default.
For negative threshold:
select_examples sorts examples by ngram_overlap_score, but excludes none.
For threshold greater than 1.0:
select_examples excludes all examples, and returns an empty list.
For threshold equal to 0.0:
select_examples sorts examples by ngram_overlap_score,
and excludes examples with no ngram overlap with input.
add_example(example)[source]ο
Add new example to list.
Parameters
example (Dict[str, str]) β
Return type
None
select_examples(input_variables)[source]ο
Return list of examples sorted by ngram_overlap_score with input.
Descending order.
Excludes any examples with ngram_overlap_score less than or equal to threshold.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict]
class langchain.prompts.PipelinePromptTemplate(*, input_variables, output_parser=None, partial_variables=None, final_prompt, pipeline_prompts)[source]ο
Bases: langchain.prompts.base.BasePromptTemplate
A prompt template for composing multiple prompts together.
This can be useful when you want to reuse parts of prompts.
A PipelinePrompt consists of two main parts:
final_prompt: This is the final prompt that is returned
pipeline_prompts: This is a list of tuples, consisting of a string (name) and a PromptTemplate.
Each PromptTemplate will be formatted and then passed
to future prompt templates as a variable with
the same name as its name string.
Parameters
input_variables (List[str]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
final_prompt (langchain.prompts.base.BasePromptTemplate) β
pipeline_prompts (List[Tuple[str, langchain.prompts.base.BasePromptTemplate]]) β
Return type
None
attribute final_prompt: langchain.prompts.base.BasePromptTemplate [Required]ο
attribute pipeline_prompts: List[Tuple[str, langchain.prompts.base.BasePromptTemplate]] [Required]ο
format(**kwargs)[source]ο
Format the prompt with the inputs.
Parameters
kwargs (Any) β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Return type
str
Example:
prompt.format(variable1="foo")
format_prompt(**kwargs)[source]ο
Create Chat Messages.
Parameters
kwargs (Any) β
Return type
langchain.schema.PromptValue
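Example (illustrative sketch; the input variables are inferred from the sub-prompts here, so none are passed explicitly):
from langchain.prompts import PipelinePromptTemplate, PromptTemplate
full_prompt = PromptTemplate.from_template("{introduction}\n\n{start}")
introduction_prompt = PromptTemplate.from_template("You are impersonating {person}.")
start_prompt = PromptTemplate.from_template("Now, answer this question: {question}")
pipeline_prompt = PipelinePromptTemplate(
    final_prompt=full_prompt,
    pipeline_prompts=[("introduction", introduction_prompt), ("start", start_prompt)],
)
# Each sub-prompt is formatted and injected into the final prompt under its name
print(pipeline_prompt.format(person="Ada Lovelace", question="What is an algorithm?"))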
langchain.prompts.Promptο
alias of langchain.prompts.prompt.PromptTemplate
class langchain.prompts.PromptTemplate(*, input_variables, output_parser=None, partial_variables=None, template, template_format='f-string', validate_template=True)[source]ο
Bases: langchain.prompts.base.StringPromptTemplate
Schema to represent a prompt for an LLM.
Example
from langchain import PromptTemplate
prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}")
Parameters
input_variables (List[str]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
template (str) β
template_format (str) β
validate_template (bool) β
Return type
None
attribute input_variables: List[str] [Required]ο
A list of the names of the variables the prompt template expects.
attribute template: str [Required]ο
The prompt template.
attribute template_format: str = 'f-string'ο
The format of the prompt template. Options are: 'f-string', 'jinja2'.
attribute validate_template: bool = Trueο
Whether or not to try validating the template.
format(**kwargs)[source]ο
Format the prompt with the inputs.
Parameters
kwargs (Any) β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Return type
str
Example:
prompt.format(variable1="foo")
classmethod from_examples(examples, suffix, input_variables, example_separator='\n\n', prefix='', **kwargs)[source]ο
Take examples in list format with prefix and suffix to create a prompt.
Intended to be used as a way to dynamically create a prompt from examples.
Parameters
examples (List[str]) β List of examples to use in the prompt.
suffix (str) β String to go after the list of examples. Should generally
set up the user's input.
input_variables (List[str]) β A list of variable names the final prompt template
will expect.
example_separator (str) β The separator to use in between examples. Defaults
to two new line characters.
prefix (str) β String that should go before any examples. Generally includes
instructions. Defaults to an empty string.
kwargs (Any) β
Returns
The final prompt generated.
Return type
langchain.prompts.prompt.PromptTemplate
classmethod from_file(template_file, input_variables, **kwargs)[source]ο
Load a prompt from a file.
Parameters
template_file (Union[str, pathlib.Path]) β The path to the file containing the prompt template.
input_variables (List[str]) β A list of variable names the final prompt template
will expect.
kwargs (Any) β
Returns
The prompt loaded from the file.
Return type
langchain.prompts.prompt.PromptTemplate
classmethod from_template(template, **kwargs)[source]ο
Load a prompt template from a template.
Parameters
template (str) β
kwargs (Any) β
Return type
langchain.prompts.prompt.PromptTemplate
property lc_attributes: Dict[str, Any]ο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
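Example (illustrative sketch of the from_template constructor, which infers the input variables from the template string):
from langchain.prompts import PromptTemplate
prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {content}.")
prompt.format(adjective="funny", content="chickens")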
class langchain.prompts.SemanticSimilarityExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None)[source]ο
Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel
Example selector that selects examples based on SemanticSimilarity.
Parameters
vectorstore (langchain.vectorstores.base.VectorStore) β
k (int) β
example_keys (Optional[List[str]]) β
input_keys (Optional[List[str]]) β
Return type
None
attribute example_keys: Optional[List[str]] = Noneο
Optional keys to filter examples to.
attribute input_keys: Optional[List[str]] = Noneο
Optional keys to filter input to. If provided, the search is based on
the input variables instead of all variables.
attribute k: int = 4ο
Number of examples to select.
attribute vectorstore: langchain.vectorstores.base.VectorStore [Required]ο
VectorStore than contains information about examples.
add_example(example)[source]ο
Add new example to vectorstore.
Parameters
example (Dict[str, str]) β
Return type
str
classmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, **vectorstore_cls_kwargs)[source]ο
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples (List[dict]) β List of examples to use in the prompt.
embeddings (langchain.embeddings.base.Embeddings) β An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) β A vector store DB interface class, e.g. FAISS.
k (int) β Number of examples to select
input_keys (Optional[List[str]]) β If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs (Any) β optional kwargs containing url for vector store
Returns
The ExampleSelector instantiated, backed by a vector store.
Return type
langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector
select_examples(input_variables)[source]ο
Select which examples to use based on semantic similarity.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict]
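Example (illustrative sketch; FAISS and OpenAIEmbeddings are just one possible vector store and embedding choice, and require the faiss-cpu and openai packages plus an OpenAI API key):
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import SemanticSimilarityExampleSelector
from langchain.vectorstores import FAISS
examples = [{"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}]
# Embeds the examples and stores them in the given vector store class
selector = SemanticSimilarityExampleSelector.from_examples(examples, OpenAIEmbeddings(), FAISS, k=1)
selector.select_examples({"input": "joyful"})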
class langchain.prompts.StringPromptTemplate(*, input_variables, output_parser=None, partial_variables=None)[source]ο
Bases: langchain.prompts.base.BasePromptTemplate, abc.ABC
String prompt should expose the format method, returning a prompt.
Parameters
input_variables (List[str]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
partial_variables (Mapping[str, Union[str, Callable[[], str]]]) β
Return type
None
format_prompt(**kwargs)[source]ο
Create a PromptValue from the formatted prompt string.
Parameters
kwargs (Any) β
Return type
langchain.schema.PromptValue
class langchain.prompts.SystemMessagePromptTemplate(*, prompt, additional_kwargs=None)[source]ο
Bases: langchain.prompts.chat.BaseStringMessagePromptTemplate
Parameters
prompt (langchain.prompts.base.StringPromptTemplate) β
additional_kwargs (dict) β
Return type
None
format(**kwargs)[source]ο
Format the prompt into a BaseMessage.
Parameters
kwargs (Any) β
Return type
langchain.schema.BaseMessage
langchain.prompts.load_prompt(path)[source]ο
Unified method for loading a prompt from LangChainHub or the local filesystem.
Parameters
path (Union[str, pathlib.Path]) β
Return type
langchain.prompts.base.BasePromptTemplate
Example Selectorο
Logic for selecting examples to include in prompts.
class langchain.prompts.example_selector.LengthBasedExampleSelector(*, examples, example_prompt, get_text_length=<function _get_length_based>, max_length=2048, example_text_lengths=[])[source]ο
Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel
Select examples based on length.
Parameters
examples (List[dict]) β
example_prompt (langchain.prompts.prompt.PromptTemplate) β
get_text_length (Callable[[str], int]) β
max_length (int) β
example_text_lengths (List[int]) β
Return type
None
attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]ο
Prompt template used to format the examples.
attribute examples: List[dict] [Required]ο
A list of the examples that the prompt template expects.
attribute get_text_length: Callable[[str], int] = <function _get_length_based>ο
Function to measure prompt length. Defaults to word count.
attribute max_length: int = 2048ο
Max length for the prompt, beyond which examples are cut.
add_example(example)[source]ο
Add new example to list.
Parameters
example (Dict[str, str]) β
Return type
None
select_examples(input_variables)[source]ο
Select which examples to use based on the input lengths.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict]
class langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None, fetch_k=20)[source]ο
Bases: langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector
ExampleSelector that selects examples based on Max Marginal Relevance.
This was shown to improve performance in this paper:
https://arxiv.org/pdf/2211.13892.pdf
Parameters
vectorstore (langchain.vectorstores.base.VectorStore) β
k (int) β
example_keys (Optional[List[str]]) β
input_keys (Optional[List[str]]) β
fetch_k (int) β
Return type
None
attribute fetch_k: int = 20ο
Number of examples to fetch to rerank.
classmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, fetch_k=20, **vectorstore_cls_kwargs)[source]ο
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples (List[dict]) β List of examples to use in the prompt.
embeddings (langchain.embeddings.base.Embeddings) β An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) β A vector store DB interface class, e.g. FAISS.
k (int) β Number of examples to select
input_keys (Optional[List[str]]) β If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs (Any) β optional kwargs containing url for vector store
fetch_k (int) β
Returns
The ExampleSelector instantiated, backed by a vector store.
Return type
langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector
select_examples(input_variables)[source]ο
Select which examples to use based on semantic similarity.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict]
class langchain.prompts.example_selector.NGramOverlapExampleSelector(*, examples, example_prompt, threshold=-1.0)[source]ο
Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel
Select and order examples based on ngram overlap score (sentence_bleu score).
https://www.nltk.org/_modules/nltk/translate/bleu_score.html
https://aclanthology.org/P02-1040.pdf
Parameters
examples (List[dict]) β
example_prompt (langchain.prompts.prompt.PromptTemplate) β
threshold (float) β
Return type
None
attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]ο
Prompt template used to format the examples.
attribute examples: List[dict] [Required]ο
A list of the examples that the prompt template expects.
attribute threshold: float = -1.0ο
Threshold at which algorithm stops. Set to -1.0 by default.
For negative threshold:
select_examples sorts examples by ngram_overlap_score, but excludes none.
For threshold greater than 1.0:
select_examples excludes all examples, and returns an empty list.
For threshold equal to 0.0:
select_examples sorts examples by ngram_overlap_score,
and excludes examples with no ngram overlap with input.
add_example(example)[source]ο
Add new example to list.
Parameters
example (Dict[str, str]) β
Return type
None
select_examples(input_variables)[source]ο
Return list of examples sorted by ngram_overlap_score with input.
Descending order.
Excludes any examples with ngram_overlap_score less than or equal to threshold.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict]
class langchain.prompts.example_selector.SemanticSimilarityExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None)[source]ο
Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel
Example selector that selects examples based on SemanticSimilarity.
Parameters
vectorstore (langchain.vectorstores.base.VectorStore) β
k (int) β
example_keys (Optional[List[str]]) β
input_keys (Optional[List[str]]) β
Return type
None
attribute example_keys: Optional[List[str]] = Noneο
Optional keys to filter examples to.
attribute input_keys: Optional[List[str]] = Noneο
Optional keys to filter input to. If provided, the search is based on
the input variables instead of all variables.
attribute k: int = 4ο
Number of examples to select.
attribute vectorstore: langchain.vectorstores.base.VectorStore [Required]ο
VectorStore that contains information about examples.
add_example(example)[source]ο
Add new example to vectorstore.
Parameters
example (Dict[str, str]) β
Return type
str
classmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, **vectorstore_cls_kwargs)[source]ο
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples (List[dict]) β List of examples to use in the prompt.
embeddings (langchain.embeddings.base.Embeddings) β An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) β A vector store DB interface class, e.g. FAISS.
k (int) β Number of examples to select
input_keys (Optional[List[str]]) β If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs (Any) β optional kwargs containing url for vector store
Returns
The ExampleSelector instantiated, backed by a vector store.
Return type
langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector
select_examples(input_variables)[source]ο
Select which examples to use based on semantic similarity.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict]
Document Transformersο
Transform documents
langchain.document_transformers.get_stateful_documents(documents)[source]ο
Convert a list of documents to a list of documents with state.
Parameters
documents (Sequence[langchain.schema.Document]) β The documents to convert.
Returns
A list of documents with state.
Return type
Sequence[langchain.document_transformers._DocumentWithState]
class langchain.document_transformers.EmbeddingsRedundantFilter(*, embeddings, similarity_fn=<function cosine_similarity>, similarity_threshold=0.95)[source]ο
Bases: langchain.schema.BaseDocumentTransformer, pydantic.main.BaseModel
Filter that drops redundant documents by comparing their embeddings.
Parameters
embeddings (langchain.embeddings.base.Embeddings) β
similarity_fn (Callable) β
similarity_threshold (float) β
Return type
None
attribute embeddings: langchain.embeddings.base.Embeddings [Required]ο
Embeddings to use for embedding document contents.
attribute similarity_fn: Callable = <function cosine_similarity>ο
Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity.
attribute similarity_threshold: float = 0.95ο
Threshold for determining when two documents are similar enough
to be considered redundant.
async atransform_documents(documents, **kwargs)[source]ο
Asynchronously transform a list of documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
kwargs (Any) β
Return type
Sequence[langchain.schema.Document]
transform_documents(documents, **kwargs)[source]ο
Filter down documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
kwargs (Any) β
Return type
Sequence[langchain.schema.Document]
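Example (illustrative sketch; OpenAIEmbeddings is just one possible embedding choice and requires an OpenAI API key):
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
docs = [
    Document(page_content="LangChain helps you build LLM applications."),
    Document(page_content="LangChain helps you build LLM applications!"),
    Document(page_content="Something entirely unrelated."),
]
# Near-duplicate documents (by embedding similarity) are dropped
redundant_filter = EmbeddingsRedundantFilter(embeddings=OpenAIEmbeddings())
unique_docs = redundant_filter.transform_documents(docs)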
Text Splittersο
Functionality for splitting text.
class langchain.text_splitter.TextSplitter(chunk_size=4000, chunk_overlap=200, length_function=<built-in function len>, keep_separator=False, add_start_index=False)[source]ο
Bases: langchain.schema.BaseDocumentTransformer, abc.ABC
Interface for splitting text into chunks.
Parameters
chunk_size (int) β
chunk_overlap (int) β
length_function (Callable[[str], int]) β
keep_separator (bool) β
add_start_index (bool) β
Return type
None
abstract split_text(text)[source]ο
Split text into multiple components.
Parameters
text (str) β
Return type
List[str]
create_documents(texts, metadatas=None)[source]ο
Create documents from a list of texts.
Parameters
texts (List[str]) β
metadatas (Optional[List[dict]]) β
Return type
List[langchain.schema.Document]
split_documents(documents)[source]ο
Split documents.
Parameters
documents (Iterable[langchain.schema.Document]) β
Return type
List[langchain.schema.Document]
classmethod from_huggingface_tokenizer(tokenizer, **kwargs)[source]ο
Text splitter that uses HuggingFace tokenizer to count length.
Parameters
tokenizer (Any) β
kwargs (Any) β
Return type
langchain.text_splitter.TextSplitter
classmethod from_tiktoken_encoder(encoding_name='gpt2', model_name=None, allowed_special={}, disallowed_special='all', **kwargs)[source]ο
Text splitter that uses tiktoken encoder to count length.
Parameters
encoding_name (str) β
model_name (Optional[str]) β
allowed_special (Union[Literal['all'], typing.AbstractSet[str]]) β
disallowed_special (Union[Literal['all'], typing.Collection[str]]) β
kwargs (Any) β
Return type
langchain.text_splitter.TS
transform_documents(documents, **kwargs)[source]ο
Transform sequence of documents by splitting them.
Parameters
documents (Sequence[langchain.schema.Document]) β
kwargs (Any) β
Return type
Sequence[langchain.schema.Document]
async atransform_documents(documents, **kwargs)[source]ο
Asynchronously transform a sequence of documents by splitting them.
Parameters
documents (Sequence[langchain.schema.Document]) β
kwargs (Any) β
Return type
Sequence[langchain.schema.Document]
class langchain.text_splitter.CharacterTextSplitter(separator='\n\n', **kwargs)[source]ο
Bases: langchain.text_splitter.TextSplitter
Implementation of splitting text that looks at characters.
Parameters
separator (str) β
kwargs (Any) β
Return type
None
split_text(text)[source]ο
Split incoming text and return chunks.
Parameters
text (str) β
Return type
List[str]
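Example (illustrative sketch):
from langchain.text_splitter import CharacterTextSplitter
text = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=40, chunk_overlap=0)
chunks = splitter.split_text(text)          # list of strings
docs = splitter.create_documents([text])    # list of Documents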
class langchain.text_splitter.LineType[source]ο
Bases: TypedDict
Line type as typed dict.
metadata: Dict[str, str]ο
content: strο
class langchain.text_splitter.HeaderType[source]ο
Bases: TypedDict
Header type as typed dict.
level: intο
name: strο
data: strο
class langchain.text_splitter.MarkdownHeaderTextSplitter(headers_to_split_on, return_each_line=False)[source]ο
Bases: object
Implementation of splitting markdown files based on specified headers.
Parameters
headers_to_split_on (List[Tuple[str, str]]) β
return_each_line (bool) β
aggregate_lines_to_chunks(lines)[source]ο
Combine lines with common metadata into chunks.
Parameters
lines (List[langchain.text_splitter.LineType]) β Lines of text and their associated header metadata
Return type
List[langchain.schema.Document]
split_text(text)[source]ο
Split a markdown file.
Parameters
text (str) β Markdown file content
Return type
List[langchain.schema.Document]
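Example (illustrative sketch; each returned Document carries the matched headers in its metadata):
from langchain.text_splitter import MarkdownHeaderTextSplitter
markdown_text = "# Title\n\nIntro text\n\n## Section A\n\nDetails about A"
headers_to_split_on = [("#", "Header 1"), ("##", "Header 2")]
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
docs = splitter.split_text(markdown_text)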
class langchain.text_splitter.Tokenizer(chunk_overlap: 'int', tokens_per_chunk: 'int', decode: 'Callable[[list[int]], str]', encode: 'Callable[[str], List[int]]')[source]ο
Bases: object
Parameters
chunk_overlap (int) β
tokens_per_chunk (int) β
decode (Callable[[list[int]], str]) β
encode (Callable[[str], List[int]]) β
Return type
None
chunk_overlap: intο
tokens_per_chunk: intο
decode: Callable[[list[int]], str]ο
encode: Callable[[str], List[int]]ο
langchain.text_splitter.split_text_on_tokens(*, text, tokenizer)[source]ο
Split incoming text and return chunks.
Parameters
text (str) β
tokenizer (langchain.text_splitter.Tokenizer) β
Return type
List[str]
class langchain.text_splitter.TokenTextSplitter(encoding_name='gpt2', model_name=None, allowed_special={}, disallowed_special='all', **kwargs)[source]ο
Bases: langchain.text_splitter.TextSplitter
Implementation of splitting text that looks at tokens.
Parameters
encoding_name (str) β
model_name (Optional[str]) β
allowed_special (Union[Literal['all'], AbstractSet[str]]) β
disallowed_special (Union[Literal['all'], Collection[str]]) β
kwargs (Any) β
Return type
None
split_text(text)[source]ο
Split text into multiple components.
Parameters
text (str) β
Return type
List[str]
class langchain.text_splitter.SentenceTransformersTokenTextSplitter(chunk_overlap=50, model_name='sentence-transformers/all-mpnet-base-v2', tokens_per_chunk=None, **kwargs)[source]ο
Bases: langchain.text_splitter.TextSplitter
Implementation of splitting text that looks at tokens.
Parameters
chunk_overlap (int) β
model_name (str) β
tokens_per_chunk (Optional[int]) β
kwargs (Any) β
Return type
None
split_text(text)[source]ο
Split text into multiple components.
Parameters
text (str) β
Return type
List[str]
count_tokens(*, text)[source]ο
Parameters
text (str) β
Return type
int
class langchain.text_splitter.Language(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]ο
Bases: str, enum.Enum
CPP = 'cpp'ο
GO = 'go'ο
JAVA = 'java'ο
JS = 'js'ο
PHP = 'php'ο
PROTO = 'proto'ο
PYTHON = 'python'ο
RST = 'rst'ο
RUBY = 'ruby'ο
RUST = 'rust'ο
SCALA = 'scala'ο
SWIFT = 'swift'ο
MARKDOWN = 'markdown'ο
LATEX = 'latex'ο
HTML = 'html'ο
SOL = 'sol'ο
class langchain.text_splitter.RecursiveCharacterTextSplitter(separators=None, keep_separator=True, **kwargs)[source]ο
Bases: langchain.text_splitter.TextSplitter
Implementation of splitting text that looks at characters.
Recursively tries to split by different characters to find one
that works.
Parameters
separators (Optional[List[str]]) β
keep_separator (bool) β
kwargs (Any) β
Return type
None
split_text(text)[source]ο
Split text into multiple components.
Parameters
text (str) β
Return type
List[str]
classmethod from_language(language, **kwargs)[source]ο
Parameters
language (langchain.text_splitter.Language) β
kwargs (Any) β
Return type
langchain.text_splitter.RecursiveCharacterTextSplitter
static get_separators_for_language(language)[source]ο
Parameters
language (langchain.text_splitter.Language) β
Return type
List[str]
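Example (illustrative sketch, including the language-aware constructor):
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter
splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)
chunks = splitter.split_text("A long document string would go here ...")
# Uses separators tuned for Python source code
python_splitter = RecursiveCharacterTextSplitter.from_language(language=Language.PYTHON, chunk_size=60, chunk_overlap=0)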
class langchain.text_splitter.NLTKTextSplitter(separator='\n\n', **kwargs)[source]ο
Bases: langchain.text_splitter.TextSplitter
Implementation of splitting text that looks at sentences using NLTK.
Parameters
separator (str) β
kwargs (Any) β
Return type
None
split_text(text)[source]ο
Split incoming text and return chunks.
Parameters
text (str) β
Return type
List[str]
class langchain.text_splitter.SpacyTextSplitter(separator='\n\n', pipeline='en_core_web_sm', **kwargs)[source]ο
Bases: langchain.text_splitter.TextSplitter
Implementation of splitting text that looks at sentences using Spacy.
Parameters
separator (str) β
pipeline (str) β
kwargs (Any) β
Return type
None
split_text(text)[source]ο
Split incoming text and return chunks.
Parameters
text (str) β
Return type
List[str]
class langchain.text_splitter.PythonCodeTextSplitter(**kwargs)[source]ο
Bases: langchain.text_splitter.RecursiveCharacterTextSplitter
Attempts to split the text along Python syntax.
Parameters
kwargs (Any) β
Return type
None
class langchain.text_splitter.MarkdownTextSplitter(**kwargs)[source]ο
Bases: langchain.text_splitter.RecursiveCharacterTextSplitter
Attempts to split the text along Markdown-formatted headings.
Parameters
kwargs (Any) β
Return type
None
class langchain.text_splitter.LatexTextSplitter(**kwargs)[source]ο
Bases: langchain.text_splitter.RecursiveCharacterTextSplitter
Attempts to split the text along Latex-formatted layout elements.
Parameters
kwargs (Any) β
Return type
None
Agentsο
Interface for agents.
class langchain.agents.Agent(*, llm_chain, output_parser, allowed_tools=None)[source]ο
Bases: langchain.agents.agent.BaseSingleActionAgent
Class responsible for calling the language model and deciding the action.
This is driven by an LLMChain. The prompt in the LLMChain MUST include
a variable called "agent_scratchpad" where the agent can put its
intermediary work.
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
output_parser (langchain.agents.agent.AgentOutputParser) β
allowed_tools (Optional[List[str]]) β
Return type
None
attribute allowed_tools: Optional[List[str]] = Noneο
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
attribute output_parser: langchain.agents.agent.AgentOutputParser [Required]ο
async aplan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
abstract classmethod create_prompt(tools)[source]ο
Create a prompt for this class.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) β
Return type
langchain.prompts.base.BasePromptTemplate
dict(**kwargs)[source]ο
Return dictionary representation of agent.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, **kwargs)[source]ο
Construct an agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
tools (Sequence[langchain.tools.base.BaseTool]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
output_parser (Optional[langchain.agents.agent.AgentOutputParser]) β
kwargs (Any) β
Return type
langchain.agents.agent.Agent
get_allowed_tools()[source]ο
Return type
Optional[List[str]]
get_full_inputs(intermediate_steps, **kwargs)[source]ο
Create the full inputs for the LLMChain from intermediate steps.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β
kwargs (Any) β
Return type
Dict[str, Any]
plan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
return_stopped_response(early_stopping_method, intermediate_steps, **kwargs)[source]ο
Return response when agent has been stopped due to max iterations.
Parameters
early_stopping_method (str) β
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β
kwargs (Any) β
Return type
langchain.schema.AgentFinish
tool_run_logging_kwargs()[source]ο
Return type
Dict
abstract property llm_prefix: strο
Prefix to append the LLM call with.
abstract property observation_prefix: strο
Prefix to append the observation with.
property return_values: List[str]ο
Return values of the agent.
class langchain.agents.AgentExecutor(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source]ο
Bases: langchain.chains.base.Chain
Consists of an agent using tools.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) β
tools (Sequence[langchain.tools.base.BaseTool]) β
return_intermediate_steps (bool) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
handle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) β
Return type
None
attribute agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]ο
The agent to run for creating a plan and determining actions
to take at each step of the execution loop.
attribute early_stopping_method: str = 'force'ο
The method to use for early stopping if the agent never
returns AgentFinish. Either 'force' or 'generate'.
'force' returns a string saying that it stopped because it met a time or iteration limit.
'generate' calls the agent's LLM Chain one final time to generate a final answer based on the previous steps.
attribute handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = Falseο
How to handle errors raised by the agent's output parser. Defaults to False, which raises the error.
If True, the error will be sent back to the LLM as an observation.
If a string, the string itself will be sent to the LLM as an observation.
If a callable function, the function will be called with the exception
as an argument, and the result of that function will be passed to the agent as an observation.
attribute max_execution_time: Optional[float] = Noneο
The maximum amount of wall clock time to spend in the execution
loop.
attribute max_iterations: Optional[int] = 15ο
The maximum number of steps to take before ending the execution
loop.
Setting it to None could lead to an infinite loop.
attribute return_intermediate_steps: bool = Falseο
Whether to return the agent's trajectory of intermediate steps
at the end in addition to the final output.
attribute tools: Sequence[BaseTool] [Required]ο
The valid tools the agent can call.
classmethod from_agent_and_tools(agent, tools, callback_manager=None, **kwargs)[source]ο
Create from agent and tools.
Parameters
agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) β
tools (Sequence[langchain.tools.base.BaseTool]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
kwargs (Any) β
Return type
langchain.agents.agent.AgentExecutor
lookup_tool(name)[source]ο
Lookup tool by name.
Parameters
name (str) β
Return type
langchain.tools.base.BaseTool
save(file_path)[source]ο
Raise error - saving not supported for Agent Executors.
Parameters
file_path (Union[pathlib.Path, str]) β
Return type
None
save_agent(file_path)[source]ο
Save the underlying agent.
Parameters
file_path (Union[pathlib.Path, str]) β
Return type
None
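Example (illustrative sketch; initialize_agent and the llm-math tool are one common way to build an AgentExecutor, and require an OpenAI API key and the numexpr package):
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
# initialize_agent returns an AgentExecutor wired to the given tools
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is 7 raised to the 0.5 power?")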
class langchain.agents.AgentOutputParser[source]ο
Bases: langchain.schema.BaseOutputParser
Return type
None
abstract parse(text)[source]ο
Parse text into agent action/finish.
Parameters
text (str) β
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
class langchain.agents.AgentType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]ο
Bases: str, enum.Enum
Enumerator with the Agent types.
ZERO_SHOT_REACT_DESCRIPTION = 'zero-shot-react-description'ο
REACT_DOCSTORE = 'react-docstore'ο
SELF_ASK_WITH_SEARCH = 'self-ask-with-search'ο
CONVERSATIONAL_REACT_DESCRIPTION = 'conversational-react-description'ο
CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'chat-zero-shot-react-description'ο
CHAT_CONVERSATIONAL_REACT_DESCRIPTION = 'chat-conversational-react-description'ο
STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'structured-chat-zero-shot-react-description'ο
OPENAI_FUNCTIONS = 'openai-functions'ο
OPENAI_MULTI_FUNCTIONS = 'openai-multi-functions'ο
class langchain.agents.BaseMultiActionAgent[source]ο
Bases: pydantic.main.BaseModel
Base Agent class.
Return type
None
abstract async aplan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Actions specifying what tool to use.
Return type
Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish]
dict(**kwargs)[source]ο
Return dictionary representation of agent.
Parameters
kwargs (Any) β
Return type
Dict
get_allowed_tools()[source]ο
Return type
Optional[List[str]]
abstract plan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Actions specifying what tool to use.
Return type
Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish]
return_stopped_response(early_stopping_method, intermediate_steps, **kwargs)[source]ο
Return response when agent has been stopped due to max iterations.
Parameters
early_stopping_method (str) β
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β
kwargs (Any) β
Return type
langchain.schema.AgentFinish
save(file_path)[source]ο
Save the agent.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the agent to.
Return type
None
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs()[source]ο
Return type
Dict
property return_values: List[str]ο
Return values of the agent.
class langchain.agents.BaseSingleActionAgent[source]ο
Bases: pydantic.main.BaseModel
Base Agent class.
Return type
None
abstract async aplan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
dict(**kwargs)[source]ο
Return dictionary representation of agent.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm_and_tools(llm, tools, callback_manager=None, **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
tools (Sequence[langchain.tools.base.BaseTool]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
kwargs (Any) β
Return type
langchain.agents.agent.BaseSingleActionAgent
get_allowed_tools()[source]ο
Return type
Optional[List[str]]
abstract plan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
return_stopped_response(early_stopping_method, intermediate_steps, **kwargs)[source]ο
Return response when agent has been stopped due to max iterations.
Parameters
early_stopping_method (str) β
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β
kwargs (Any) β
Return type
langchain.schema.AgentFinish
save(file_path)[source]ο
Save the agent.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the agent to.
Return type
None
Example:
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs()[source]ο
Return type
Dict
property return_values: List[str]ο
Return values of the agent.
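Example (a minimal sketch of a custom subclass; the class name, the hard-coded "search" tool, and the one-step logic are illustrative assumptions, not library code):
from typing import Any, List, Tuple, Union

from langchain.agents import BaseSingleActionAgent
from langchain.schema import AgentAction, AgentFinish


class OneStepAgent(BaseSingleActionAgent):
    """Toy agent: call the "search" tool once, then return its observation."""

    @property
    def input_keys(self) -> List[str]:
        return ["input"]

    def plan(
        self,
        intermediate_steps: List[Tuple[AgentAction, str]],
        callbacks: Any = None,
        **kwargs: Any,
    ) -> Union[AgentAction, AgentFinish]:
        if intermediate_steps:
            # We already have an observation; finish with it.
            return AgentFinish({"output": intermediate_steps[-1][1]}, log="")
        return AgentAction(tool="search", tool_input=kwargs["input"], log="")

    async def aplan(
        self,
        intermediate_steps: List[Tuple[AgentAction, str]],
        callbacks: Any = None,
        **kwargs: Any,
    ) -> Union[AgentAction, AgentFinish]:
        return self.plan(intermediate_steps, callbacks=callbacks, **kwargs)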
class langchain.agents.ConversationalAgent(*, llm_chain, output_parser=None, allowed_tools=None, ai_prefix='AI')[source]ο
Bases: langchain.agents.agent.Agent
An agent designed to hold a conversation in addition to using tools.
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
output_parser (langchain.agents.agent.AgentOutputParser) β
allowed_tools (Optional[List[str]]) β
ai_prefix (str) β
Return type
None
attribute ai_prefix: str = 'AI'ο
attribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]ο
classmethod create_prompt(tools, prefix='Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix='Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions='To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix='AI', human_prefix='Human', input_variables=None)[source]ο
Create prompt in the style of the zero shot agent.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) β List of tools the agent will have access to, used to format the
prompt.
prefix (str) β String to put before the list of tools.
suffix (str) β String to put after the list of tools.
ai_prefix (str) β String to use before AI output.
human_prefix (str) β String to use before human output.
input_variables (Optional[List[str]]) β List of input variables the final prompt will expect.
format_instructions (str) β
Returns
A PromptTemplate with the template assembled from the pieces here.
Return type
langchain.prompts.prompt.PromptTemplate
classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, prefix='Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix='Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions='To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix='AI', human_prefix='Human', input_variables=None, **kwargs)[source]ο
Construct an agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
tools (Sequence[langchain.tools.base.BaseTool]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
output_parser (Optional[langchain.agents.agent.AgentOutputParser]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
ai_prefix (str) β
human_prefix (str) β
input_variables (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.agents.agent.Agent
property llm_prefix: strο
Prefix to prepend to the LLM call.
property observation_prefix: strο
Prefix to prepend to the observation.
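Example (a hedged sketch of wiring this agent into an AgentExecutor with conversation memory; the tool list and memory key are assumptions):
from langchain.agents import AgentExecutor, ConversationalAgent, load_tools
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)

# The default prompt expects a {chat_history} variable, so the memory key must match.
memory = ConversationBufferMemory(memory_key="chat_history")
agent = ConversationalAgent.from_llm_and_tools(llm=llm, tools=tools)
executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, memory=memory, verbose=True
)
# executor.run("What is 2 to the 10th power?")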
class langchain.agents.ConversationalChatAgent(*, llm_chain, output_parser=None, allowed_tools=None, template_tool_response="TOOL RESPONSE: \n---------------------\n{observation}\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.")[source]ο
Bases: langchain.agents.agent.Agent
An agent designed to hold a conversation in addition to using tools.
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
output_parser (langchain.agents.agent.AgentOutputParser) β
allowed_tools (Optional[List[str]]) β
template_tool_response (str) β
Return type
None
attribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]ο
attribute template_tool_response: str = "TOOL RESPONSE: \n---------------------\n{observation}\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else."ο
classmethod create_prompt(tools, system_message='Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message="TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables=None, output_parser=None)[source]ο
Create a prompt for this class.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) β
system_message (str) β
human_message (str) β
input_variables (Optional[List[str]]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
Return type
langchain.prompts.base.BasePromptTemplate
classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, system_message='Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message="TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables=None, **kwargs)[source]ο
Construct an agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
tools (Sequence[langchain.tools.base.BaseTool]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
output_parser (Optional[langchain.agents.agent.AgentOutputParser]) β
system_message (str) β
human_message (str) β
input_variables (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.agents.agent.Agent
property llm_prefix: strο
Prefix to prepend to the LLM call.
property observation_prefix: strο
Prefix to prepend to the observation.
class langchain.agents.LLMSingleActionAgent(*, llm_chain, output_parser, stop)[source]ο
Bases: langchain.agents.agent.BaseSingleActionAgent
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
output_parser (langchain.agents.agent.AgentOutputParser) β
stop (List[str]) β
Return type
None
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
attribute output_parser: langchain.agents.agent.AgentOutputParser [Required]ο
attribute stop: List[str] [Required]ο
async aplan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
dict(**kwargs)[source]ο
Return dictionary representation of agent.
Parameters
kwargs (Any) β
Return type
Dict
plan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
tool_run_logging_kwargs()[source]ο
Return type
Dict
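Example (a minimal sketch; the prompt template, the trivial output parser, and the "search" tool name are illustrative assumptions, not library defaults):
from typing import Union

from langchain import LLMChain, OpenAI, PromptTemplate
from langchain.agents import AgentOutputParser, LLMSingleActionAgent
from langchain.schema import AgentAction, AgentFinish


class SimpleParser(AgentOutputParser):
    """Tiny parser: 'Final Answer:' ends the run, otherwise call the "search" tool."""

    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        if "Final Answer:" in text:
            answer = text.split("Final Answer:")[-1].strip()
            return AgentFinish({"output": answer}, log=text)
        return AgentAction(tool="search", tool_input=text.strip(), log=text)


prompt = PromptTemplate(
    input_variables=["input", "agent_scratchpad"],
    template="Answer the question.\n{agent_scratchpad}\nQuestion: {input}\n",
)
agent = LLMSingleActionAgent(
    llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=prompt),
    output_parser=SimpleParser(),
    stop=["\nObservation:"],
)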
class langchain.agents.MRKLChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source]ο
Bases: langchain.agents.agent.AgentExecutor
Chain that implements the MRKL system.
Example
from langchain import OpenAI, MRKLChain
from langchain.chains.mrkl.base import ChainConfig
llm = OpenAI(temperature=0)
chains = [...]
mrkl = MRKLChain.from_chains(llm=llm, chains=chains)
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) β
tools (Sequence[langchain.tools.base.BaseTool]) β
return_intermediate_steps (bool) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
handle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) β
Return type
None
classmethod from_chains(llm, chains, **kwargs)[source]ο
User friendly way to initialize the MRKL chain.
This is intended to be an easy way to get up and running with the
MRKL chain.
Parameters
llm (langchain.base_language.BaseLanguageModel) β The LLM to use as the agent LLM.
chains (List[langchain.agents.mrkl.base.ChainConfig]) β The chains the MRKL system has access to.
**kwargs β parameters to be passed to initialization.
kwargs (Any) β
Returns
An initialized MRKL chain.
Return type
langchain.agents.agent.AgentExecutor
Example
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain
from langchain.chains.mrkl.base import ChainConfig
llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm)
chains = [
ChainConfig(
action_name = "Search",
action=search.search,
action_description="useful for searching"
),
ChainConfig(
action_name="Calculator",
action=llm_math_chain.run,
action_description="useful for doing math"
)
]
mrkl = MRKLChain.from_chains(llm, chains)
class langchain.agents.OpenAIFunctionsAgent(*, llm, tools, prompt)[source]ο
Bases: langchain.agents.agent.BaseSingleActionAgent
An agent driven by OpenAI's function-powered API.
Parameters
llm (langchain.base_language.BaseLanguageModel) β This should be an instance of ChatOpenAI, specifically a model
that supports using functions.
tools (Sequence[langchain.tools.base.BaseTool]) β The tools this agent has access to.
prompt (langchain.prompts.base.BasePromptTemplate) β The prompt for this agent, should support agent_scratchpad as one
of the variables. For an easy way to construct this prompt, use
OpenAIFunctionsAgent.create_prompt(...)
Return type
None
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
attribute prompt: langchain.prompts.base.BasePromptTemplate [Required]ο
attribute tools: Sequence[langchain.tools.base.BaseTool] [Required]ο
async aplan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
**kwargs β User inputs.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
classmethod create_prompt(system_message=SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), extra_prompt_messages=None)[source]ο
Create prompt for this agent.
Parameters
system_message (Optional[langchain.schema.SystemMessage]) β Message to use as the system message that will be the
first in the prompt.
extra_prompt_messages (Optional[List[langchain.prompts.chat.BaseMessagePromptTemplate]]) β Prompt messages that will be placed between the
system message and the new human input.
Returns
A prompt template to pass into this agent.
Return type
langchain.prompts.base.BasePromptTemplate
classmethod from_llm_and_tools(llm, tools, callback_manager=None, extra_prompt_messages=None, system_message=SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), **kwargs)[source]ο
Construct an agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
tools (Sequence[langchain.tools.base.BaseTool]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
extra_prompt_messages (Optional[List[langchain.prompts.chat.BaseMessagePromptTemplate]]) β
system_message (Optional[langchain.schema.SystemMessage]) β
kwargs (Any) β
Return type
langchain.agents.agent.BaseSingleActionAgent
get_allowed_tools()[source]ο
Get allowed tools.
Return type
List[str]
plan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date, along with observations
**kwargs β User inputs.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
property functions: List[dict]ο
property input_keys: List[str]ο
Get input keys. Input refers to user input here.
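Example (a hedged sketch; the model name and tool list are assumptions, and the model must support function calling):
from langchain.agents import AgentExecutor, OpenAIFunctionsAgent, load_tools
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
tools = load_tools(["llm-math"], llm=llm)

agent = OpenAIFunctionsAgent.from_llm_and_tools(llm=llm, tools=tools)
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
# executor.run("What is 7 times 13?")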
class langchain.agents.ReActChain(llm, docstore, *, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source]ο
Bases: langchain.agents.agent.AgentExecutor
Chain that implements the ReAct paper.
Example
from langchain import ReActChain, OpenAI
from langchain.docstore import Wikipedia
react = ReActChain(llm=OpenAI(), docstore=Wikipedia())
Parameters
llm (langchain.base_language.BaseLanguageModel) β
docstore (langchain.docstore.base.Docstore) β
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) β
tools (Sequence[langchain.tools.base.BaseTool]) β
return_intermediate_steps (bool) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
handle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) β
Return type
None
class langchain.agents.ReActTextWorldAgent(*, llm_chain, output_parser=None, allowed_tools=None)[source]ο
Bases: langchain.agents.react.base.ReActDocstoreAgent
Agent for the ReAct TextWorld chain.
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
output_parser (langchain.agents.agent.AgentOutputParser) β
allowed_tools (Optional[List[str]]) β
Return type
None
classmethod create_prompt(tools)[source]ο
Return default prompt.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) β
Return type
langchain.prompts.base.BasePromptTemplate
class langchain.agents.SelfAskWithSearchChain(llm, search_chain, *, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source]ο
Bases: langchain.agents.agent.AgentExecutor
Chain that does self-ask with search.
Example
from langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper
search_chain = GoogleSerperAPIWrapper()
self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)
Parameters
llm (langchain.base_language.BaseLanguageModel) β
search_chain (Union[langchain.utilities.google_serper.GoogleSerperAPIWrapper, langchain.utilities.serpapi.SerpAPIWrapper]) β
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) β
tools (Sequence[langchain.tools.base.BaseTool]) β
return_intermediate_steps (bool) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
handle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) β
Return type
None
class langchain.agents.StructuredChatAgent(*, llm_chain, output_parser=None, allowed_tools=None)[source]ο
Bases: langchain.agents.agent.Agent
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
output_parser (langchain.agents.agent.AgentOutputParser) β
allowed_tools (Optional[List[str]]) β
Return type
None
attribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]ο
classmethod create_prompt(tools, prefix='Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix='Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template='{input}\n\n{agent_scratchpad}', format_instructions='Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\nΒ "action": $TOOL_NAME,\nΒ "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\nΒ "action": "Final Answer",\nΒ "action_input": "Final response to human"\n}}}}\n```', input_variables=None, memory_prompts=None)[source]ο
Create a prompt for this class.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) β
prefix (str) β
suffix (str) β
human_message_template (str) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
memory_prompts (Optional[List[langchain.prompts.base.BasePromptTemplate]]) β
Return type
langchain.prompts.base.BasePromptTemplate
classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, prefix='Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix='Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template='{input}\n\n{agent_scratchpad}', format_instructions='Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\nΒ "action": $TOOL_NAME,\nΒ "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\nΒ "action": "Final Answer",\nΒ "action_input": "Final response to human"\n}}}}\n```', input_variables=None, memory_prompts=None, **kwargs)[source]ο
Construct an agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
tools (Sequence[langchain.tools.base.BaseTool]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
output_parser (Optional[langchain.agents.agent.AgentOutputParser]) β
prefix (str) β
suffix (str) β
human_message_template (str) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
memory_prompts (Optional[List[langchain.prompts.base.BasePromptTemplate]]) β
kwargs (Any) β
Return type
langchain.agents.agent.Agent
property llm_prefix: strο
Prefix to prepend to the LLM call.
property observation_prefix: strο
Prefix to prepend to the observation.
class langchain.agents.Tool(name, func, description, *, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, coroutine=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that takes in a function or coroutine directly.
Parameters
name (str) β
func (Callable[[...], str]) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
coroutine (Optional[Callable[[...], Awaitable[str]]]) β
Return type
None
attribute coroutine: Optional[Callable[[...], Awaitable[str]]] = Noneο
The asynchronous version of the function.
attribute description: str = ''ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute func: Callable[[...], str] [Required]ο
The function to run when the tool is called.
classmethod from_function(func, name, description, return_direct=False, args_schema=None, **kwargs)[source]ο
Initialize tool from a function.
Parameters
func (Callable) β
name (str) β
description (str) β
return_direct (bool) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
kwargs (Any) β
Return type
langchain.tools.base.Tool
property args: dictο
The toolβs input arguments.
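Example (an illustrative sketch; the function, tool name, and description are placeholders):
from langchain.agents import Tool


def lookup_weather(city: str) -> str:
    """Pretend weather lookup, used only for illustration."""
    return f"It is sunny in {city}."


weather_tool = Tool.from_function(
    func=lookup_weather,
    name="weather",
    description="Useful for answering questions about the current weather in a city.",
)
print(weather_tool.run("Paris"))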
class langchain.agents.ZeroShotAgent(*, llm_chain, output_parser=None, allowed_tools=None)[source]ο
Bases: langchain.agents.agent.Agent
Agent for the MRKL chain.
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
output_parser (langchain.agents.agent.AgentOutputParser) β
allowed_tools (Optional[List[str]]) β
Return type
None
attribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]ο
classmethod create_prompt(tools, prefix='Answer the following questions as best you can. You have access to the following tools:', suffix='Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None)[source]ο
Create prompt in the style of the zero shot agent.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) β List of tools the agent will have access to, used to format the
prompt.
prefix (str) β String to put before the list of tools.
suffix (str) β String to put after the list of tools.
input_variables (Optional[List[str]]) β List of input variables the final prompt will expect.
format_instructions (str) β
Returns
A PromptTemplate with the template assembled from the pieces here.
Return type
langchain.prompts.prompt.PromptTemplate
classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, prefix='Answer the following questions as best you can. You have access to the following tools:', suffix='Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, **kwargs)[source]ο
Construct an agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
tools (Sequence[langchain.tools.base.BaseTool]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
output_parser (Optional[langchain.agents.agent.AgentOutputParser]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.agents.agent.Agent
property llm_prefix: strο
Prefix to prepend to the LLM call.
property observation_prefix: strο
Prefix to prepend to the observation.
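Example (a minimal sketch of assembling the agent by hand; the tool list is an assumption):
from langchain import LLMChain, OpenAI
from langchain.agents import AgentExecutor, ZeroShotAgent, load_tools

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)

prompt = ZeroShotAgent.create_prompt(tools)
llm_chain = LLMChain(llm=llm, prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=[t.name for t in tools])
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
# executor.run("What is 3 to the power of 5?")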
langchain.agents.create_csv_agent(llm, path, pandas_kwargs=None, **kwargs)[source]ο
Create a CSV agent by loading the CSV into a dataframe and using the pandas agent.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
path (Union[str, List[str]]) β
pandas_kwargs (Optional[dict]) β
kwargs (Any) β
Return type
langchain.agents.agent.AgentExecutor
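Example (an illustrative sketch; "titanic.csv" is a placeholder path for any CSV readable by pandas):
from langchain.agents import create_csv_agent
from langchain.llms import OpenAI

agent = create_csv_agent(OpenAI(temperature=0), "titanic.csv", verbose=True)
# agent.run("How many rows are in the file?")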
langchain.agents.create_json_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix='Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a JSON agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.json.toolkit.JsonToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_openapi_agent(llm, toolkit, callback_manager=None, prefix="You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix='Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, return_intermediate_steps=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct an OpenAPI agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
verbose (bool) β
return_intermediate_steps (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_pandas_dataframe_agent(llm, df, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix=None, suffix=None, input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, include_df_in_prompt=True, **kwargs)[source]ο
Construct a pandas agent from an LLM and dataframe.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
df (Any) β
agent_type (langchain.agents.agent_types.AgentType) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (Optional[str]) β
suffix (Optional[str]) β
input_variables (Optional[List[str]]) β
verbose (bool) β
return_intermediate_steps (bool) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
include_df_in_prompt (Optional[bool]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
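Example (an illustrative sketch with a toy dataframe):
import pandas as pd

from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI

df = pd.DataFrame({"city": ["Paris", "Rome"], "population_m": [2.1, 2.8]})
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
# agent.run("Which city has the larger population?")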
langchain.agents.create_pbi_agent(llm, toolkit, powerbi=None, callback_manager=None, prefix='You are an agent designed to help users interact with a PowerBI Dataset.\n\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix='Begin!\n\nQuestion: {input}\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples=None, input_variables=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a Power BI agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) β
powerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
examples (Optional[str]) β
input_variables (Optional[List[str]]) β
top_k (int) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_pbi_chat_agent(llm, toolkit, powerbi=None, callback_manager=None, output_parser=None, prefix='Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix="TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples=None, input_variables=None, memory=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a Power BI agent from a chat LLM and tools.
If you supply only a toolkit and no Power BI dataset, the same LLM is used for both the toolkit and the agent.
Parameters
llm (langchain.chat_models.base.BaseChatModel) β
toolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) β
powerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
output_parser (Optional[langchain.agents.agent.AgentOutputParser]) β
prefix (str) β
suffix (str) β
examples (Optional[str]) β
input_variables (Optional[List[str]]) β
memory (Optional[langchain.memory.chat_memory.BaseChatMemory]) β
top_k (int) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_spark_dataframe_agent(llm, df, callback_manager=None, prefix='\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix='\nThis is the result of `print(df.first())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, **kwargs)[source]ο
Construct a Spark agent from an LLM and dataframe.
Parameters
llm (langchain.llms.base.BaseLLM) β
df (Any) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
input_variables (Optional[List[str]]) β
verbose (bool) β
return_intermediate_steps (bool) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_spark_sql_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with Spark SQL.\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix='Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a Spark SQL agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
top_k (int) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_sql_agent(llm, toolkit, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix='You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix=None, format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a SQL agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit) β
agent_type (langchain.agents.agent_types.AgentType) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (Optional[str]) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
top_k (int) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
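Example (a hedged sketch; the SQLite connection string is a placeholder):
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///chinook.db")
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
# agent.run("How many tables are in the database?")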
langchain.agents.create_vectorstore_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a vectorstore agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_vectorstore_router_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a vectorstore router agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.get_all_tool_names()[source]ο
Get a list of all possible tool names.
Return type
List[str]
langchain.agents.initialize_agent(tools, llm, agent=None, callback_manager=None, agent_path=None, agent_kwargs=None, *, tags=None, **kwargs)[source]ο
Load an agent executor given tools and LLM.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) β List of tools this agent has access to.
llm (langchain.base_language.BaseLanguageModel) β Language model to use as the agent.
agent (Optional[langchain.agents.agent_types.AgentType]) β Agent type to use. If None and agent_path is also None, will default to
AgentType.ZERO_SHOT_REACT_DESCRIPTION.
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β CallbackManager to use. Global callback manager is used if
not provided. Defaults to None.
agent_path (Optional[str]) β Path to serialized agent to use.
agent_kwargs (Optional[dict]) β Additional keyword arguments to pass to the underlying agent.
tags (Optional[Sequence[str]]) β Tags to apply to the traced runs.
**kwargs β Additional keyword arguments passed to the agent executor.
kwargs (Any) β
Returns
An agent executor
Return type
langchain.agents.agent.AgentExecutor
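Example (a minimal sketch; the tool list is an assumption):
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
# agent.run("What is 12 squared?")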
langchain.agents.load_agent(path, **kwargs)[source]ο
Unified method for loading an agent from LangChainHub or the local filesystem.
Parameters
path (Union[str, pathlib.Path]) β
kwargs (Any) β
Return type
Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]
langchain.agents.load_huggingface_tool(task_or_repo_id, model_repo_id=None, token=None, remote=False, **kwargs)[source]ο
Loads a tool from the HuggingFace Hub.
Parameters
task_or_repo_id (str) β Task or model repo id.
model_repo_id (Optional[str]) β Optional model repo id.
token (Optional[str]) β Optional token.
remote (bool) β Whether to run the tool remotely. Defaults to False.
**kwargs β
kwargs (Any) β
Returns
A tool.
Return type
langchain.tools.base.BaseTool
langchain.agents.load_tools(tool_names, llm=None, callbacks=None, **kwargs)[source]ο
Load tools based on their name.
Parameters
tool_names (List[str]) β Names of the tools to load.
llm (Optional[langchain.base_language.BaseLanguageModel]) β Optional language model, may be needed to initialize certain tools.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Optional callback manager or list of callback handlers.
If not provided, default global callback manager will be used.
kwargs (Any) β
Returns
List of tools.
Return type
List[langchain.tools.base.BaseTool]
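Example (an illustrative sketch; the tool names available depend on the installed version):
from langchain.agents import get_all_tool_names, load_tools
from langchain.llms import OpenAI

print(get_all_tool_names())  # all names accepted by load_tools

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)  # "llm-math" needs an LLM to initialize
print([t.name for t in tools])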
langchain.agents.tool(*args, return_direct=False, args_schema=None, infer_schema=True)[source]ο
Make tools out of functions; can be used with or without arguments.
Parameters
*args β The arguments to the tool.
return_direct (bool) β Whether to return directly from the tool rather
than continuing the agent loop.
args_schema (Optional[Type[pydantic.main.BaseModel]]) β optional argument schema for user to specify
infer_schema (bool) β Whether to infer the schema of the arguments from
the functionβs signature. This also makes the resultant tool
accept a dictionary input to its run() function.
args (Union[str, Callable]) β
Return type
Callable
Requires:
Function must be of type (str) -> str
Function must have a docstring
Examples
@tool
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return

@tool("search", return_direct=True)
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return
Source code for langchain.requests
"""Lightweight wrapper around requests library, with async support."""
from contextlib import asynccontextmanager
from typing import Any, AsyncGenerator, Dict, Optional
import aiohttp
import requests
from pydantic import BaseModel, Extra
class Requests(BaseModel):
"""Wrapper around requests to handle auth and async.
The main purpose of this wrapper is to handle authentication (by saving
headers) and enable easy async methods on the same base object.
"""
headers: Optional[Dict[str, str]] = None
aiosession: Optional[aiohttp.ClientSession] = None
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
def get(self, url: str, **kwargs: Any) -> requests.Response:
"""GET the URL and return the text."""
return requests.get(url, headers=self.headers, **kwargs)
def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
"""POST to the URL and return the text."""
return requests.post(url, json=data, headers=self.headers, **kwargs)
def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
"""PATCH the URL and return the text."""
return requests.patch(url, json=data, headers=self.headers, **kwargs)
def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
"""PUT the URL and return the text."""
return requests.put(url, json=data, headers=self.headers, **kwargs)
def delete(self, url: str, **kwargs: Any) -> requests.Response:
"""DELETE the URL and return the text."""
return requests.delete(url, headers=self.headers, **kwargs)
@asynccontextmanager
async def _arequest(
self, method: str, url: str, **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""Make an async request."""
if not self.aiosession:
async with aiohttp.ClientSession() as session:
async with session.request(
method, url, headers=self.headers, **kwargs
) as response:
yield response
else:
async with self.aiosession.request(
method, url, headers=self.headers, **kwargs
) as response:
yield response
@asynccontextmanager
async def aget(
self, url: str, **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""GET the URL and return the text asynchronously."""
async with self._arequest("GET", url, **kwargs) as response:
yield response
@asynccontextmanager
async def apost(
self, url: str, data: Dict[str, Any], **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""POST to the URL and return the text asynchronously."""
async with self._arequest("POST", url, **kwargs) as response:
yield response
@asynccontextmanager
async def apatch(
self, url: str, data: Dict[str, Any], **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""PATCH the URL and return the text asynchronously."""
async with self._arequest("PATCH", url, **kwargs) as response:
yield response
@asynccontextmanager
async def aput(
self, url: str, data: Dict[str, Any], **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""PUT the URL and return the text asynchronously."""
async with self._arequest("PUT", url, **kwargs) as response:
yield response
@asynccontextmanager
async def adelete(
self, url: str, **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""DELETE the URL and return the text asynchronously."""
async with self._arequest("DELETE", url, **kwargs) as response:
yield response
[docs]class TextRequestsWrapper(BaseModel):
"""Lightweight wrapper around requests library.
The main purpose of this wrapper is to always return a text output.
"""
headers: Optional[Dict[str, str]] = None
aiosession: Optional[aiohttp.ClientSession] = None
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def requests(self) -> Requests:
return Requests(headers=self.headers, aiosession=self.aiosession)
[docs] def get(self, url: str, **kwargs: Any) -> str:
"""GET the URL and return the text."""
return self.requests.get(url, **kwargs).text
[docs] def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""POST to the URL and return the text."""
return self.requests.post(url, data, **kwargs).text
[docs] def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""PATCH the URL and return the text."""
return self.requests.patch(url, data, **kwargs).text
[docs] def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""PUT the URL and return the text."""
return self.requests.put(url, data, **kwargs).text
[docs] def delete(self, url: str, **kwargs: Any) -> str:
"""DELETE the URL and return the text."""
return self.requests.delete(url, **kwargs).text
[docs] async def aget(self, url: str, **kwargs: Any) -> str:
"""GET the URL and return the text asynchronously."""
async with self.requests.aget(url, **kwargs) as response:
return await response.text()
[docs] async def apost(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""POST to the URL and return the text asynchronously."""
async with self.requests.apost(url, data, **kwargs) as response:
return await response.text()
[docs] async def apatch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""PATCH the URL and return the text asynchronously."""
async with self.requests.apatch(url, data, **kwargs) as response:
return await response.text()
[docs] async def aput(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""PUT the URL and return the text asynchronously."""
async with self.requests.aput(url, data, **kwargs) as response:
return await response.text()
[docs] async def adelete(self, url: str, **kwargs: Any) -> str:
"""DELETE the URL and return the text asynchronously."""
async with self.requests.adelete(url, **kwargs) as response:
return await response.text()
# For backwards compatibility
RequestsWrapper = TextRequestsWrapper | https://api.python.langchain.com/en/latest/_modules/langchain/requests.html |
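A brief, illustrative usage sketch (the URL below is a placeholder and not part of the module):
from langchain.requests import TextRequestsWrapper

wrapper = TextRequestsWrapper(headers={"Accept": "application/json"})
# Synchronous calls return the response body as text.
body = wrapper.get("https://example.com/api/items")
# The async variants also return text and must be awaited inside an event loop:
# body = await wrapper.aget("https://example.com/api/items")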
Source code for langchain.text_splitter
"""Functionality for splitting text."""
from __future__ import annotations
import copy
import logging
import re
from abc import ABC, abstractmethod
from dataclasses import dataclass
from enum import Enum
from typing import (
AbstractSet,
Any,
Callable,
Collection,
Dict,
Iterable,
List,
Literal,
Optional,
Sequence,
Tuple,
Type,
TypedDict,
TypeVar,
Union,
cast,
)
from langchain.docstore.document import Document
from langchain.schema import BaseDocumentTransformer
logger = logging.getLogger(__name__)
TS = TypeVar("TS", bound="TextSplitter")
def _split_text_with_regex(
text: str, separator: str, keep_separator: bool
) -> List[str]:
# Now that we have the separator, split the text
if separator:
if keep_separator:
# The parentheses in the pattern keep the delimiters in the result.
_splits = re.split(f"({separator})", text)
splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)]
if len(_splits) % 2 == 0:
splits += _splits[-1:]
splits = [_splits[0]] + splits
else:
splits = text.split(separator)
else:
splits = list(text)
return [s for s in splits if s != ""]
[docs]class TextSplitter(BaseDocumentTransformer, ABC):
"""Interface for splitting text into chunks."""
def __init__(
self,
chunk_size: int = 4000,
chunk_overlap: int = 200,
length_function: Callable[[str], int] = len,
keep_separator: bool = False,
add_start_index: bool = False,
) -> None:
"""Create a new TextSplitter.
Args:
chunk_size: Maximum size of chunks to return
chunk_overlap: Overlap in characters between chunks
length_function: Function that measures the length of given chunks
keep_separator: Whether or not to keep the separator in the chunks
add_start_index: If `True`, includes chunk's start index in metadata
"""
if chunk_overlap > chunk_size:
raise ValueError(
f"Got a larger chunk overlap ({chunk_overlap}) than chunk size "
f"({chunk_size}), should be smaller."
)
self._chunk_size = chunk_size
self._chunk_overlap = chunk_overlap
self._length_function = length_function
self._keep_separator = keep_separator
self._add_start_index = add_start_index
[docs] @abstractmethod
def split_text(self, text: str) -> List[str]:
"""Split text into multiple components."""
[docs] def create_documents(
self, texts: List[str], metadatas: Optional[List[dict]] = None
) -> List[Document]:
"""Create documents from a list of texts."""
_metadatas = metadatas or [{}] * len(texts)
documents = []
for i, text in enumerate(texts):
index = -1
for chunk in self.split_text(text):
metadata = copy.deepcopy(_metadatas[i])
if self._add_start_index:
index = text.find(chunk, index + 1)
metadata["start_index"] = index
new_doc = Document(page_content=chunk, metadata=metadata)
documents.append(new_doc)
return documents
[docs] def split_documents(self, documents: Iterable[Document]) -> List[Document]:
"""Split documents."""
texts, metadatas = [], []
for doc in documents:
texts.append(doc.page_content)
metadatas.append(doc.metadata)
return self.create_documents(texts, metadatas=metadatas)
def _join_docs(self, docs: List[str], separator: str) -> Optional[str]:
text = separator.join(docs)
text = text.strip()
if text == "":
return None
else:
return text
def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:
# We now want to combine these smaller pieces into medium size
# chunks to send to the LLM.
separator_len = self._length_function(separator)
docs = []
current_doc: List[str] = []
total = 0
for d in splits:
_len = self._length_function(d)
if (
total + _len + (separator_len if len(current_doc) > 0 else 0)
> self._chunk_size
):
if total > self._chunk_size:
logger.warning(
f"Created a chunk of size {total}, "
f"which is longer than the specified {self._chunk_size}"
)
if len(current_doc) > 0:
doc = self._join_docs(current_doc, separator)
if doc is not None:
docs.append(doc)
# Keep on popping if:
# - we have a larger chunk than in the chunk overlap
# - or if we still have any chunks and the length is long
while total > self._chunk_overlap or (
total + _len + (separator_len if len(current_doc) > 0 else 0)
> self._chunk_size
and total > 0
):
total -= self._length_function(current_doc[0]) + (
separator_len if len(current_doc) > 1 else 0
)
current_doc = current_doc[1:]
current_doc.append(d)
total += _len + (separator_len if len(current_doc) > 1 else 0)
doc = self._join_docs(current_doc, separator)
if doc is not None:
docs.append(doc)
return docs
[docs] @classmethod
def from_huggingface_tokenizer(cls, tokenizer: Any, **kwargs: Any) -> TextSplitter:
"""Text splitter that uses HuggingFace tokenizer to count length."""
try:
from transformers import PreTrainedTokenizerBase
if not isinstance(tokenizer, PreTrainedTokenizerBase):
raise ValueError(
"Tokenizer received was not an instance of PreTrainedTokenizerBase"
)
def _huggingface_tokenizer_length(text: str) -> int:
return len(tokenizer.encode(text))
except ImportError:
raise ValueError(
"Could not import transformers python package. "
"Please install it with `pip install transformers`."
)
return cls(length_function=_huggingface_tokenizer_length, **kwargs)
[docs] @classmethod
def from_tiktoken_encoder(
cls: Type[TS],
encoding_name: str = "gpt2",
model_name: Optional[str] = None,
allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
disallowed_special: Union[Literal["all"], Collection[str]] = "all",
**kwargs: Any,
) -> TS:
"""Text splitter that uses tiktoken encoder to count length."""
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to calculate max_tokens_for_prompt. "
"Please install it with `pip install tiktoken`."
)
if model_name is not None:
enc = tiktoken.encoding_for_model(model_name)
else:
enc = tiktoken.get_encoding(encoding_name)
def _tiktoken_encoder(text: str) -> int:
return len(
enc.encode(
text,
allowed_special=allowed_special,
disallowed_special=disallowed_special,
)
)
if issubclass(cls, TokenTextSplitter):
extra_kwargs = {
"encoding_name": encoding_name,
"model_name": model_name,
"allowed_special": allowed_special,
"disallowed_special": disallowed_special,
}
kwargs = {**kwargs, **extra_kwargs}
return cls(length_function=_tiktoken_encoder, **kwargs)
[docs] def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Transform sequence of documents by splitting them."""
return self.split_documents(list(documents))
[docs] async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Asynchronously transform a sequence of documents by splitting them."""
raise NotImplementedError
[docs]class CharacterTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at characters."""
def __init__(self, separator: str = "\n\n", **kwargs: Any) -> None:
"""Create a new TextSplitter."""
super().__init__(**kwargs)
self._separator = separator
[docs] def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
# First we naively split the large input into a bunch of smaller ones.
splits = _split_text_with_regex(text, self._separator, self._keep_separator)
_separator = "" if self._keep_separator else self._separator
return self._merge_splits(splits, _separator)
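# Example (illustrative, not part of the original module): basic CharacterTextSplitter
# usage; the text and chunk sizes below are arbitrary.
from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter(separator="\n\n", chunk_size=30, chunk_overlap=0)
text = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."
chunks = splitter.split_text(text)  # each paragraph becomes its own chunk
docs = splitter.create_documents([text], metadatas=[{"source": "example"}])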
[docs]class LineType(TypedDict):
"""Line type as typed dict."""
metadata: Dict[str, str]
content: str
[docs]class HeaderType(TypedDict):
"""Header type as typed dict."""
level: int
name: str
data: str
[docs]class MarkdownHeaderTextSplitter:
"""Implementation of splitting markdown files based on specified headers."""
def __init__(
self, headers_to_split_on: List[Tuple[str, str]], return_each_line: bool = False
):
"""Create a new MarkdownHeaderTextSplitter.
Args:
headers_to_split_on: Headers we want to track
return_each_line: Return each line w/ associated headers
"""
# Output line-by-line or aggregated into chunks w/ common headers
self.return_each_line = return_each_line
# Given the headers we want to split on,
# (e.g., "#, ##, etc") order by length
self.headers_to_split_on = sorted(
headers_to_split_on, key=lambda split: len(split[0]), reverse=True
)
[docs] def aggregate_lines_to_chunks(self, lines: List[LineType]) -> List[Document]:
"""Combine lines with common metadata into chunks
Args:
lines: Line of text / associated header metadata
"""
aggregated_chunks: List[LineType] = []
for line in lines:
if (
aggregated_chunks
and aggregated_chunks[-1]["metadata"] == line["metadata"]
):
# If the last line in the aggregated list
# has the same metadata as the current line,
# append the current content to the last line's content
aggregated_chunks[-1]["content"] += " \n" + line["content"]
else:
# Otherwise, append the current line to the aggregated list
aggregated_chunks.append(line)
return [
Document(page_content=chunk["content"], metadata=chunk["metadata"])
for chunk in aggregated_chunks
]
[docs] def split_text(self, text: str) -> List[Document]:
"""Split markdown file
Args:
text: Markdown file"""
# Split the input text by newline character ("\n").
lines = text.split("\n")
# Final output
lines_with_metadata: List[LineType] = []
# Content and metadata of the chunk currently being processed
current_content: List[str] = []
current_metadata: Dict[str, str] = {}
# Keep track of the nested header structure
# header_stack: List[Dict[str, Union[int, str]]] = []
header_stack: List[HeaderType] = []
initial_metadata: Dict[str, str] = {}
for line in lines:
stripped_line = line.strip()
# Check each line against each of the header types (e.g., #, ##)
for sep, name in self.headers_to_split_on:
# Check if line starts with a header that we intend to split on
if stripped_line.startswith(sep) and (
# Header with no text OR header is followed by space
# Both are valid conditions that sep is being used as a header
len(stripped_line) == len(sep)
or stripped_line[len(sep)] == " "
):
# Ensure we are tracking the header as metadata
if name is not None:
# Get the current header level
current_header_level = sep.count("#")
# Pop out headers of lower or same level from the stack
while (
header_stack
and header_stack[-1]["level"] >= current_header_level
):
# We have encountered a new header
# at the same or higher level
popped_header = header_stack.pop()
# Clear the metadata for the
# popped header in initial_metadata
if popped_header["name"] in initial_metadata:
initial_metadata.pop(popped_header["name"])
# Push the current header to the stack
header: HeaderType = {
"level": current_header_level,
"name": name,
"data": stripped_line[len(sep) :].strip(),
}
header_stack.append(header)
# Update initial_metadata with the current header
initial_metadata[name] = header["data"]
# Add the previous line to the lines_with_metadata
# only if current_content is not empty
if current_content:
lines_with_metadata.append(
{
"content": "\n".join(current_content),
"metadata": current_metadata.copy(),
}
)
current_content.clear()
break
else:
if stripped_line:
current_content.append(stripped_line)
elif current_content:
lines_with_metadata.append(
{
"content": "\n".join(current_content),
"metadata": current_metadata.copy(),
}
)
current_content.clear()
current_metadata = initial_metadata.copy()
if current_content:
lines_with_metadata.append(
{"content": "\n".join(current_content), "metadata": current_metadata}
)
# lines_with_metadata has each line with associated header metadata
# aggregate these into chunks based on common metadata
if not self.return_each_line:
return self.aggregate_lines_to_chunks(lines_with_metadata)
else:
return [
Document(page_content=chunk["content"], metadata=chunk["metadata"])
for chunk in lines_with_metadata
]
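# Example (illustrative, not part of the original module): MarkdownHeaderTextSplitter
# usage; the headers and markdown text below are arbitrary.
from langchain.text_splitter import MarkdownHeaderTextSplitter

headers_to_split_on = [("#", "Header 1"), ("##", "Header 2")]
md_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
md_text = "# Intro\n\nSome intro text.\n\n## Details\n\nMore text here."
docs = md_splitter.split_text(md_text)
# Each Document carries its enclosing header values in metadata,
# e.g. {"Header 1": "Intro", "Header 2": "Details"} for the second chunk.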
# should be in newer Python versions (3.10+)
# @dataclass(frozen=True, kw_only=True, slots=True)
[docs]@dataclass(frozen=True)
class Tokenizer:
chunk_overlap: int
tokens_per_chunk: int
decode: Callable[[list[int]], str]
encode: Callable[[str], List[int]]
[docs]def split_text_on_tokens(*, text: str, tokenizer: Tokenizer) -> List[str]:
"""Split incoming text and return chunks."""
splits: List[str] = []
input_ids = tokenizer.encode(text)
start_idx = 0
cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))
chunk_ids = input_ids[start_idx:cur_idx]
while start_idx < len(input_ids):
splits.append(tokenizer.decode(chunk_ids))
start_idx += tokenizer.tokens_per_chunk - tokenizer.chunk_overlap
cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))
chunk_ids = input_ids[start_idx:cur_idx]
return splits
[docs]class TokenTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at tokens."""
def __init__(
self,
encoding_name: str = "gpt2",
model_name: Optional[str] = None,
allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
disallowed_special: Union[Literal["all"], Collection[str]] = "all",
**kwargs: Any,
) -> None:
"""Create a new TextSplitter."""
super().__init__(**kwargs)
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to for TokenTextSplitter. "
"Please install it with `pip install tiktoken`."
)
if model_name is not None:
enc = tiktoken.encoding_for_model(model_name)
else:
enc = tiktoken.get_encoding(encoding_name)
self._tokenizer = enc
self._allowed_special = allowed_special
self._disallowed_special = disallowed_special
[docs] def split_text(self, text: str) -> List[str]:
def _encode(_text: str) -> List[int]:
return self._tokenizer.encode(
_text,
allowed_special=self._allowed_special,
disallowed_special=self._disallowed_special,
)
tokenizer = Tokenizer(
chunk_overlap=self._chunk_overlap,
tokens_per_chunk=self._chunk_size,
decode=self._tokenizer.decode,
encode=_encode,
)
return split_text_on_tokens(text=text, tokenizer=tokenizer)
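# Example (illustrative, not part of the original module): token-based splitting.
# Requires the tiktoken package; the sizes below are arbitrary.
from langchain.text_splitter import CharacterTextSplitter, TokenTextSplitter

token_splitter = TokenTextSplitter(encoding_name="gpt2", chunk_size=10, chunk_overlap=2)
chunks = token_splitter.split_text("some long text that should be split by token count")

# Any TextSplitter subclass can also measure length in tokens:
char_by_tokens = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)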
[docs]class SentenceTransformersTokenTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at tokens."""
def __init__(
self,
chunk_overlap: int = 50,
model_name: str = "sentence-transformers/all-mpnet-base-v2",
tokens_per_chunk: Optional[int] = None,
**kwargs: Any,
) -> None:
"""Create a new TextSplitter."""
super().__init__(**kwargs, chunk_overlap=chunk_overlap)
try:
from sentence_transformers import SentenceTransformer
except ImportError:
raise ImportError(
"Could not import sentence_transformer python package. "
"This is needed in order to for SentenceTransformersTokenTextSplitter. "
"Please install it with `pip install sentence-transformers`."
)
self.model_name = model_name
self._model = SentenceTransformer(self.model_name)
self.tokenizer = self._model.tokenizer
self._initialize_chunk_configuration(tokens_per_chunk=tokens_per_chunk)
def _initialize_chunk_configuration(
self, *, tokens_per_chunk: Optional[int]
) -> None:
self.maximum_tokens_per_chunk = cast(int, self._model.max_seq_length)
if tokens_per_chunk is None:
self.tokens_per_chunk = self.maximum_tokens_per_chunk
else:
self.tokens_per_chunk = tokens_per_chunk
if self.tokens_per_chunk > self.maximum_tokens_per_chunk:
raise ValueError(
f"The token limit of the models '{self.model_name}'"
f" is: {self.maximum_tokens_per_chunk}."
f" Argument tokens_per_chunk={self.tokens_per_chunk}"
f" > maximum token limit."
)
[docs] def split_text(self, text: str) -> List[str]:
def encode_strip_start_and_stop_token_ids(text: str) -> List[int]:
return self._encode(text)[1:-1]
tokenizer = Tokenizer(
chunk_overlap=self._chunk_overlap,
tokens_per_chunk=self.tokens_per_chunk,
decode=self.tokenizer.decode,
encode=encode_strip_start_and_stop_token_ids,
)
return split_text_on_tokens(text=text, tokenizer=tokenizer)
[docs] def count_tokens(self, *, text: str) -> int:
return len(self._encode(text))
_max_length_equal_32_bit_integer = 2**32
def _encode(self, text: str) -> List[int]:
token_ids_with_start_and_end_token_ids = self.tokenizer.encode(
text,
max_length=self._max_length_equal_32_bit_integer,
truncation="do_not_truncate",
)
return token_ids_with_start_and_end_token_ids
[docs]class Language(str, Enum):
CPP = "cpp"
GO = "go"
JAVA = "java"
JS = "js"
PHP = "php"
PROTO = "proto"
PYTHON = "python"
RST = "rst"
RUBY = "ruby"
RUST = "rust"
SCALA = "scala"
SWIFT = "swift"
MARKDOWN = "markdown"
LATEX = "latex"
HTML = "html"
SOL = "sol"
[docs]class RecursiveCharacterTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at characters.
Recursively tries to split by different characters to find one
that works.
"""
def __init__(
self,
separators: Optional[List[str]] = None,
keep_separator: bool = True,
**kwargs: Any,
) -> None:
"""Create a new TextSplitter."""
super().__init__(keep_separator=keep_separator, **kwargs)
self._separators = separators or ["\n\n", "\n", " ", ""]
def _split_text(self, text: str, separators: List[str]) -> List[str]:
"""Split incoming text and return chunks."""
final_chunks = []
# Get appropriate separator to use
separator = separators[-1]
new_separators = []
for i, _s in enumerate(separators):
if _s == "":
separator = _s
break
if re.search(_s, text):
separator = _s
new_separators = separators[i + 1 :]
break
splits = _split_text_with_regex(text, separator, self._keep_separator)
# Now go merging things, recursively splitting longer texts.
_good_splits = []
_separator = "" if self._keep_separator else separator
for s in splits:
if self._length_function(s) < self._chunk_size:
_good_splits.append(s)
else:
if _good_splits:
merged_text = self._merge_splits(_good_splits, _separator)
final_chunks.extend(merged_text)
_good_splits = []
if not new_separators:
final_chunks.append(s)
else:
other_info = self._split_text(s, new_separators)
final_chunks.extend(other_info)
if _good_splits:
merged_text = self._merge_splits(_good_splits, _separator)
final_chunks.extend(merged_text)
return final_chunks
[docs] def split_text(self, text: str) -> List[str]:
return self._split_text(text, self._separators)
[docs] @classmethod
def from_language(
cls, language: Language, **kwargs: Any
) -> RecursiveCharacterTextSplitter:
separators = cls.get_separators_for_language(language)
return cls(separators=separators, **kwargs)
[docs] @staticmethod
def get_separators_for_language(language: Language) -> List[str]:
if language == Language.CPP:
return [
# Split along class definitions
"\nclass ",
# Split along function definitions
"\nvoid ",
"\nint ",
"\nfloat ",
"\ndouble ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.GO:
return [
# Split along function definitions
"\nfunc ",
"\nvar ",
"\nconst ",
"\ntype ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.JAVA:
return [
# Split along class definitions
"\nclass ",
# Split along method definitions
"\npublic ",
"\nprotected ",
"\nprivate ",
"\nstatic ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.JS:
return [
# Split along function definitions
"\nfunction ",
"\nconst ",
"\nlet ",
"\nvar ",
"\nclass ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
"\ndefault ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.PHP:
return [
# Split along function definitions
"\nfunction ",
# Split along class definitions
"\nclass ",
# Split along control flow statements
"\nif ",
"\nforeach ",
"\nwhile ",
"\ndo ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.PROTO:
return [
# Split along message definitions
"\nmessage ",
# Split along service definitions
"\nservice ",
# Split along enum definitions
"\nenum ",
# Split along option definitions
"\noption ",
# Split along import statements
"\nimport ",
# Split along syntax declarations
"\nsyntax ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.PYTHON:
return [
# First, try to split along class definitions
"\nclass ",
"\ndef ",
"\n\tdef ",
# Now split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.RST:
return [
# Split along section titles
"\n=+\n",
"\n-+\n",
"\n\*+\n",
# Split along directive markers
"\n\n.. *\n\n",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.RUBY:
return [
# Split along method definitions
"\ndef ",
"\nclass ",
# Split along control flow statements
"\nif ",
"\nunless ",
"\nwhile ",
"\nfor ",
"\ndo ",
"\nbegin ",
"\nrescue ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.RUST:
return [
# Split along function definitions
"\nfn ",
"\nconst ",
"\nlet ",
# Split along control flow statements
"\nif ",
"\nwhile ",
"\nfor ",
"\nloop ",
"\nmatch ",
"\nconst ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.SCALA:
return [
# Split along class definitions
"\nclass ",
"\nobject ",
# Split along method definitions
"\ndef ",
"\nval ",
"\nvar ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nmatch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.SWIFT:
return [
# Split along function definitions
"\nfunc ",
# Split along class definitions
"\nclass ",
"\nstruct ",
"\nenum ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\ndo ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.MARKDOWN:
return [
# First, try to split along Markdown headings (starting with level 2)
"\n#{1,6} ",
# Note the alternative syntax for headings (below) is not handled here
# Heading level 2
# ---------------
# End of code block
"```\n",
# Horizontal lines
"\n\*\*\*+\n",
"\n---+\n",
"\n___+\n",
# Note: the horizontal-rule patterns above match runs of three or more
# of ***, ---, or ___ on their own line.
"\n\n",
"\n",
" ",
"",
]
elif language == Language.LATEX:
return [
# First, try to split along Latex sections
"\n\\\chapter{",
"\n\\\section{",
"\n\\\subsection{",
"\n\\\subsubsection{",
# Now split by environments
"\n\\\begin{enumerate}",
"\n\\\begin{itemize}",
"\n\\\begin{description}",
"\n\\\begin{list}",
"\n\\\begin{quote}",
"\n\\\begin{quotation}",
"\n\\\begin{verse}",
"\n\\\begin{verbatim}",
# Now split by math environments
"\n\\\begin{align}",
"$$",
"$",
# Now split by the normal type of lines
" ",
"",
]
elif language == Language.HTML:
return [
# First, try to split along HTML tags
"<body",
"<div",
"<p",
"<br",
"<li",
"<h1",
"<h2",
"<h3",
"<h4",
"<h5",
"<h6",
"<span",
"<table",
"<tr",
"<td",
"<th",
"<ul",
"<ol",
"<header",
"<footer",
"<nav",
# Head
"<head",
"<style",
"<script",
"<meta",
"<title",
"",
]
elif language == Language.SOL:
return [
# Split along compiler informations definitions
"\npragma ",
"\nusing ",
# Split along contract definitions
"\ncontract ",
"\ninterface ",
"\nlibrary ",
# Split along method definitions
"\nconstructor ",
"\ntype ",
"\nfunction ",
"\nevent ",
"\nmodifier ",
"\nerror ",
"\nstruct ",
"\nenum ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\ndo while ",
"\nassembly ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
else:
raise ValueError(
f"Language {language} is not supported! "
f"Please choose from {list(Language)}"
)
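# Example (illustrative, not part of the original module): language-aware splitting;
# the code snippet below is arbitrary.
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

python_splitter = RecursiveCharacterTextSplitter.from_language(
    Language.PYTHON, chunk_size=200, chunk_overlap=0
)
code = "def foo():\n    return 1\n\ndef bar():\n    return 2\n"
chunks = python_splitter.split_text(code)  # splits preferentially at def boundaries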
[docs]class NLTKTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at sentences using NLTK."""
def __init__(self, separator: str = "\n\n", **kwargs: Any) -> None:
"""Initialize the NLTK splitter."""
super().__init__(**kwargs)
try:
from nltk.tokenize import sent_tokenize
self._tokenizer = sent_tokenize
except ImportError:
raise ImportError(
"NLTK is not installed, please install it with `pip install nltk`."
)
self._separator = separator
[docs] def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
# First we naively split the large input into a bunch of smaller ones.
splits = self._tokenizer(text)
return self._merge_splits(splits, self._separator)
[docs]class SpacyTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at sentences using Spacy."""
def __init__(
self, separator: str = "\n\n", pipeline: str = "en_core_web_sm", **kwargs: Any
) -> None:
"""Initialize the spacy text splitter."""
super().__init__(**kwargs)
try:
import spacy
except ImportError:
raise ImportError(
"Spacy is not installed, please install it with `pip install spacy`."
)
self._tokenizer = spacy.load(pipeline)
self._separator = separator
[docs] def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
splits = (str(s) for s in self._tokenizer(text).sents)
return self._merge_splits(splits, self._separator)
# For backwards compatibility
[docs]class PythonCodeTextSplitter(RecursiveCharacterTextSplitter):
"""Attempts to split the text along Python syntax."""
def __init__(self, **kwargs: Any) -> None:
"""Initialize a PythonCodeTextSplitter."""
separators = self.get_separators_for_language(Language.PYTHON)
super().__init__(separators=separators, **kwargs)
[docs]class MarkdownTextSplitter(RecursiveCharacterTextSplitter):
"""Attempts to split the text along Markdown-formatted headings."""
def __init__(self, **kwargs: Any) -> None:
"""Initialize a MarkdownTextSplitter."""
separators = self.get_separators_for_language(Language.MARKDOWN)
super().__init__(separators=separators, **kwargs)
[docs]class LatexTextSplitter(RecursiveCharacterTextSplitter):
"""Attempts to split the text along Latex-formatted layout elements."""
def __init__(self, **kwargs: Any) -> None:
"""Initialize a LatexTextSplitter."""
separators = self.get_separators_for_language(Language.LATEX)
super().__init__(separators=separators, **kwargs) | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
Source code for langchain.schema
"""Common schema objects."""
from __future__ import annotations
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import (
Any,
Dict,
Generic,
List,
NamedTuple,
Optional,
Sequence,
TypeVar,
Union,
)
from uuid import UUID
from pydantic import BaseModel, Field, root_validator
from langchain.load.serializable import Serializable
RUN_KEY = "__run"
[docs]def get_buffer_string(
messages: List[BaseMessage], human_prefix: str = "Human", ai_prefix: str = "AI"
) -> str:
"""Get buffer string of messages."""
string_messages = []
for m in messages:
if isinstance(m, HumanMessage):
role = human_prefix
elif isinstance(m, AIMessage):
role = ai_prefix
elif isinstance(m, SystemMessage):
role = "System"
elif isinstance(m, FunctionMessage):
role = "Function"
elif isinstance(m, ChatMessage):
role = m.role
else:
raise ValueError(f"Got unsupported message type: {m}")
message = f"{role}: {m.content}"
if isinstance(m, AIMessage) and "function_call" in m.additional_kwargs:
message += f"{m.additional_kwargs['function_call']}"
string_messages.append(message)
return "\n".join(string_messages)
[docs]@dataclass
class AgentAction:
"""Agent's action to take."""
tool: str
tool_input: Union[str, dict]
log: str
[docs]class AgentFinish(NamedTuple):
"""Agent's return value."""
return_values: dict
log: str
[docs]class Generation(Serializable):
"""Output of a single generation."""
text: str
"""Generated text output."""
generation_info: Optional[Dict[str, Any]] = None
"""Raw generation info response from the provider"""
"""May include things like reason for finishing (e.g. in OpenAI)"""
# TODO: add log probs
@property
def lc_serializable(self) -> bool:
"""This class is LangChain serializable."""
return True
[docs]class BaseMessage(Serializable):
"""Message object."""
content: str
additional_kwargs: dict = Field(default_factory=dict)
@property
@abstractmethod
def type(self) -> str:
"""Type of the message, used for serialization."""
@property
def lc_serializable(self) -> bool:
"""This class is LangChain serializable."""
return True
[docs]class HumanMessage(BaseMessage):
"""Type of message that is spoken by the human."""
example: bool = False
@property
def type(self) -> str:
"""Type of the message, used for serialization."""
return "human"
[docs]class AIMessage(BaseMessage):
"""Type of message that is spoken by the AI."""
example: bool = False
@property
def type(self) -> str:
"""Type of the message, used for serialization."""
return "ai"
[docs]class SystemMessage(BaseMessage):
"""Type of message that is a system message."""
@property
def type(self) -> str:
"""Type of the message, used for serialization."""
return "system"
[docs]class FunctionMessage(BaseMessage):
name: str
@property
def type(self) -> str:
"""Type of the message, used for serialization."""
return "function"
[docs]class ChatMessage(BaseMessage):
"""Type of message with arbitrary speaker."""
role: str
@property
def type(self) -> str:
"""Type of the message, used for serialization."""
return "chat"
def _message_to_dict(message: BaseMessage) -> dict:
return {"type": message.type, "data": message.dict()}
[docs]def messages_to_dict(messages: List[BaseMessage]) -> List[dict]:
"""Convert messages to dict.
Args:
messages: List of messages to convert.
Returns:
List of dicts.
"""
return [_message_to_dict(m) for m in messages]
def _message_from_dict(message: dict) -> BaseMessage:
_type = message["type"]
if _type == "human":
return HumanMessage(**message["data"])
elif _type == "ai":
return AIMessage(**message["data"])
elif _type == "system":
return SystemMessage(**message["data"])
elif _type == "chat":
return ChatMessage(**message["data"])
else:
raise ValueError(f"Got unexpected type: {_type}")
[docs]def messages_from_dict(messages: List[dict]) -> List[BaseMessage]:
"""Convert messages from dict.
Args:
messages: List of messages (dicts) to convert.
Returns:
List of messages (BaseMessages).
"""
return [_message_from_dict(m) for m in messages]
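# Example (illustrative, not part of the original module): round-tripping messages
# through the dict helpers above.
from langchain.schema import (
    AIMessage,
    HumanMessage,
    get_buffer_string,
    messages_from_dict,
    messages_to_dict,
)

history = [HumanMessage(content="hi"), AIMessage(content="hello")]
as_dicts = messages_to_dict(history)      # JSON-serializable list of dicts
restored = messages_from_dict(as_dicts)   # back to BaseMessage objects
assert get_buffer_string(restored) == "Human: hi\nAI: hello"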
[docs]class ChatGeneration(Generation):
"""Output of a single generation."""
text = ""
message: BaseMessage
@root_validator
def set_text(cls, values: Dict[str, Any]) -> Dict[str, Any]:
values["text"] = values["message"].content
return values
[docs]class RunInfo(BaseModel):
"""Class that contains all relevant metadata for a Run."""
run_id: UUID
[docs]class ChatResult(BaseModel):
"""Class that contains all relevant information for a Chat Result."""
generations: List[ChatGeneration]
"""List of the things generated."""
llm_output: Optional[dict] = None
"""For arbitrary LLM provider specific output."""
[docs]class LLMResult(BaseModel):
"""Class that contains all relevant information for an LLM Result."""
generations: List[List[Generation]]
"""List of the things generated. This is List[List[]] because
each input could have multiple generations."""
llm_output: Optional[dict] = None
"""For arbitrary LLM provider specific output."""
run: Optional[List[RunInfo]] = None
"""Run metadata."""
[docs] def flatten(self) -> List[LLMResult]:
"""Flatten generations into a single list."""
llm_results = []
for i, gen_list in enumerate(self.generations):
# Avoid double counting tokens in OpenAICallback
if i == 0:
llm_results.append(
LLMResult(
generations=[gen_list],
llm_output=self.llm_output,
)
)
else:
if self.llm_output is not None:
llm_output = self.llm_output.copy()
llm_output["token_usage"] = dict()
else:
llm_output = None
llm_results.append(
LLMResult(
generations=[gen_list],
llm_output=llm_output,
)
)
return llm_results
def __eq__(self, other: object) -> bool:
if not isinstance(other, LLMResult):
return NotImplemented
return (
self.generations == other.generations
and self.llm_output == other.llm_output
)
[docs]class PromptValue(Serializable, ABC):
[docs] @abstractmethod
def to_string(self) -> str:
"""Return prompt as string."""
[docs] @abstractmethod
def to_messages(self) -> List[BaseMessage]:
"""Return prompt as messages."""
[docs]class BaseMemory(Serializable, ABC):
"""Base interface for memory in chains."""
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
@property
@abstractmethod
def memory_variables(self) -> List[str]:
"""Input keys this memory class will load dynamically."""
[docs] @abstractmethod
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
"""Return key-value pairs given the text input to the chain.
If None, return all memories
"""
[docs] @abstractmethod
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save the context of this model run to memory."""
[docs] @abstractmethod
def clear(self) -> None:
"""Clear memory contents."""
[docs]class BaseChatMessageHistory(ABC):
"""Base interface for chat message history
See `ChatMessageHistory` for default implementation.
"""
"""
Example:
.. code-block:: python
class FileChatMessageHistory(BaseChatMessageHistory):
storage_path: str
session_id: str
@property
def messages(self):
with open(os.path.join(storage_path, session_id), 'r', encoding='utf-8') as f:
messages = json.loads(f.read())
return messages_from_dict(messages)
def add_message(self, message: BaseMessage) -> None:
messages = messages_to_dict(self.messages)
messages.append(_message_to_dict(message))
with open(os.path.join(storage_path, session_id), 'w') as f:
json.dump(messages, f)
def clear(self):
with open(os.path.join(storage_path, session_id), 'w') as f:
f.write("[]")
"""
messages: List[BaseMessage]
[docs] def add_user_message(self, message: str) -> None:
"""Add a user message to the store"""
self.add_message(HumanMessage(content=message))
[docs] def add_ai_message(self, message: str) -> None:
"""Add an AI message to the store"""
self.add_message(AIMessage(content=message))
[docs] def add_message(self, message: BaseMessage) -> None:
"""Add a self-created message to the store"""
raise NotImplementedError
[docs] @abstractmethod
def clear(self) -> None:
"""Remove all messages from the store"""
[docs]class Document(Serializable):
"""Interface for interacting with a document."""
page_content: str
metadata: dict = Field(default_factory=dict)
[docs]class BaseRetriever(ABC):
"""Base interface for retrievers."""
[docs] @abstractmethod
def get_relevant_documents(self, query: str) -> List[Document]:
"""Get documents relevant for a query.
Args:
query: string to find relevant documents for
Returns:
List of relevant documents
"""
[docs] @abstractmethod
async def aget_relevant_documents(self, query: str) -> List[Document]:
"""Get documents relevant for a query.
Args:
query: string to find relevant documents for
Returns:
List of relevant documents
"""
# For backwards compatibility
Memory = BaseMemory
T = TypeVar("T")
[docs]class BaseLLMOutputParser(Serializable, ABC, Generic[T]):
[docs] @abstractmethod
def parse_result(self, result: List[Generation]) -> T:
"""Parse LLM Result."""
[docs]class BaseOutputParser(BaseLLMOutputParser, ABC, Generic[T]):
"""Class to parse the output of an LLM call.
Output parsers help structure language model responses.
"""
[docs] def parse_result(self, result: List[Generation]) -> T:
return self.parse(result[0].text)
[docs] @abstractmethod
def parse(self, text: str) -> T:
"""Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model )
and parses it into some structure.
Args:
text: output of language model
Returns:
structured output
"""
[docs] def parse_with_prompt(self, completion: str, prompt: PromptValue) -> Any:
"""Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Args:
completion: output of language model
prompt: prompt value
Returns:
structured output
"""
return self.parse(completion)
[docs] def get_format_instructions(self) -> str:
"""Instructions on how the LLM output should be formatted."""
raise NotImplementedError
@property
def _type(self) -> str:
"""Return the type key."""
raise NotImplementedError(
f"_type property is not implemented in class {self.__class__.__name__}."
" This is required for serialization."
)
[docs] def dict(self, **kwargs: Any) -> Dict:
"""Return dictionary representation of output parser."""
output_parser_dict = super().dict()
output_parser_dict["_type"] = self._type
return output_parser_dict
[docs]class NoOpOutputParser(BaseOutputParser[str]):
"""Output parser that just returns the text as is."""
@property
def lc_serializable(self) -> bool:
return True
@property
def _type(self) -> str:
return "default"
[docs] def parse(self, text: str) -> str:
return text
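# Example (illustrative, not part of the original module): a minimal custom parser
# built on BaseOutputParser; the class name and behaviour are hypothetical, shown
# only to illustrate the interface.
from typing import List
from langchain.schema import BaseOutputParser

class SimpleCommaListParser(BaseOutputParser[List[str]]):
    """Toy parser that splits a completion on commas."""

    def parse(self, text: str) -> List[str]:
        return [part.strip() for part in text.split(",") if part.strip()]

    @property
    def _type(self) -> str:
        return "simple_comma_list"

parser = SimpleCommaListParser()
assert parser.parse("red, green, blue") == ["red", "green", "blue"]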
[docs]class OutputParserException(ValueError):
"""Exception that output parsers should raise to signify a parsing error.
This exists to differentiate parsing errors from other code or execution errors
that also may arise inside the output parser. OutputParserExceptions will be
available to catch and handle in ways to fix the parsing error, while other
errors will be raised.
"""
def __init__(
self,
error: Any,
observation: str | None = None,
llm_output: str | None = None,
send_to_llm: bool = False,
):
super(OutputParserException, self).__init__(error)
if send_to_llm:
if observation is None or llm_output is None:
raise ValueError(
"Arguments 'observation' & 'llm_output'"
" are required if 'send_to_llm' is True"
)
self.observation = observation
self.llm_output = llm_output
self.send_to_llm = send_to_llm
[docs]class BaseDocumentTransformer(ABC):
"""Base interface for transforming documents."""
[docs] @abstractmethod
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Transform a list of documents."""
[docs] @abstractmethod
async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Asynchronously transform a list of documents.""" | https://api.python.langchain.com/en/latest/_modules/langchain/schema.html |
Source code for langchain.document_transformers
"""Transform documents"""
from typing import Any, Callable, List, Sequence
import numpy as np
from pydantic import BaseModel, Field
from langchain.embeddings.base import Embeddings
from langchain.math_utils import cosine_similarity
from langchain.schema import BaseDocumentTransformer, Document
class _DocumentWithState(Document):
"""Wrapper for a document that includes arbitrary state."""
state: dict = Field(default_factory=dict)
"""State associated with the document."""
def to_document(self) -> Document:
"""Convert the DocumentWithState to a Document."""
return Document(page_content=self.page_content, metadata=self.metadata)
@classmethod
def from_document(cls, doc: Document) -> "_DocumentWithState":
"""Create a DocumentWithState from a Document."""
if isinstance(doc, cls):
return doc
return cls(page_content=doc.page_content, metadata=doc.metadata)
[docs]def get_stateful_documents(
documents: Sequence[Document],
) -> Sequence[_DocumentWithState]:
"""Convert a list of documents to a list of documents with state.
Args:
documents: The documents to convert.
Returns:
A list of documents with state.
"""
return [_DocumentWithState.from_document(doc) for doc in documents]
def _filter_similar_embeddings(
embedded_documents: List[List[float]], similarity_fn: Callable, threshold: float
) -> List[int]:
"""Filter redundant documents based on the similarity of their embeddings."""
similarity = np.tril(similarity_fn(embedded_documents, embedded_documents), k=-1)
redundant = np.where(similarity > threshold)
redundant_stacked = np.column_stack(redundant)
redundant_sorted = np.argsort(similarity[redundant])[::-1]
included_idxs = set(range(len(embedded_documents)))
for first_idx, second_idx in redundant_stacked[redundant_sorted]:
if first_idx in included_idxs and second_idx in included_idxs:
# Default to dropping the second document of any highly similar pair.
included_idxs.remove(second_idx)
return list(sorted(included_idxs))
def _get_embeddings_from_stateful_docs(
embeddings: Embeddings, documents: Sequence[_DocumentWithState]
) -> List[List[float]]:
if len(documents) and "embedded_doc" in documents[0].state:
embedded_documents = [doc.state["embedded_doc"] for doc in documents]
else:
embedded_documents = embeddings.embed_documents(
[d.page_content for d in documents]
)
for doc, embedding in zip(documents, embedded_documents):
doc.state["embedded_doc"] = embedding
return embedded_documents
[docs]class EmbeddingsRedundantFilter(BaseDocumentTransformer, BaseModel):
"""Filter that drops redundant documents by comparing their embeddings."""
embeddings: Embeddings
"""Embeddings to use for embedding document contents."""
similarity_fn: Callable = cosine_similarity
"""Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity."""
similarity_threshold: float = 0.95
"""Threshold for determining when two documents are similar enough
to be considered redundant."""
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
[docs] def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Filter down documents."""
stateful_documents = get_stateful_documents(documents)
embedded_documents = _get_embeddings_from_stateful_docs(
self.embeddings, stateful_documents
)
included_idxs = _filter_similar_embeddings(
embedded_documents, self.similarity_fn, self.similarity_threshold
)
return [stateful_documents[i] for i in sorted(included_idxs)]
[docs] async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
raise NotImplementedError | https://api.python.langchain.com/en/latest/_modules/langchain/document_transformers.html |
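A hedged usage sketch of EmbeddingsRedundantFilter (OpenAIEmbeddings is only an example here; any Embeddings implementation works, and an API key is assumed):
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.embeddings import OpenAIEmbeddings  # assumed available
from langchain.schema import Document

docs = [
    Document(page_content="LangChain helps you build LLM apps."),
    Document(page_content="LangChain helps you build LLM applications."),
    Document(page_content="Completely unrelated content."),
]
redundant_filter = EmbeddingsRedundantFilter(embeddings=OpenAIEmbeddings())
# Near-duplicates (similarity above the 0.95 threshold) are dropped.
filtered = redundant_filter.transform_documents(docs)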
Source code for langchain.vectorstores.clickhouse
"""Wrapper around open source ClickHouse VectorSearch capability."""
from __future__ import annotations
import json
import logging
from hashlib import sha1
from threading import Thread
from typing import Any, Dict, Iterable, List, Optional, Tuple, Union
from pydantic import BaseSettings
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
logger = logging.getLogger()
def has_mul_sub_str(s: str, *args: Any) -> bool:
"""
Check if a string contains multiple substrings.
Args:
s: string to check.
*args: substrings to check.
Returns:
True if all substrings are in the string, False otherwise.
"""
for a in args:
if a not in s:
return False
return True
[docs]class ClickhouseSettings(BaseSettings):
"""ClickHouse Client Configuration
Attributes:
clickhouse_host (str) : A URL to connect to the ClickHouse backend.
Defaults to 'localhost'.
clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8123.
username (str) : Username to login. Defaults to None.
password (str) : Password to login. Defaults to None.
index_type (str): index type string.
index_param (list): index build parameter.
index_query_params(dict): index query parameters.
database (str) : Database name to find the table. Defaults to 'default'.
table (str) : Table name to operate on.
Defaults to 'vector_table'.
metric (str) : Metric to compute distance,
supported are ('angular', 'euclidean', 'manhattan', 'hamming',
'dot'). Defaults to 'angular'.
https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169
column_map (Dict) : Column type map to project column names onto langchain
semantics. Must have keys: `id`, `uuid`, `document`, `embedding`, `metadata`,
and must be the same size as the number of columns. For example:
.. code-block:: python
{
'id': 'text_id',
'uuid': 'global_unique_id',
'embedding': 'text_embedding',
'document': 'text_plain',
'metadata': 'metadata_dictionary_in_json',
}
Defaults to identity map.
"""
host: str = "localhost"
port: int = 8123
username: Optional[str] = None
password: Optional[str] = None
index_type: str = "annoy"
# Annoy supports L2Distance and cosineDistance.
index_param: Optional[Union[List, Dict]] = ["'L2Distance'", 100]
index_query_params: Dict[str, str] = {}
column_map: Dict[str, str] = {
"id": "id",
"uuid": "uuid",
"document": "document",
"embedding": "embedding",
"metadata": "metadata",
}
database: str = "default"
table: str = "langchain"
metric: str = "angular"
def __getitem__(self, item: str) -> Any:
return getattr(self, item)
class Config:
env_file = ".env"
env_prefix = "clickhouse_"
env_file_encoding = "utf-8"
[docs]class Clickhouse(VectorStore):
"""Wrapper around ClickHouse vector database
You need a `clickhouse-connect` python package, and a valid account
to connect to ClickHouse.
ClickHouse can not only search with simple vector indexes,
but also supports complex queries with multiple conditions,
constraints, and even sub-queries.
For more information, please visit
[ClickHouse official site](https://clickhouse.com/clickhouse)
"""
def __init__(
self,
embedding: Embeddings,
config: Optional[ClickhouseSettings] = None,
**kwargs: Any,
) -> None:
"""ClickHouse Wrapper to LangChain
embedding_function (Embeddings):
config (ClickHouseSettings): Configuration to ClickHouse Client
Other keyword arguments will pass into
[clickhouse-connect](https://docs.clickhouse.com/)
"""
try:
from clickhouse_connect import get_client
except ImportError:
raise ValueError(
"Could not import clickhouse connect python package. "
"Please install it with `pip install clickhouse-connect`."
)
try:
from tqdm import tqdm
self.pgbar = tqdm
except ImportError:
# Just in case if tqdm is not installed
self.pgbar = lambda x, **kwargs: x
super().__init__()
if config is not None:
self.config = config
else:
self.config = ClickhouseSettings()
assert self.config
assert self.config.host and self.config.port
assert (
self.config.column_map
and self.config.database
and self.config.table
and self.config.metric
)
for k in ["id", "embedding", "document", "metadata", "uuid"]:
assert k in self.config.column_map
assert self.config.metric in [
"angular",
"euclidean",
"manhattan",
"hamming",
"dot",
]
# initialize the schema
dim = len(embedding.embed_query("test"))
index_params = (
(
",".join([f"'{k}={v}'" for k, v in self.config.index_param.items()])
if self.config.index_param
else ""
)
if isinstance(self.config.index_param, Dict)
else ",".join([str(p) for p in self.config.index_param])
if isinstance(self.config.index_param, List)
else self.config.index_param
)
self.schema = f"""\
CREATE TABLE IF NOT EXISTS {self.config.database}.{self.config.table}(
{self.config.column_map['id']} Nullable(String),
{self.config.column_map['document']} Nullable(String),
{self.config.column_map['embedding']} Array(Float32),
{self.config.column_map['metadata']} JSON,
{self.config.column_map['uuid']} UUID DEFAULT generateUUIDv4(),
CONSTRAINT cons_vec_len CHECK length({self.config.column_map['embedding']}) = {dim},
INDEX vec_idx {self.config.column_map['embedding']} TYPE \
{self.config.index_type}({index_params}) GRANULARITY 1000
) ENGINE = MergeTree ORDER BY uuid SETTINGS index_granularity = 8192\
"""
self.dim = dim
self.BS = "\\"
self.must_escape = ("\\", "'")
self.embedding_function = embedding
self.dist_order = "ASC" # Only support ConsingDistance and L2Distance
# Create a connection to clickhouse
self.client = get_client(
host=self.config.host,
port=self.config.port,
username=self.config.username,
password=self.config.password,
**kwargs,
)
# Enable JSON type
self.client.command("SET allow_experimental_object_type=1")
# Enable Annoy index
self.client.command("SET allow_experimental_annoy_index=1")
self.client.command(self.schema)
[docs] def escape_str(self, value: str) -> str:
return "".join(f"{self.BS}{c}" if c in self.must_escape else c for c in value)
def _build_insert_sql(self, transac: Iterable, column_names: Iterable[str]) -> str:
ks = ",".join(column_names)
_data = []
for n in transac:
n = ",".join([f"'{self.escape_str(str(_n))}'" for _n in n])
_data.append(f"({n})")
i_str = f"""
INSERT INTO TABLE
{self.config.database}.{self.config.table}({ks})
VALUES
{','.join(_data)}
"""
return i_str
def _insert(self, transac: Iterable, column_names: Iterable[str]) -> None:
_insert_query = self._build_insert_sql(transac, column_names)
self.client.command(_insert_query)
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
batch_size: int = 32,
ids: Optional[Iterable[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Insert more texts through the embeddings and add to the VectorStore.
Args:
texts: Iterable of strings to add to the VectorStore.
ids: Optional list of ids to associate with the texts.
batch_size: Batch size of insertion
metadatas: Optional list of metadata dicts to associate with the texts
Returns:
List of ids from adding the texts into the VectorStore.
"""
# Embed and create the documents
ids = ids or [sha1(t.encode("utf-8")).hexdigest() for t in texts]
colmap_ = self.config.column_map
transac = []
column_names = {
colmap_["id"]: ids,
colmap_["document"]: texts,
colmap_["embedding"]: self.embedding_function.embed_documents(list(texts)),
}
metadatas = metadatas or [{} for _ in texts]
column_names[colmap_["metadata"]] = map(json.dumps, metadatas)
assert len(set(colmap_) - set(column_names)) >= 0
keys, values = zip(*column_names.items())
try:
t = None
for v in self.pgbar(
zip(*values), desc="Inserting data...", total=len(metadatas)
):
assert (
len(v[keys.index(self.config.column_map["embedding"])]) == self.dim
)
transac.append(v)
if len(transac) == batch_size:
if t:
t.join()
t = Thread(target=self._insert, args=[transac, keys])
t.start()
transac = []
if len(transac) > 0:
if t:
t.join()
self._insert(transac, keys)
return [i for i in ids]
except Exception as e:
logger.error(f"\033[91m\033[1m{type(e)}\033[0m \033[95m{str(e)}\033[0m")
return []
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[Dict[Any, Any]]] = None,
config: Optional[ClickhouseSettings] = None,
text_ids: Optional[Iterable[str]] = None,
batch_size: int = 32,
**kwargs: Any,
) -> Clickhouse:
"""Create ClickHouse wrapper with existing texts
Args:
embedding_function (Embeddings): Function to extract text embedding
texts (Iterable[str]): List or tuple of strings to be added
config (ClickHouseSettings, Optional): ClickHouse configuration
text_ids (Optional[Iterable], optional): IDs for the texts.
Defaults to None.
batch_size (int, optional): Batch size when transmitting data to ClickHouse.
Defaults to 32.
metadatas (List[dict], optional): Metadata for the texts. Defaults to None.
Other keyword arguments will pass into
[clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)
Returns:
ClickHouse Index
"""
ctx = cls(embedding, config, **kwargs)
ctx.add_texts(texts, ids=text_ids, batch_size=batch_size, metadatas=metadatas)
return ctx
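# Example (illustrative, not part of the original module): an end-to-end sketch.
# It assumes a reachable ClickHouse server and the clickhouse-connect package, and
# uses OpenAIEmbeddings purely as an example embedding function.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Clickhouse, ClickhouseSettings

settings = ClickhouseSettings(table="langchain_example")  # table name is illustrative
store = Clickhouse.from_texts(
    ["hello clickhouse", "vector search example"],
    OpenAIEmbeddings(),
    config=settings,
)
results = store.similarity_search("hello", k=1)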
def __repr__(self) -> str:
"""Text representation for ClickHouse Vector Store, prints backends, username
and schemas. Easy to use with `str(ClickHouse())`
Returns:
repr: string to show connection info and data schema
"""
_repr = f"\033[92m\033[1m{self.config.database}.{self.config.table} @ "
_repr += f"{self.config.host}:{self.config.port}\033[0m\n\n"
_repr += f"\033[1musername: {self.config.username}\033[0m\n\nTable Schema:\n"
_repr += "-" * 51 + "\n"
for r in self.client.query(
f"DESC {self.config.database}.{self.config.table}"
).named_results():
_repr += (
f"|\033[94m{r['name']:24s}\033[0m|\033[96m{r['type']:24s}\033[0m|\n"
)
_repr += "-" * 51 + "\n"
return _repr
def _build_query_sql(
self, q_emb: List[float], topk: int, where_str: Optional[str] = None
) -> str:
q_emb_str = ",".join(map(str, q_emb))
if where_str:
where_str = f"PREWHERE {where_str}"
else:
where_str = ""
settings_strs = []
if self.config.index_query_params:
for k in self.config.index_query_params:
settings_strs.append(f"SETTING {k}={self.config.index_query_params[k]}")
q_str = f"""
SELECT {self.config.column_map['document']},
{self.config.column_map['metadata']}, dist
FROM {self.config.database}.{self.config.table}
{where_str}
ORDER BY L2Distance({self.config.column_map['embedding']}, [{q_emb_str}])
AS dist {self.dist_order}
LIMIT {topk} {' '.join(settings_strs)}
"""
return q_str
[docs] def similarity_search(
self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any
) -> List[Document]:
"""Perform a similarity search with ClickHouse
Args:
query (str): query string
k (int, optional): Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional): where condition string.
Defaults to None.
NOTE: Do not let end users fill this in directly, and always be aware
of SQL injection. When filtering on metadata, remember to
use `{self.metadata_column}.attribute` instead of `attribute`
alone. The default name for this column is `metadata`.
Returns:
List[Document]: List of Documents
"""
return self.similarity_search_by_vector(
self.embedding_function.embed_query(query), k, where_str, **kwargs
)
[docs] def similarity_search_by_vector(
self,
embedding: List[float],
k: int = 4,
where_str: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Perform a similarity search with ClickHouse by vectors
Args:
query (str): query string
k (int, optional): Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional): where condition string.
Defaults to None.
NOTE: Please do not let end-user to fill this and always be aware
of SQL injection. When dealing with metadatas, remember to
use `{self.metadata_column}.attribute` instead of `attribute`
alone. The default name for it is `metadata`.
Returns:
List[Document]: List of (Document, similarity)
"""
q_str = self._build_query_sql(embedding, k, where_str)
try:
return [
Document(
page_content=r[self.config.column_map["document"]],
metadata=r[self.config.column_map["metadata"]],
)
for r in self.client.query(q_str).named_results()
]
except Exception as e:
logger.error(f"\033[91m\033[1m{type(e)}\033[0m \033[95m{str(e)}\033[0m")
return []
[docs] def similarity_search_with_relevance_scores(
self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any
) -> List[Tuple[Document, float]]:
"""Perform a similarity search with ClickHouse
Args:
query (str): query string
k (int, optional): Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional): where condition string.
Defaults to None.
NOTE: Please do not let end users fill this and always be aware
of SQL injection. When dealing with metadatas, remember to
use `{self.metadata_column}.attribute` instead of `attribute`
alone. The default name for it is `metadata`.
Returns:
List[Tuple[Document, float]]: List of (Document, relevance score) pairs
"""
q_str = self._build_query_sql(
self.embedding_function.embed_query(query), k, where_str
)
try:
return [
(
Document(
page_content=r[self.config.column_map["document"]],
metadata=r[self.config.column_map["metadata"]],
),
r["dist"],
)
for r in self.client.query(q_str).named_results()
]
except Exception as e:
logger.error(f"\033[91m\033[1m{type(e)}\033[0m \033[95m{str(e)}\033[0m")
return []
[docs] def drop(self) -> None:
"""
Helper function: Drop data
"""
self.client.command(
f"DROP TABLE IF EXISTS {self.config.database}.{self.config.table}"
)
@property
def metadata_column(self) -> str:
return self.config.column_map["metadata"] | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html |
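# A minimal usage sketch of the Clickhouse store above. It assumes a reachable
# ClickHouse server matching the (default) ClickhouseSettings, an OPENAI_API_KEY
# in the environment, and an illustrative table name.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.clickhouse import Clickhouse, ClickhouseSettings

texts = ["ClickHouse is a column-oriented database.", "LangChain wraps many vector stores."]
settings = ClickhouseSettings(table="langchain_clickhouse_demo")  # table name is illustrative
store = Clickhouse.from_texts(texts, OpenAIEmbeddings(), config=settings)

# `where_str` is interpolated into the SQL PREWHERE clause; never expose it to end users.
docs = store.similarity_search("what is clickhouse?", k=2)
docs_and_scores = store.similarity_search_with_relevance_scores("what is clickhouse?", k=2)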
211b670b-f857-449e-9721-b44bc7f2bae3 | Source code for langchain.vectorstores.alibabacloud_opensearch
import json
import logging
import numbers
from hashlib import sha1
from typing import Any, Dict, Iterable, List, Optional, Tuple
from langchain.embeddings.base import Embeddings
from langchain.schema import Document
from langchain.vectorstores.base import VectorStore
logger = logging.getLogger()
[docs]class AlibabaCloudOpenSearchSettings:
"""Opensearch Client Configuration
Attribute:
endpoint (str) : The endpoint of opensearch instance, You can find it
from the console of Alibaba Cloud OpenSearch.
instance_id (str) : The identify of opensearch instance, You can find
it from the console of Alibaba Cloud OpenSearch.
datasource_name (str): The name of the data source specified when creating it.
username (str) : The username specified when purchasing the instance.
password (str) : The password specified when purchasing the instance.
embedding_index_name (str) : The name of the vector attribute specified
when configuring the instance attributes.
field_name_mapping (Dict) : Using field name mapping between opensearch
vector store and opensearch instance configuration table field names:
{
'id': 'The id field name map of index document.',
'document': 'The text field name map of index document.',
'embedding': 'In the embedding field of the opensearch instance,
the values must be in float16 multivalue type and separated by commas.',
'metadata_field_x': 'Metadata field mapping includes the mapped
field name and operator in the mapping value, separated by a comma
between the mapped field name and the operator.',
}
"""
endpoint: str
instance_id: str
username: str
password: str
datasource_name: str
embedding_index_name: str
field_name_mapping: Dict[str, str] = {
"id": "id",
"document": "document",
"embedding": "embedding",
"metadata_field_x": "metadata_field_x,operator",
}
def __init__(
self,
endpoint: str,
instance_id: str,
username: str,
password: str,
datasource_name: str,
embedding_index_name: str,
field_name_mapping: Dict[str, str],
) -> None:
self.endpoint = endpoint
self.instance_id = instance_id
self.username = username
self.password = password
self.datasource_name = datasource_name
self.embedding_index_name = embedding_index_name
self.field_name_mapping = field_name_mapping
def __getitem__(self, item: str) -> Any:
return getattr(self, item)
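# A hedged sketch of constructing the settings object defined above. All endpoint
# and credential values are placeholders; the metadata entry follows the
# "mapped_field,operator" convention described in the class docstring.
example_settings = AlibabaCloudOpenSearchSettings(
    endpoint="ha-cn-xxxxxx.opensearch.aliyuncs.com",
    instance_id="ha-cn-xxxxxx",
    username="your-username",
    password="your-password",
    datasource_name="your-datasource",
    embedding_index_name="embedding_index",
    field_name_mapping={
        "id": "id",
        "document": "document",
        "embedding": "embedding",
        "color": "color,=",  # hypothetical metadata field mapped with the "=" operator
    },
)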
def create_metadata(fields: Dict[str, Any]) -> Dict[str, Any]:
"""Create metadata from fields.
Args:
fields: The fields of the document. The fields must be a dict.
Returns:
metadata: The metadata of the document. The metadata must be a dict.
"""
metadata: Dict[str, Any] = {}
for key, value in fields.items():
if key == "id" or key == "document" or key == "embedding":
continue
metadata[key] = value
return metadata
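# A quick sketch of the helper above: the reserved id/document/embedding keys are
# dropped, and everything else becomes document metadata.
example_fields = {"id": "1", "document": "hello", "embedding": "0.1,0.2", "color": "red"}
assert create_metadata(example_fields) == {"color": "red"}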
[docs]class AlibabaCloudOpenSearch(VectorStore):
"""Alibaba Cloud OpenSearch Vector Store"""
def __init__(
self,
embedding: Embeddings,
config: AlibabaCloudOpenSearchSettings,
**kwargs: Any,
) -> None:
try:
from alibabacloud_ha3engine import client, models
from alibabacloud_tea_util import models as util_models
except ImportError:
raise ValueError(
"Could not import alibaba cloud opensearch python package. "
"Please install it with `pip install alibabacloud-ha3engine`."
)
self.config = config
self.embedding = embedding
self.runtime = util_models.RuntimeOptions(
connect_timeout=5000,
read_timeout=10000,
autoretry=False,
ignore_ssl=False,
max_idle_conns=50,
)
self.ha3EngineClient = client.Client(
models.Config(
endpoint=config.endpoint,
instance_id=config.instance_id,
protocol="http",
access_user_name=config.username,
access_pass_word=config.password,
)
)
self.options_headers: Dict[str, str] = {}
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> List[str]:
def _upsert(push_doc_list: List[Dict]) -> List[str]:
if push_doc_list is None or len(push_doc_list) == 0:
return []
try:
push_request = models.PushDocumentsRequestModel(
self.options_headers, push_doc_list
)
push_response = self.ha3EngineClient.push_documents(
self.config.datasource_name, field_name_map["id"], push_request
)
json_response = json.loads(push_response.body)
if json_response["status"] == "OK":
return [
push_doc["fields"][field_name_map["id"]]
for push_doc in push_doc_list
]
return []
except Exception as e:
logger.error(
f"add doc to endpoint:{self.config.endpoint} "
f"instance_id:{self.config.instance_id} failed.",
e,
)
raise e
from alibabacloud_ha3engine import models
ids = [sha1(t.encode("utf-8")).hexdigest() for t in texts]
embeddings = self.embedding.embed_documents(list(texts))
metadatas = metadatas or [{} for _ in texts]
field_name_map = self.config.field_name_mapping
add_doc_list = []
text_list = list(texts)
for idx, doc_id in enumerate(ids):
embedding = embeddings[idx] if idx < len(embeddings) else None
metadata = metadatas[idx] if idx < len(metadatas) else None
text = text_list[idx] if idx < len(text_list) else None
add_doc: Dict[str, Any] = dict()
add_doc_fields: Dict[str, Any] = dict()
add_doc_fields.__setitem__(field_name_map["id"], doc_id)
add_doc_fields.__setitem__(field_name_map["document"], text)
if embedding is not None:
add_doc_fields.__setitem__(
field_name_map["embedding"],
",".join(str(unit) for unit in embedding),
)
if metadata is not None:
for md_key, md_value in metadata.items():
add_doc_fields.__setitem__(
field_name_map[md_key].split(",")[0], md_value
)
add_doc.__setitem__("fields", add_doc_fields)
add_doc.__setitem__("cmd", "add")
add_doc_list.append(add_doc)
return _upsert(add_doc_list)
[docs] def similarity_search(
self,
query: str,
k: int = 4,
search_filter: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Document]:
embedding = self.embedding.embed_query(query)
return self.create_results(
self.inner_embedding_query(
embedding=embedding, search_filter=search_filter, k=k
)
)
[docs] def similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
search_filter: Optional[dict] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
embedding: List[float] = self.embedding.embed_query(query)
return self.create_results_with_score(
self.inner_embedding_query(
embedding=embedding, search_filter=search_filter, k=k
)
)
[docs] def similarity_search_by_vector(
self,
embedding: List[float],
k: int = 4,
search_filter: Optional[dict] = None,
**kwargs: Any,
) -> List[Document]:
return self.create_results(
self.inner_embedding_query(
embedding=embedding, search_filter=search_filter, k=k
)
)
[docs] def inner_embedding_query(
self,
embedding: List[float],
search_filter: Optional[Dict[str, Any]] = None,
k: int = 4,
) -> Dict[str, Any]:
def generate_embedding_query() -> str:
tmp_search_config_str = (
f"config=start:0,hit:{k},format:json&&cluster=general&&kvpairs="
f"first_formula:proxima_score({self.config.embedding_index_name})&&sort=+RANK"
)
tmp_query_str = (
f"&&query={self.config.embedding_index_name}:"
+ "'"
+ ",".join(str(x) for x in embedding)
+ "'"
)
if search_filter is not None:
filter_clause = "&&filter=" + " AND ".join(
[
create_filter(md_key, md_value)
for md_key, md_value in search_filter.items()
]
)
tmp_query_str += filter_clause
return tmp_search_config_str + tmp_query_str
def create_filter(md_key: str, md_value: Any) -> str:
md_filter_expr = self.config.field_name_mapping[md_key]
if md_filter_expr is None:
return ""
expr = md_filter_expr.split(",")
if len(expr) != 2:
logger.error(
f"filter {md_filter_expr} express is not correct, "
f"must contain mapping field and operator."
)
return ""
md_filter_key = expr[0].strip()
md_filter_operator = expr[1].strip()
if isinstance(md_value, numbers.Number):
return f"{md_filter_key} {md_filter_operator} {md_value}"
return f'{md_filter_key}{md_filter_operator}"{md_value}"'
def search_data(single_query_str: str) -> Dict[str, Any]:
search_query = models.SearchQuery(query=single_query_str)
search_request = models.SearchRequestModel(
self.options_headers, search_query
)
return json.loads(self.ha3EngineClient.search(search_request).body)
from alibabacloud_ha3engine import models
try:
query_str = generate_embedding_query()
json_response = search_data(query_str)
if len(json_response["errors"]) != 0:
logger.error(
f"query {self.config.endpoint} {self.config.instance_id} "
f"errors:{json_response['errors']} failed."
)
else:
return json_response
except Exception as e:
logger.error(
f"query instance endpoint:{self.config.endpoint} "
f"instance_id:{self.config.instance_id} failed.",
e,
)
return {}
[docs] def create_results(self, json_result: Dict[str, Any]) -> List[Document]:
items = json_result["result"]["items"]
query_result_list: List[Document] = []
for item in items:
fields = item["fields"]
query_result_list.append(
Document(
page_content=fields[self.config.field_name_mapping["document"]],
metadata=create_metadata(fields),
)
)
return query_result_list
[docs] def create_results_with_score(
self, json_result: Dict[str, Any]
) -> List[Tuple[Document, float]]:
items = json_result["result"]["items"]
query_result_list: List[Tuple[Document, float]] = []
for item in items:
fields = item["fields"]
query_result_list.append(
(
Document(
page_content=fields[self.config.field_name_mapping["document"]],
metadata=create_metadata(fields),
),
float(item["sortExprValues"][0]),
)
)
return query_result_list
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
config: Optional[AlibabaCloudOpenSearchSettings] = None,
**kwargs: Any,
) -> "AlibabaCloudOpenSearch":
if config is None:
raise Exception("config can't be none")
ctx = cls(embedding, config, **kwargs)
ctx.add_texts(texts=texts, metadatas=metadatas)
return ctx
[docs] @classmethod
def from_documents(
cls,
documents: List[Document],
embedding: Embeddings,
ids: Optional[List[str]] = None,
config: Optional[AlibabaCloudOpenSearchSettings] = None,
**kwargs: Any,
) -> "AlibabaCloudOpenSearch":
if config is None:
raise Exception("config can't be none")
texts = [d.page_content for d in documents]
metadatas = [d.metadata for d in documents]
return cls.from_texts(
texts=texts,
embedding=embedding,
metadatas=metadatas,
config=config,
**kwargs,
) | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html |
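# A minimal usage sketch of the store above. It assumes a reachable Alibaba Cloud
# OpenSearch instance, the hypothetical `example_settings` object sketched earlier,
# and an OPENAI_API_KEY in the environment; texts and filter values are illustrative.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.alibabacloud_opensearch import AlibabaCloudOpenSearch

store = AlibabaCloudOpenSearch.from_texts(
    texts=["a red widget", "a blue widget"],
    embedding=OpenAIEmbeddings(),
    metadatas=[{"color": "red"}, {"color": "blue"}],
    config=example_settings,
)
# `search_filter` keys must appear in field_name_mapping ("mapped_field,operator").
docs = store.similarity_search("widget", k=2, search_filter={"color": "red"})
docs_and_scores = store.similarity_search_with_relevance_scores("widget", k=2)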
3d3563a0-755a-41b7-8c5d-c7617ff1d62f | Source code for langchain.vectorstores.rocksetdb
"""Wrapper around Rockset vector database."""
from __future__ import annotations
import logging
from enum import Enum
from typing import Any, Iterable, List, Optional, Tuple
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
logger = logging.getLogger(__name__)
[docs]class Rockset(VectorStore):
"""Wrapper arpund Rockset vector database.
To use, you should have the `rockset` python package installed. Note that to use
this, the collection being used must already exist in your Rockset instance.
You must also ensure you use a Rockset ingest transformation to apply
`VECTOR_ENFORCE` on the column being used to store `embedding_key` in the
collection.
See: https://rockset.com/blog/introducing-vector-search-on-rockset/ for more details
Everything below assumes `commons` Rockset workspace.
TODO: Add support for workspace args.
Example:
.. code-block:: python
from langchain.vectorstores import Rockset
from langchain.embeddings.openai import OpenAIEmbeddings
import rockset
# Make sure you use the right host (region) for your Rockset instance
# and your API key has read-write access to your collection.
rs = rockset.RocksetClient(host=rockset.Regions.use1a1, api_key="***")
collection_name = "langchain_demo"
embeddings = OpenAIEmbeddings()
vectorstore = Rockset(rs, collection_name, embeddings,
"description", "description_embedding")
"""
def __init__(
self,
client: Any,
embeddings: Embeddings,
collection_name: str,
text_key: str,
embedding_key: str,
):
"""Initialize with Rockset client.
Args:
client: Rockset client object
collection_name: Rockset collection to insert docs into / query
embeddings: Langchain Embeddings object to use to generate
embedding for given text.
text_key: column in Rockset collection to use to store the text
embedding_key: column in Rockset collection to use to store the embedding.
Note: We must apply `VECTOR_ENFORCE()` on this column via
Rockset ingest transformation.
"""
try:
from rockset import RocksetClient
except ImportError:
raise ImportError(
"Could not import rockset client python package. "
"Please install it with `pip install rockset`."
)
if not isinstance(client, RocksetClient):
raise ValueError(
f"client should be an instance of rockset.RocksetClient, "
f"got {type(client)}"
)
# TODO: check that `collection_name` exists in rockset. Create if not.
self._client = client
self._collection_name = collection_name
self._embeddings = embeddings
self._text_key = text_key
self._embedding_key = embedding_key
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
batch_size: int = 32,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
ids: Optional list of ids to associate with the texts.
batch_size: Send documents in batches to rockset.
Returns:
List of ids from adding the texts into the vectorstore.
"""
batch: list[dict] = []
stored_ids = []
for i, text in enumerate(texts):
if len(batch) == batch_size:
stored_ids += self._write_documents_to_rockset(batch)
batch = []
doc = {}
if metadatas and len(metadatas) > i:
doc = metadatas[i]
if ids and len(ids) > i:
doc["_id"] = ids[i]
doc[self._text_key] = text
doc[self._embedding_key] = self._embeddings.embed_query(text)
batch.append(doc)
if len(batch) > 0:
stored_ids += self._write_documents_to_rockset(batch)
batch = []
return stored_ids
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
client: Any = None,
collection_name: str = "",
text_key: str = "",
embedding_key: str = "",
ids: Optional[List[str]] = None,
batch_size: int = 32,
**kwargs: Any,
) -> Rockset:
"""Create Rockset wrapper with existing texts.
This is intended as a quicker way to get started.
"""
# Sanitize inputs
assert client is not None, "Rockset Client cannot be None"
assert collection_name, "Collection name cannot be empty"
assert text_key, "Text key name cannot be empty"
assert embedding_key, "Embedding key cannot be empty"
rockset = cls(client, embedding, collection_name, text_key, embedding_key)
rockset.add_texts(texts, metadatas, ids, batch_size)
return rockset
# Rockset supports these vector distance functions.
[docs] class DistanceFunction(Enum):
COSINE_SIM = "COSINE_SIM"
EUCLIDEAN_DIST = "EUCLIDEAN_DIST"
DOT_PRODUCT = "DOT_PRODUCT"
# how to sort results for "similarity"
[docs] def order_by(self) -> str:
if self.value == "EUCLIDEAN_DIST":
return "ASC"
return "DESC"
[docs] def similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
distance_func: DistanceFunction = DistanceFunction.COSINE_SIM,
where_str: Optional[str] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Perform a similarity search with Rockset
Args:
query (str): Text to look up documents similar to.
distance_func (DistanceFunction): how to compute distance between two
vectors in Rockset.
k (int, optional): Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional): Metadata filters supplied as a
SQL `where` condition string. Defaults to None.
eg. "price<=70.0 AND brand='Nintendo'"
NOTE: Please do not let end users fill this and always be aware
of SQL injection.
Returns:
List[Tuple[Document, float]]: List of documents with their relevance score
"""
return self.similarity_search_by_vector_with_relevance_scores(
self._embeddings.embed_query(query),
k,
distance_func,
where_str,
**kwargs,
)
[docs] def similarity_search(
self,
query: str,
k: int = 4,
distance_func: DistanceFunction = DistanceFunction.COSINE_SIM,
where_str: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Same as `similarity_search_with_relevance_scores` but
doesn't return the scores.
"""
return self.similarity_search_by_vector(
self._embeddings.embed_query(query),
k,
distance_func,
where_str,
**kwargs,
)
[docs] def similarity_search_by_vector(
self,
embedding: List[float],
k: int = 4,
distance_func: DistanceFunction = DistanceFunction.COSINE_SIM,
where_str: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Accepts a query_embedding (vector), and returns documents with
similar embeddings."""
docs_and_scores = self.similarity_search_by_vector_with_relevance_scores(
embedding, k, distance_func, where_str, **kwargs
)
return [doc for doc, _ in docs_and_scores]
[docs] def similarity_search_by_vector_with_relevance_scores(
self,
embedding: List[float],
k: int = 4,
distance_func: DistanceFunction = DistanceFunction.COSINE_SIM,
where_str: Optional[str] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Accepts a query_embedding (vector), and returns documents with
similar embeddings along with their relevance scores."""
q_str = self._build_query_sql(embedding, distance_func, k, where_str)
try:
query_response = self._client.Queries.query(sql={"query": q_str})
except Exception as e:
logger.error("Exception when querying Rockset: %s\n", e)
return []
finalResult: list[Tuple[Document, float]] = []
for document in query_response.results:
metadata = {}
assert isinstance(
document, dict
), "document should be of type `dict[str,Any]`. But found: `{}`".format(
type(document)
)
for k, v in document.items():
if k == self._text_key:
assert isinstance(
v, str
), "page content stored in column `{}` must be of type `str`. \
But found: `{}`".format(
self._text_key, type(v)
)
page_content = v
elif k == "dist":
assert isinstance(
v, float
), "Computed distance between vectors must of type `float`. \
But found {}".format(
type(v)
)
score = v
elif k not in ["_id", "_event_time", "_meta"]:
# These columns are populated by Rockset when documents are
# inserted. No need to return them in metadata dict.
metadata[k] = v
finalResult.append(
(Document(page_content=page_content, metadata=metadata), score)
)
return finalResult
# Helper functions
def _build_query_sql(
self,
query_embedding: List[float],
distance_func: DistanceFunction,
k: int = 4,
where_str: Optional[str] = None,
) -> str:
"""Builds Rockset SQL query to query similar vectors to query_vector"""
q_embedding_str = ",".join(map(str, query_embedding))
distance_str = f"""{distance_func.value}({self._embedding_key}, \
[{q_embedding_str}]) as dist"""
where_str = f"WHERE {where_str}\n" if where_str else ""
return f"""\
SELECT * EXCEPT({self._embedding_key}), {distance_str}
FROM {self._collection_name}
{where_str}\
ORDER BY dist {distance_func.order_by()}
LIMIT {str(k)}
"""
def _write_documents_to_rockset(self, batch: List[dict]) -> List[str]:
add_doc_res = self._client.Documents.add_documents(
collection=self._collection_name, data=batch
)
return [doc_status._id for doc_status in add_doc_res.data]
[docs] def delete_texts(self, ids: List[str]) -> None:
"""Delete a list of docs from the Rockset collection"""
try:
from rockset.models import DeleteDocumentsRequestData
except ImportError:
raise ImportError(
"Could not import rockset client python package. "
"Please install it with `pip install rockset`."
)
self._client.Documents.delete_documents(
collection=self._collection_name,
data=[DeleteDocumentsRequestData(id=i) for i in ids],
) | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html |
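# A hedged sketch of querying the store built in the class docstring example above
# (`vectorstore`). The metadata columns referenced in `where_str` are illustrative
# and must exist in your collection; never expose `where_str` to end users.
results = vectorstore.similarity_search_with_relevance_scores(
    "games under seventy dollars",
    k=4,
    distance_func=Rockset.DistanceFunction.COSINE_SIM,
    where_str="price<=70.0 AND brand='Nintendo'",
)
for doc, score in results:
    print(score, doc.page_content)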
46fd788a-4565-4e92-80dc-b71052deb56f | Source code for langchain.vectorstores.base
"""Interface for vector stores."""
from __future__ import annotations
import asyncio
import warnings
from abc import ABC, abstractmethod
from functools import partial
from typing import (
Any,
ClassVar,
Collection,
Dict,
Iterable,
List,
Optional,
Tuple,
Type,
TypeVar,
)
from pydantic import BaseModel, Field, root_validator
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.schema import BaseRetriever
VST = TypeVar("VST", bound="VectorStore")
[docs]class VectorStore(ABC):
"""Interface for vector stores."""
[docs] @abstractmethod
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
kwargs: vectorstore specific parameters
Returns:
List of ids from adding the texts into the vectorstore.
"""
[docs] def delete(self, ids: List[str]) -> Optional[bool]:
"""Delete by vector ID.
Args:
ids: List of ids to delete.
Returns:
Optional[bool]: True if deletion is successful,
False otherwise, None if not implemented.
"""
raise NotImplementedError(
"delete_by_id method must be implemented by subclass."
)
[docs] async def aadd_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore."""
raise NotImplementedError
[docs] def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
"""Run more documents through the embeddings and add to the vectorstore.
Args:
documents (List[Document]): Documents to add to the vectorstore.
Returns:
List[str]: List of IDs of the added texts.
"""
# TODO: Handle the case where the user doesn't provide ids on the Collection
texts = [doc.page_content for doc in documents]
metadatas = [doc.metadata for doc in documents]
return self.add_texts(texts, metadatas, **kwargs)
[docs] async def aadd_documents(
self, documents: List[Document], **kwargs: Any
) -> List[str]:
"""Run more documents through the embeddings and add to the vectorstore.
Args:
documents (List[Document]): Documents to add to the vectorstore.
Returns:
List[str]: List of IDs of the added texts.
"""
texts = [doc.page_content for doc in documents]
metadatas = [doc.metadata for doc in documents]
return await self.aadd_texts(texts, metadatas, **kwargs)
[docs] def search(self, query: str, search_type: str, **kwargs: Any) -> List[Document]:
"""Return docs most similar to query using specified search type."""
if search_type == "similarity":
return self.similarity_search(query, **kwargs)
elif search_type == "mmr":
return self.max_marginal_relevance_search(query, **kwargs)
else:
raise ValueError(
f"search_type of {search_type} not allowed. Expected "
"search_type to be 'similarity' or 'mmr'."
)
[docs] async def asearch(
self, query: str, search_type: str, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query using specified search type."""
if search_type == "similarity":
return await self.asimilarity_search(query, **kwargs)
elif search_type == "mmr":
return await self.amax_marginal_relevance_search(query, **kwargs)
else:
raise ValueError(
f"search_type of {search_type} not allowed. Expected "
"search_type to be 'similarity' or 'mmr'."
)
[docs] @abstractmethod
def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query."""
[docs] def similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Args:
query: input text
k: Number of Documents to return. Defaults to 4.
**kwargs: kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns:
List of Tuples of (doc, similarity_score)
"""
docs_and_similarities = self._similarity_search_with_relevance_scores(
query, k=k, **kwargs
)
if any(
similarity < 0.0 or similarity > 1.0
for _, similarity in docs_and_similarities
):
warnings.warn(
"Relevance scores must be between"
f" 0 and 1, got {docs_and_similarities}"
)
score_threshold = kwargs.get("score_threshold")
if score_threshold is not None:
docs_and_similarities = [
(doc, similarity)
for doc, similarity in docs_and_similarities
if similarity >= score_threshold
]
if len(docs_and_similarities) == 0:
warnings.warn(
"No relevant docs were retrieved using the relevance score"
f" threshold {score_threshold}"
)
return docs_and_similarities
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
"""
raise NotImplementedError
[docs] async def asimilarity_search_with_relevance_scores(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query."""
# This is a temporary workaround to make the similarity search
# asynchronous. The proper solution is to make the similarity search
# asynchronous in the vector store implementations.
func = partial(self.similarity_search_with_relevance_scores, query, k, **kwargs)
return await asyncio.get_event_loop().run_in_executor(None, func)
[docs] async def asimilarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query."""
# This is a temporary workaround to make the similarity search
# asynchronous. The proper solution is to make the similarity search
# asynchronous in the vector store implementations.
func = partial(self.similarity_search, query, k, **kwargs)
return await asyncio.get_event_loop().run_in_executor(None, func)
[docs] def similarity_search_by_vector(
self, embedding: List[float], k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to embedding vector.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query vector.
"""
raise NotImplementedError
[docs] async def asimilarity_search_by_vector(
self, embedding: List[float], k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to embedding vector."""
# This is a temporary workaround to make the similarity search
# asynchronous. The proper solution is to make the similarity search
# asynchronous in the vector store implementations.
func = partial(self.similarity_search_by_vector, embedding, k, **kwargs)
return await asyncio.get_event_loop().run_in_executor(None, func)
[docs] def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
raise NotImplementedError
[docs] async def amax_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance."""
# This is a temporary workaround to make the similarity search
# asynchronous. The proper solution is to make the similarity search
# asynchronous in the vector store implementations.
func = partial(
self.max_marginal_relevance_search, query, k, fetch_k, lambda_mult, **kwargs
)
return await asyncio.get_event_loop().run_in_executor(None, func)
[docs] def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
raise NotImplementedError
[docs] async def amax_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance."""
raise NotImplementedError
[docs] @classmethod
def from_documents(
cls: Type[VST],
documents: List[Document],
embedding: Embeddings,
**kwargs: Any,
) -> VST:
"""Return VectorStore initialized from documents and embeddings."""
texts = [d.page_content for d in documents]
metadatas = [d.metadata for d in documents]
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
[docs] @classmethod
async def afrom_documents(
cls: Type[VST],
documents: List[Document],
embedding: Embeddings,
**kwargs: Any,
) -> VST:
"""Return VectorStore initialized from documents and embeddings."""
texts = [d.page_content for d in documents]
metadatas = [d.metadata for d in documents]
return await cls.afrom_texts(texts, embedding, metadatas=metadatas, **kwargs)
[docs] @classmethod
@abstractmethod
def from_texts(
cls: Type[VST],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> VST:
"""Return VectorStore initialized from texts and embeddings."""
[docs] @classmethod
async def afrom_texts(
cls: Type[VST],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> VST:
"""Return VectorStore initialized from texts and embeddings."""
raise NotImplementedError
[docs] def as_retriever(self, **kwargs: Any) -> VectorStoreRetriever:
return VectorStoreRetriever(vectorstore=self, **kwargs)
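# A hedged, in-memory sketch of the interface above. It implements only the abstract
# methods (add_texts, similarity_search, from_texts) with brute-force cosine
# similarity; the class name and scoring choice are illustrative, not part of LangChain.
import numpy as np  # used only by this sketch


class InMemoryVectorStore(VectorStore):
    def __init__(self, embedding: Embeddings) -> None:
        self._embedding = embedding
        self._vectors: List[List[float]] = []
        self._docs: List[Document] = []

    def add_texts(self, texts, metadatas=None, **kwargs):
        texts = list(texts)
        metadatas = metadatas or [{} for _ in texts]
        ids = []
        for text, meta, vector in zip(
            texts, metadatas, self._embedding.embed_documents(texts)
        ):
            self._vectors.append(vector)
            self._docs.append(Document(page_content=text, metadata=meta))
            ids.append(str(len(self._docs) - 1))
        return ids

    def similarity_search(self, query, k=4, **kwargs):
        q = np.array(self._embedding.embed_query(query))
        scores = []
        for vec in self._vectors:
            v = np.array(vec)
            scores.append(
                float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
            )
        top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        return [self._docs[i] for i in top]

    @classmethod
    def from_texts(cls, texts, embedding, metadatas=None, **kwargs):
        store = cls(embedding)
        store.add_texts(texts, metadatas)
        return store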
class VectorStoreRetriever(BaseRetriever, BaseModel):
vectorstore: VectorStore
search_type: str = "similarity"
search_kwargs: dict = Field(default_factory=dict)
allowed_search_types: ClassVar[Collection[str]] = (
"similarity",
"similarity_score_threshold",
"mmr",
)
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
@root_validator()
def validate_search_type(cls, values: Dict) -> Dict:
"""Validate search type."""
search_type = values["search_type"]
if search_type not in cls.allowed_search_types:
raise ValueError(
f"search_type of {search_type} not allowed. Valid values are: "
f"{cls.allowed_search_types}"
)
if search_type == "similarity_score_threshold":
score_threshold = values["search_kwargs"].get("score_threshold")
if (score_threshold is None) or (not isinstance(score_threshold, float)):
raise ValueError(
"`score_threshold` is not specified with a float value(0~1) "
"in `search_kwargs`."
)
return values
def get_relevant_documents(self, query: str) -> List[Document]:
if self.search_type == "similarity":
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
elif self.search_type == "similarity_score_threshold":
docs_and_similarities = (
self.vectorstore.similarity_search_with_relevance_scores(
query, **self.search_kwargs
)
)
docs = [doc for doc, _ in docs_and_similarities]
elif self.search_type == "mmr":
docs = self.vectorstore.max_marginal_relevance_search(
query, **self.search_kwargs
)
else:
raise ValueError(f"search_type of {self.search_type} not allowed.")
return docs
async def aget_relevant_documents(self, query: str) -> List[Document]:
if self.search_type == "similarity":
docs = await self.vectorstore.asimilarity_search(
query, **self.search_kwargs
)
elif self.search_type == "similarity_score_threshold":
docs_and_similarities = (
await self.vectorstore.asimilarity_search_with_relevance_scores(
query, **self.search_kwargs
)
)
docs = [doc for doc, _ in docs_and_similarities]
elif self.search_type == "mmr":
docs = await self.vectorstore.amax_marginal_relevance_search(
query, **self.search_kwargs
)
else:
raise ValueError(f"search_type of {self.search_type} not allowed.")
return docs
def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
"""Add documents to vectorstore."""
return self.vectorstore.add_documents(documents, **kwargs)
async def aadd_documents(
self, documents: List[Document], **kwargs: Any
) -> List[str]:
"""Add documents to vectorstore."""
return await self.vectorstore.aadd_documents(documents, **kwargs) | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html |
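# A hedged usage sketch of the retriever defined above; `store` stands in for any
# concrete VectorStore instance from this package.
retriever = store.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.8, "k": 4},
)
docs = retriever.get_relevant_documents("what is a vector store?")

# Maximal-marginal-relevance retrieval trades relevance against diversity.
mmr_retriever = store.as_retriever(search_type="mmr", search_kwargs={"k": 4, "fetch_k": 20})
mmr_docs = mmr_retriever.get_relevant_documents("what is a vector store?")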
e854f09b-19f2-4b3b-9c98-43e5a46926b0 | Source code for langchain.vectorstores.awadb
"""Wrapper around AwaDB for embedding vectors"""
from __future__ import annotations
import logging
import uuid
from typing import TYPE_CHECKING, Any, Iterable, List, Optional, Tuple, Type
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
# from pydantic import BaseModel, Field, root_validator
if TYPE_CHECKING:
import awadb
logger = logging.getLogger()
DEFAULT_TOPN = 4
[docs]class AwaDB(VectorStore):
"""Interface implemented by AwaDB vector stores."""
_DEFAULT_TABLE_NAME = "langchain_awadb"
def __init__(
self,
table_name: str = _DEFAULT_TABLE_NAME,
embedding_model: Optional[Embeddings] = None,
log_and_data_dir: Optional[str] = None,
client: Optional[awadb.Client] = None,
) -> None:
"""Initialize with AwaDB client."""
try:
import awadb
except ImportError:
raise ValueError(
"Could not import awadb python package. "
"Please install it with `pip install awadb`."
)
if client is not None:
self.awadb_client = client
else:
if log_and_data_dir is not None:
self.awadb_client = awadb.Client(log_and_data_dir)
else:
self.awadb_client = awadb.Client()
if table_name == self._DEFAULT_TABLE_NAME:
table_name += "_"
table_name += str(uuid.uuid4()).split("-")[-1]
self.awadb_client.Create(table_name)
self.table2embeddings: dict[str, Embeddings] = {}
if embedding_model is not None:
self.table2embeddings[table_name] = embedding_model
self.using_table_name = table_name
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
is_duplicate_texts: Optional[bool] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
is_duplicate_texts: Optional whether to duplicate texts.
kwargs: vectorstore specific parameters.
Returns:
List of ids from adding the texts into the vectorstore.
"""
if self.awadb_client is None:
raise ValueError("AwaDB client is None!!!")
embeddings = None
if self.using_table_name in self.table2embeddings:
embeddings = self.table2embeddings[self.using_table_name].embed_documents(
list(texts)
)
return self.awadb_client.AddTexts(
"embedding_text",
"text_embedding",
texts,
embeddings,
metadatas,
is_duplicate_texts,
)
[docs] def load_local(
self,
table_name: str,
**kwargs: Any,
) -> bool:
if self.awadb_client is None:
raise ValueError("AwaDB client is None!!!")
return self.awadb_client.Load(table_name)
[docs] def similarity_search(
self,
query: str,
k: int = DEFAULT_TOPN,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to query."""
if self.awadb_client is None:
raise ValueError("AwaDB client is None!!!")
embedding = None
if self.using_table_name in self.table2embeddings:
embedding = self.table2embeddings[self.using_table_name].embed_query(query)
else:
from awadb import llm_embedding
llm = llm_embedding.LLMEmbedding()
embedding = llm.Embedding(query)
return self.similarity_search_by_vector(embedding, k)
[docs] def similarity_search_with_score(
self,
query: str,
k: int = DEFAULT_TOPN,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
"""
if self.awadb_client is None:
raise ValueError("AwaDB client is None!!!")
embedding = None
if self.using_table_name in self.table2embeddings:
embedding = self.table2embeddings[self.using_table_name].embed_query(query)
else:
from awadb import llm_embedding
llm = llm_embedding.LLMEmbedding()
embedding = llm.Embedding(query)
results: List[Tuple[Document, float]] = []
scores: List[float] = []
retrieval_docs = self.similarity_search_by_vector(embedding, k, scores)
L2_Norm = 0.0
for score in scores:
L2_Norm = L2_Norm + score * score
L2_Norm = pow(L2_Norm, 0.5)
doc_no = 0
for doc in retrieval_docs:
doc_tuple = (doc, 1 - (scores[doc_no] / L2_Norm))
results.append(doc_tuple)
doc_no = doc_no + 1
return results
[docs] def similarity_search_with_relevance_scores(
self,
query: str,
k: int = DEFAULT_TOPN,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
"""
if self.awadb_client is None:
raise ValueError("AwaDB client is None!!!")
embedding = None
if self.using_table_name in self.table2embeddings:
embedding = self.table2embeddings[self.using_table_name].embed_query(query)
show_results = self.awadb_client.Search(embedding, k)
results: List[Tuple[Document, float]] = []
if show_results.__len__() == 0:
return results
scores: List[float] = []
retrieval_docs = self.similarity_search_by_vector(embedding, k, scores)
L2_Norm = 0.0
for score in scores:
L2_Norm = L2_Norm + score * score
L2_Norm = pow(L2_Norm, 0.5)
doc_no = 0
for doc in retrieval_docs:
doc_tuple = (doc, 1 - scores[doc_no] / L2_Norm)
results.append(doc_tuple)
doc_no = doc_no + 1
return results
[docs] def similarity_search_by_vector(
self,
embedding: Optional[List[float]] = None,
k: int = DEFAULT_TOPN,
scores: Optional[list] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to embedding vector.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query vector.
"""
if self.awadb_client is None:
raise ValueError("AwaDB client is None!!!")
results: List[Document] = []
if embedding is None:
return results
show_results = self.awadb_client.Search(embedding, k)
if show_results.__len__() == 0:
return results
for item_detail in show_results[0]["ResultItems"]:
content = ""
meta_data = {}
for item_key in item_detail:
if (
item_key == "Field@0"
and self.using_table_name in self.table2embeddings
): # text for the document
content = item_detail[item_key]
elif item_key == "embedding_text":
content = item_detail[item_key]
elif (
item_key == "Field@1" or item_key == "text_embedding"
): # embedding field for the document
continue
elif item_key == "score": # L2 distance
if scores is not None:
score = item_detail[item_key]
scores.append(score)
else:
meta_data[item_key] = item_detail[item_key]
results.append(Document(page_content=content, metadata=meta_data))
return results
[docs] def create_table(
self,
table_name: str,
**kwargs: Any,
) -> bool:
"""Create a new table."""
if self.awadb_client is None:
return False
ret = self.awadb_client.Create(table_name)
if ret:
self.using_table_name = table_name
return ret
[docs] def use(
self,
table_name: str,
**kwargs: Any,
) -> bool:
"""Use the specified table. Don't know the tables, please invoke list_tables."""
if self.awadb_client is None:
return False
ret = self.awadb_client.Use(table_name)
if ret:
self.using_table_name = table_name
return ret
[docs] def list_tables(
self,
**kwargs: Any,
) -> List[str]:
"""List all the tables created by the client."""
if self.awadb_client is None:
return []
return self.awadb_client.ListAllTables()
[docs] def get_current_table(
self,
**kwargs: Any,
) -> str:
"""Get the current table."""
return self.using_table_name
[docs] @classmethod
def from_texts(
cls: Type[AwaDB],
texts: List[str],
embedding: Optional[Embeddings] = None,
metadatas: Optional[List[dict]] = None,
table_name: str = _DEFAULT_TABLE_NAME,
logging_and_data_dir: Optional[str] = None,
client: Optional[awadb.Client] = None,
**kwargs: Any,
) -> AwaDB:
"""Create an AwaDB vectorstore from a raw documents.
Args:
texts (List[str]): List of texts to add to the table.
embedding (Optional[Embeddings]): Embedding function. Defaults to None.
metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.
table_name (str): Name of the table to create.
logging_and_data_dir (Optional[str]): Directory of logging and persistence.
client (Optional[awadb.Client]): AwaDB client
Returns:
AwaDB: AwaDB vectorstore.
"""
awadb_client = cls(
table_name=table_name,
embedding_model=embedding,
log_and_data_dir=logging_and_data_dir,
client=client,
)
awadb_client.add_texts(texts=texts, metadatas=metadatas)
return awadb_client
[docs] @classmethod
def from_documents(
cls: Type[AwaDB],
documents: List[Document],
embedding: Optional[Embeddings] = None,
table_name: str = _DEFAULT_TABLE_NAME,
logging_and_data_dir: Optional[str] = None,
client: Optional[awadb.Client] = None,
**kwargs: Any,
) -> AwaDB:
"""Create an AwaDB vectorstore from a list of documents.
If a logging_and_data_dir is specified, the table will be persisted there.
Args:
documents (List[Document]): List of documents to add to the vectorstore.
embedding (Optional[Embeddings]): Embedding function. Defaults to None.
table_name (str): Name of the table to create.
logging_and_data_dir (Optional[str]): Directory to persist the table.
client (Optional[awadb.Client]): AwaDB client
Returns:
AwaDB: AwaDB vectorstore.
"""
texts = [doc.page_content for doc in documents]
metadatas = [doc.metadata for doc in documents]
return cls.from_texts(
texts=texts,
embedding=embedding,
metadatas=metadatas,
table_name=table_name,
logging_and_data_dir=logging_and_data_dir,
client=client,
) | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html |
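# A minimal usage sketch of the class above. It assumes `pip install awadb` and an
# OPENAI_API_KEY in the environment; texts and metadata are illustrative.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.awadb import AwaDB

store = AwaDB.from_texts(
    texts=["AwaDB is an embedded vector store.", "LangChain wraps many vector stores."],
    embedding=OpenAIEmbeddings(),
    metadatas=[{"source": "a"}, {"source": "b"}],
)
docs = store.similarity_search("what is awadb?", k=2)
docs_and_scores = store.similarity_search_with_score("what is awadb?", k=2)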
c3b8d61c-2db6-4497-9c84-32f596ffd713 | Source code for langchain.vectorstores.milvus
"""Wrapper around the Milvus vector database."""
from __future__ import annotations
import logging
from typing import Any, Iterable, List, Optional, Tuple, Union
from uuid import uuid4
import numpy as np
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
logger = logging.getLogger(__name__)
DEFAULT_MILVUS_CONNECTION = {
"host": "localhost",
"port": "19530",
"user": "",
"password": "",
"secure": False,
}
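# A hedged sketch of overriding the defaults above for a remote or TLS-enabled
# deployment; host, port, and credentials are placeholders.
remote_connection_args = {
    "host": "my-milvus.example.com",
    "port": "19530",
    "user": "milvus_user",
    "password": "milvus_password",
    "secure": True,
}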
[docs]class Milvus(VectorStore):
"""Wrapper around the Milvus vector database."""
def __init__(
self,
embedding_function: Embeddings,
collection_name: str = "LangChainCollection",
connection_args: Optional[dict[str, Any]] = None,
consistency_level: str = "Session",
index_params: Optional[dict] = None,
search_params: Optional[dict] = None,
drop_old: Optional[bool] = False,
):
"""Initialize wrapper around the milvus vector database.
In order to use this you need to have `pymilvus` installed and a
running Milvus/Zilliz Cloud instance.
See the following documentation for how to run a Milvus instance:
https://milvus.io/docs/install_standalone-docker.md
If looking for a hosted Milvus, take a look at this documentation:
https://zilliz.com/cloud
IF USING L2/IP metric IT IS HIGHLY SUGGESTED TO NORMALIZE YOUR DATA.
The connection args used for this class come in the form of a dict,
here are a few of the options:
address (str): The actual address of Milvus
instance. Example address: "localhost:19530"
uri (str): The uri of Milvus instance. Example uri:
"http://randomwebsite:19530",
"tcp:foobarsite:19530",
"https://ok.s3.south.com:19530".
host (str): The host of Milvus instance. Default at "localhost",
PyMilvus will fill in the default host if only port is provided.
port (str/int): The port of Milvus instance. Default at 19530, PyMilvus
will fill in the default port if only host is provided.
user (str): Which user to connect to the Milvus instance as. If user and
password are provided, the related header is added to every RPC call.
password (str): Required when user is provided. The password
corresponding to the user.
secure (bool): Default is false. If set to true, tls will be enabled.
client_key_path (str): If using tls two-way authentication, the
path to the client.key file.
client_pem_path (str): If using tls two-way authentication, the
path to the client.pem file.
ca_pem_path (str): If using tls two-way authentication, the
path to the ca.pem file.
server_pem_path (str): If using tls one-way authentication, the
path to the server.pem file.
server_name (str): If using tls, the common name to use.
Args:
embedding_function (Embeddings): Function used to embed the text.
collection_name (str): Which Milvus collection to use. Defaults to
"LangChainCollection".
connection_args (Optional[dict[str, any]]): The arguments for connection to
Milvus/Zilliz instance. Defaults to DEFAULT_MILVUS_CONNECTION.
consistency_level (str): The consistency level to use for a collection.
Defaults to "Session".
index_params (Optional[dict]): Which index params to use. Defaults to
HNSW/AUTOINDEX depending on service.
search_params (Optional[dict]): Which search params to use. Defaults to
default of index.
drop_old (Optional[bool]): Whether to drop the current collection. Defaults
to False.
"""
try:
from pymilvus import Collection, utility
except ImportError:
raise ValueError(
"Could not import pymilvus python package. "
"Please install it with `pip install pymilvus`."
)
# Default search params when one is not provided.
self.default_search_params = {
"IVF_FLAT": {"metric_type": "L2", "params": {"nprobe": 10}},
"IVF_SQ8": {"metric_type": "L2", "params": {"nprobe": 10}},
"IVF_PQ": {"metric_type": "L2", "params": {"nprobe": 10}},
"HNSW": {"metric_type": "L2", "params": {"ef": 10}},
"RHNSW_FLAT": {"metric_type": "L2", "params": {"ef": 10}},
"RHNSW_SQ": {"metric_type": "L2", "params": {"ef": 10}},
"RHNSW_PQ": {"metric_type": "L2", "params": {"ef": 10}},
"IVF_HNSW": {"metric_type": "L2", "params": {"nprobe": 10, "ef": 10}},
"ANNOY": {"metric_type": "L2", "params": {"search_k": 10}},
"AUTOINDEX": {"metric_type": "L2", "params": {}},
}
self.embedding_func = embedding_function
self.collection_name = collection_name
self.index_params = index_params
self.search_params = search_params
self.consistency_level = consistency_level
# In order for a collection to be compatible, pk needs to be auto-id and int
self._primary_field = "pk"
# In order for compatibility, the text field will need to be called "text"
self._text_field = "text"
# In order for compatibility, the vector field needs to be called "vector"
self._vector_field = "vector"
self.fields: list[str] = []
# Create the connection to the server
if connection_args is None:
connection_args = DEFAULT_MILVUS_CONNECTION
self.alias = self._create_connection_alias(connection_args)
self.col: Optional[Collection] = None
# Grab the existing collection if it exists
if utility.has_collection(self.collection_name, using=self.alias):
self.col = Collection(
self.collection_name,
using=self.alias,
)
# If need to drop old, drop it
if drop_old and isinstance(self.col, Collection):
self.col.drop()
self.col = None
# Initialize the vector store
self._init()
def _create_connection_alias(self, connection_args: dict) -> str:
"""Create the connection to the Milvus server."""
from pymilvus import MilvusException, connections
# Grab the connection arguments that are used for checking existing connection
host: str = connection_args.get("host", None)
port: Union[str, int] = connection_args.get("port", None)
address: str = connection_args.get("address", None)
uri: str = connection_args.get("uri", None)
user = connection_args.get("user", None)
# Order of use is host/port, uri, address
if host is not None and port is not None:
given_address = str(host) + ":" + str(port)
elif uri is not None:
given_address = uri.split("https://")[1]
elif address is not None:
given_address = address
else:
given_address = None
logger.debug("Missing standard address type for reuse atttempt")
# User defaults to empty string when getting connection info
if user is not None:
tmp_user = user
else:
tmp_user = ""
# If a valid address was given, then check if a connection exists
if given_address is not None:
for con in connections.list_connections():
addr = connections.get_connection_addr(con[0])
if (
con[1]
and ("address" in addr)
and (addr["address"] == given_address)
and ("user" in addr)
and (addr["user"] == tmp_user)
):
logger.debug("Using previous connection: %s", con[0])
return con[0]
# Generate a new connection if one doesn't exist
alias = uuid4().hex
try:
connections.connect(alias=alias, **connection_args)
logger.debug("Created new connection using: %s", alias)
return alias
except MilvusException as e:
logger.error("Failed to create new connection using: %s", alias)
raise e
def _init(
self, embeddings: Optional[list] = None, metadatas: Optional[list[dict]] = None
) -> None:
if embeddings is not None:
self._create_collection(embeddings, metadatas)
self._extract_fields()
self._create_index()
self._create_search_params()
self._load()
def _create_collection(
self, embeddings: list, metadatas: Optional[list[dict]] = None
) -> None:
from pymilvus import (
Collection,
CollectionSchema,
DataType,
FieldSchema,
MilvusException,
)
from pymilvus.orm.types import infer_dtype_bydata
# Determine embedding dim
dim = len(embeddings[0])
fields = []
# Determine metadata schema
if metadatas:
# Create FieldSchema for each entry in metadata.
for key, value in metadatas[0].items():
# Infer the corresponding datatype of the metadata
dtype = infer_dtype_bydata(value)
# Datatype isn't compatible
if dtype == DataType.UNKNOWN or dtype == DataType.NONE:
logger.error(
"Failure to create collection, unrecognized dtype for key: %s",
key,
)
raise ValueError(f"Unrecognized datatype for {key}.")
# Datatype is a string/varchar equivalent
elif dtype == DataType.VARCHAR:
fields.append(FieldSchema(key, DataType.VARCHAR, max_length=65_535))
else:
fields.append(FieldSchema(key, dtype))
# Create the text field
fields.append(
FieldSchema(self._text_field, DataType.VARCHAR, max_length=65_535)
)
# Create the primary key field
fields.append(
FieldSchema(
self._primary_field, DataType.INT64, is_primary=True, auto_id=True
)
)
# Create the vector field, supports binary or float vectors
fields.append(
FieldSchema(self._vector_field, infer_dtype_bydata(embeddings[0]), dim=dim)
)
# Create the schema for the collection
schema = CollectionSchema(fields)
# Create the collection
try:
self.col = Collection(
name=self.collection_name,
schema=schema,
consistency_level=self.consistency_level,
using=self.alias,
)
except MilvusException as e:
logger.error(
"Failed to create collection: %s error: %s", self.collection_name, e
)
raise e
def _extract_fields(self) -> None:
"""Grab the existing fields from the Collection"""
from pymilvus import Collection
if isinstance(self.col, Collection):
schema = self.col.schema
for x in schema.fields:
self.fields.append(x.name)
# Since primary field is auto-id, no need to track it
self.fields.remove(self._primary_field)
def _get_index(self) -> Optional[dict[str, Any]]:
"""Return the vector index information if it exists"""
from pymilvus import Collection
if isinstance(self.col, Collection):
for x in self.col.indexes:
if x.field_name == self._vector_field:
return x.to_dict()
return None
def _create_index(self) -> None:
"""Create a index on the collection"""
from pymilvus import Collection, MilvusException
if isinstance(self.col, Collection) and self._get_index() is None:
try:
# If no index params, use a default HNSW based one
if self.index_params is None:
self.index_params = {
"metric_type": "L2",
"index_type": "HNSW",
"params": {"M": 8, "efConstruction": 64},
}
try:
self.col.create_index(
self._vector_field,
index_params=self.index_params,
using=self.alias,
)
# If default did not work, most likely on Zilliz Cloud
except MilvusException:
# Use AUTOINDEX based index
self.index_params = {
"metric_type": "L2",
"index_type": "AUTOINDEX",
"params": {},
}
self.col.create_index(
self._vector_field,
index_params=self.index_params,
using=self.alias,
)
logger.debug(
"Successfully created an index on collection: %s",
self.collection_name,
)
except MilvusException as e:
logger.error(
"Failed to create an index on collection: %s", self.collection_name
)
raise e
def _create_search_params(self) -> None:
"""Generate search params based on the current index type"""
from pymilvus import Collection
if isinstance(self.col, Collection) and self.search_params is None:
index = self._get_index()
if index is not None:
index_type: str = index["index_param"]["index_type"]
metric_type: str = index["index_param"]["metric_type"]
self.search_params = self.default_search_params[index_type]
self.search_params["metric_type"] = metric_type
def _load(self) -> None:
"""Load the collection if available."""
from pymilvus import Collection
if isinstance(self.col, Collection) and self._get_index() is not None:
self.col.load()
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
timeout: Optional[int] = None,
batch_size: int = 1000,
**kwargs: Any,
) -> List[str]:
"""Insert text data into Milvus.
Inserting data when the collection has not been made yet will result
in creating a new collection. The data of the first entity decides
the schema of the new collection: the dim is extracted from the first
embedding and the columns are decided by the first metadata dict.
Metadata keys will need to be present for all inserted values. At
the moment there is no None equivalent in Milvus.
Args:
texts (Iterable[str]): The texts to embed, it is assumed
that they all fit in memory.
metadatas (Optional[List[dict]]): Metadata dicts attached to each of
the texts. Defaults to None.
timeout (Optional[int]): Timeout for each batch insert. Defaults
to None.
batch_size (int, optional): Batch size to use for insertion.
Defaults to 1000.
Raises:
MilvusException: Failure to add texts
Returns:
List[str]: The resulting keys for each inserted element.
"""
from pymilvus import Collection, MilvusException
texts = list(texts)
try:
embeddings = self.embedding_func.embed_documents(texts)
except NotImplementedError:
embeddings = [self.embedding_func.embed_query(x) for x in texts]
if len(embeddings) == 0:
logger.debug("Nothing to insert, skipping.")
return []
# If the collection hasn't been initialized yet, perform all steps to do so
if not isinstance(self.col, Collection):
self._init(embeddings, metadatas)
# Dict to hold all insert columns
insert_dict: dict[str, list] = {
self._text_field: texts,
self._vector_field: embeddings,
}
# Collect the metadata into the insert dict.
if metadatas is not None:
for d in metadatas:
for key, value in d.items():
if key in self.fields:
insert_dict.setdefault(key, []).append(value)
# Total insert count
vectors: list = insert_dict[self._vector_field]
total_count = len(vectors)
pks: list[str] = []
assert isinstance(self.col, Collection)
for i in range(0, total_count, batch_size):
# Grab end index
end = min(i + batch_size, total_count)
# Convert dict to list of lists batch for insertion
insert_list = [insert_dict[x][i:end] for x in self.fields]
# Insert into the collection.
try:
res: Collection
res = self.col.insert(insert_list, timeout=timeout, **kwargs)
pks.extend(res.primary_keys)
except MilvusException as e:
logger.error(
"Failed to insert batch starting at entity: %s/%s", i, total_count
)
raise e
return pks
[docs] def similarity_search(
self,
query: str,
k: int = 4,
param: Optional[dict] = None,
expr: Optional[str] = None,
timeout: Optional[int] = None,
**kwargs: Any,
) -> List[Document]:
"""Perform a similarity search against the query string.
Args:
query (str): The text to search.
k (int, optional): How many results to return. Defaults to 4.
param (dict, optional): The search params for the index type.
Defaults to None.
expr (str, optional): Filtering expression. Defaults to None.
timeout (int, optional): How long to wait before timeout error.
Defaults to None.
kwargs: Collection.search() keyword arguments.
Returns:
List[Document]: Document results for search.
"""
if self.col is None:
logger.debug("No existing collection to search.")
return []
res = self.similarity_search_with_score(
query=query, k=k, param=param, expr=expr, timeout=timeout, **kwargs
)
return [doc for doc, _ in res]
[docs] def similarity_search_by_vector(
self,
embedding: List[float],
k: int = 4,
param: Optional[dict] = None,
expr: Optional[str] = None,
timeout: Optional[int] = None,
**kwargs: Any,
) -> List[Document]:
"""Perform a similarity search against the query string.
Args:
embedding (List[float]): The embedding vector to search.
k (int, optional): How many results to return. Defaults to 4.
param (dict, optional): The search params for the index type.
Defaults to None.
expr (str, optional): Filtering expression. Defaults to None.
timeout (int, optional): How long to wait before timeout error.
Defaults to None.
kwargs: Collection.search() keyword arguments.
Returns:
List[Document]: Document results for search.
"""
if self.col is None:
logger.debug("No existing collection to search.")
return []
res = self.similarity_search_with_score_by_vector(
embedding=embedding, k=k, param=param, expr=expr, timeout=timeout, **kwargs
)
return [doc for doc, _ in res]
[docs] def similarity_search_with_score(
self,
query: str,
k: int = 4,
param: Optional[dict] = None,
expr: Optional[str] = None,
timeout: Optional[int] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Perform a search on a query string and return results with score.
For more information about the search parameters, take a look at the pymilvus
documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Args:
query (str): The text being searched.
k (int, optional): The number of results to return. Defaults to 4.
param (dict): The search params for the specified index.
Defaults to None.
expr (str, optional): Filtering expression. Defaults to None.
timeout (int, optional): How long to wait before timeout error.
Defaults to None.
kwargs: Collection.search() keyword arguments.
Returns:
List[Tuple[Document, float]]: Result doc and score.
"""
if self.col is None:
logger.debug("No existing collection to search.")
return []
# Embed the query text.
embedding = self.embedding_func.embed_query(query)
res = self.similarity_search_with_score_by_vector(
embedding=embedding, k=k, param=param, expr=expr, timeout=timeout, **kwargs
)
return res
[docs] def similarity_search_with_score_by_vector(
self,
embedding: List[float],
k: int = 4,
param: Optional[dict] = None,
expr: Optional[str] = None,
timeout: Optional[int] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Perform a search on a query string and return results with score.
For more information about the search parameters, take a look at the pymilvus
documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Args:
embedding (List[float]): The embedding vector being searched.
k (int, optional): The number of results to return. Defaults to 4.
param (dict): The search params for the specified index.
Defaults to None.
expr (str, optional): Filtering expression. Defaults to None.
timeout (int, optional): How long to wait before timeout error.
Defaults to None.
kwargs: Collection.search() keyword arguments.
Returns:
List[Tuple[Document, float]]: Result doc and score.
"""
if self.col is None:
logger.debug("No existing collection to search.")
return []
if param is None:
param = self.search_params
# Determine result metadata fields.
output_fields = self.fields[:]
output_fields.remove(self._vector_field)
# Perform the search.
res = self.col.search(
data=[embedding],
anns_field=self._vector_field,
param=param,
limit=k,
expr=expr,
output_fields=output_fields,
timeout=timeout,
**kwargs,
)
# Organize results.
ret = []
for result in res[0]:
meta = {x: result.entity.get(x) for x in output_fields}
doc = Document(page_content=meta.pop(self._text_field), metadata=meta)
pair = (doc, result.score)
ret.append(pair)
return ret
[docs] def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
param: Optional[dict] = None,
expr: Optional[str] = None,
timeout: Optional[int] = None,
**kwargs: Any,
) -> List[Document]:
"""Perform a search and return results that are reordered by MMR.
Args:
query (str): The text being searched.
k (int, optional): How many results to give. Defaults to 4.
fetch_k (int, optional): Total results to select k from.
Defaults to 20.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5
param (dict, optional): The search params for the specified index.
Defaults to None.
expr (str, optional): Filtering expression. Defaults to None.
timeout (int, optional): How long to wait before timeout error.
Defaults to None.
kwargs: Collection.search() keyword arguments.
Returns:
List[Document]: Document results for search.
"""
if self.col is None:
logger.debug("No existing collection to search.")
return []
embedding = self.embedding_func.embed_query(query)
return self.max_marginal_relevance_search_by_vector(
embedding=embedding,
k=k,
fetch_k=fetch_k,
lambda_mult=lambda_mult,
param=param,
expr=expr,
timeout=timeout,
**kwargs,
)
[docs] def max_marginal_relevance_search_by_vector(
self,
embedding: list[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
param: Optional[dict] = None,
expr: Optional[str] = None,
timeout: Optional[int] = None,
**kwargs: Any,
) -> List[Document]:
"""Perform a search and return results that are reordered by MMR.
Args:
embedding (List[float]): The embedding vector being searched.
k (int, optional): How many results to give. Defaults to 4.
fetch_k (int, optional): Total results to select k from.
Defaults to 20.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5
param (dict, optional): The search params for the specified index.
Defaults to None.
expr (str, optional): Filtering expression. Defaults to None.
timeout (int, optional): How long to wait before timeout error.
Defaults to None.
kwargs: Collection.search() keyword arguments.
Returns:
List[Document]: Document results for search.
"""
if self.col is None:
logger.debug("No existing collection to search.")
return []
if param is None:
param = self.search_params
# Determine result metadata fields.
output_fields = self.fields[:]
output_fields.remove(self._vector_field)
# Perform the search.
res = self.col.search(
data=[embedding],
anns_field=self._vector_field,
param=param,
limit=fetch_k,
expr=expr,
output_fields=output_fields,
timeout=timeout,
**kwargs,
)
# Organize results.
ids = []
documents = []
scores = []
for result in res[0]:
meta = {x: result.entity.get(x) for x in output_fields}
doc = Document(page_content=meta.pop(self._text_field), metadata=meta)
documents.append(doc)
scores.append(result.score)
ids.append(result.id)
vectors = self.col.query(
expr=f"{self._primary_field} in {ids}",
output_fields=[self._primary_field, self._vector_field],
timeout=timeout,
)
# Reorganize the results from query to match search order.
vectors = {x[self._primary_field]: x[self._vector_field] for x in vectors}
ordered_result_embeddings = [vectors[x] for x in ids]
# Get the new order of results.
new_ordering = maximal_marginal_relevance(
np.array(embedding), ordered_result_embeddings, k=k, lambda_mult=lambda_mult
)
# Reorder the values and return.
ret = []
for x in new_ordering:
# Function can return -1 index
if x == -1:
break
else:
ret.append(documents[x])
return ret
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
collection_name: str = "LangChainCollection",
connection_args: dict[str, Any] = DEFAULT_MILVUS_CONNECTION,
consistency_level: str = "Session",
index_params: Optional[dict] = None,
search_params: Optional[dict] = None,
drop_old: bool = False,
**kwargs: Any,
) -> Milvus:
"""Create a Milvus collection, indexes it with HNSW, and insert data.
Args:
texts (List[str]): Text data.
embedding (Embeddings): Embedding function.
metadatas (Optional[List[dict]]): Metadata for each text if it exists.
Defaults to None.
collection_name (str, optional): Collection name to use. Defaults to
"LangChainCollection".
connection_args (dict[str, Any], optional): Connection args to use. Defaults
to DEFAULT_MILVUS_CONNECTION.
consistency_level (str, optional): Which consistency level to use. Defaults
to "Session".
index_params (Optional[dict], optional): Which index_params to use. Defaults
to None.
search_params (Optional[dict], optional): Which search params to use.
Defaults to None.
drop_old (Optional[bool], optional): Whether to drop the collection with
that name if it exists. Defaults to False.
Returns:
Milvus: Milvus Vector Store
"""
vector_db = cls(
embedding_function=embedding,
collection_name=collection_name,
connection_args=connection_args,
consistency_level=consistency_level,
index_params=index_params,
search_params=search_params,
drop_old=drop_old,
**kwargs,
)
vector_db.add_texts(texts=texts, metadatas=metadatas)
return vector_db
https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html
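For orientation, here is a minimal end-to-end sketch of the flow implemented above: ``from_texts`` builds the collection and its HNSW index, ``similarity_search`` embeds the query and searches it, and ``max_marginal_relevance_search`` re-ranks a larger candidate set for diversity. The connection parameters, sample texts, and the use of OpenAIEmbeddings are illustrative assumptions, not part of the module.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Milvus

# Assumes a Milvus server reachable at localhost:19530 and an OpenAI API key
# in the environment; any Embeddings implementation could be substituted.
texts = [
    "Milvus stores embedding vectors.",
    "LangChain wraps several vector stores.",
    "HNSW is an approximate nearest neighbor index.",
]
metadatas = [{"source": f"doc-{i}"} for i in range(len(texts))]

vector_db = Milvus.from_texts(
    texts,
    OpenAIEmbeddings(),
    metadatas=metadatas,
    collection_name="LangChainCollection",
    connection_args={"host": "localhost", "port": "19530"},
)

# Plain similarity search returns the k closest documents.
docs = vector_db.similarity_search("How are vectors stored?", k=2)

# MMR fetches fetch_k candidates, then re-orders them for diversity.
diverse_docs = vector_db.max_marginal_relevance_search(
    "How are vectors stored?", k=2, fetch_k=10
)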
Source code for langchain.vectorstores.elastic_vector_search
"""Wrapper around Elasticsearch vector database."""
from __future__ import annotations
import uuid
from abc import ABC
from typing import (
TYPE_CHECKING,
Any,
Dict,
Iterable,
List,
Mapping,
Optional,
Tuple,
Union,
)
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_env
from langchain.vectorstores.base import VectorStore
if TYPE_CHECKING:
from elasticsearch import Elasticsearch
def _default_text_mapping(dim: int) -> Dict:
return {
"properties": {
"text": {"type": "text"},
"vector": {"type": "dense_vector", "dims": dim},
}
}
def _default_script_query(query_vector: List[float], filter: Optional[dict]) -> Dict:
if filter:
((key, value),) = filter.items()
filter = {"match": {f"metadata.{key}.keyword": f"{value}"}}
else:
filter = {"match_all": {}}
return {
"script_score": {
"query": filter,
"script": {
"source": "cosineSimilarity(params.query_vector, 'vector') + 1.0",
"params": {"query_vector": query_vector},
},
}
}
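# --- Illustrative sketch, not part of the module source ---
# What the two helpers above produce for a (hypothetical) 3-dimensional
# embedding and a single-key metadata filter:
example_mapping = _default_text_mapping(3)
# {"properties": {"text": {"type": "text"},
#                 "vector": {"type": "dense_vector", "dims": 3}}}
example_query = _default_script_query([0.1, 0.2, 0.3], filter={"topic": "search"})
# {"script_score": {
#     "query": {"match": {"metadata.topic.keyword": "search"}},
#     "script": {"source": "cosineSimilarity(params.query_vector, 'vector') + 1.0",
#                "params": {"query_vector": [0.1, 0.2, 0.3]}}}}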
# ElasticVectorSearch is a concrete implementation of the abstract base class
# VectorStore, which defines a common interface for all vector database
# implementations. By inheriting from the ABC class, ElasticVectorSearch can be
# defined as an abstract base class itself, allowing the creation of subclasses with
# their own specific implementations. If you plan to subclass ElasticVectorSearch,
# you can inherit from it and define your own implementation of the necessary methods
# and attributes.
[docs]class ElasticVectorSearch(VectorStore, ABC):
"""Wrapper around Elasticsearch as a vector database.
To connect to an Elasticsearch instance that does not require
login credentials, pass the Elasticsearch URL and index name along with the
embedding object to the constructor.
Example:
.. code-block:: python
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url="http://localhost:9200",
index_name="test_index",
embedding=embedding
)
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the "Deployments" page.
To obtain your Elastic Cloud password for the default "elastic" user:
1. Log in to the Elastic Cloud console at https://cloud.elastic.co
2. Go to "Security" > "Users"
3. Locate the "elastic" user and click "Edit"
4. Click "Reset password"
5. Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Example:
.. code-block:: python
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
elasticsearch_url = f"https://username:password@{elastic_host}:9243"
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url=elasticsearch_url,
index_name="test_index",
embedding=embedding
)
Args:
elasticsearch_url (str): The URL for the Elasticsearch instance.
index_name (str): The name of the Elasticsearch index for the embeddings.
embedding (Embeddings): An object that provides the ability to embed text.
It should be an instance of a class that subclasses the Embeddings
abstract base class, such as OpenAIEmbeddings()
Raises:
ImportError: If the elasticsearch python package is not installed.
"""
def __init__(
self,
elasticsearch_url: str,
index_name: str,
embedding: Embeddings,
*,
ssl_verify: Optional[Dict[str, Any]] = None,
):
"""Initialize with necessary components."""
try:
import elasticsearch
except ImportError:
raise ImportError(
"Could not import elasticsearch python package. "
"Please install it with `pip install elasticsearch`."
)
self.embedding = embedding
self.index_name = index_name
_ssl_verify = ssl_verify or {}
try:
self.client = elasticsearch.Elasticsearch(elasticsearch_url, **_ssl_verify)
except ValueError as e:
raise ValueError(
f"Your elasticsearch client string is mis-formatted. Got error: {e} "
)
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
refresh_indices: bool = True,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
refresh_indices: Whether to refresh the Elasticsearch indices
after adding the texts. Defaults to True.
Returns:
List of ids from adding the texts into the vectorstore.
"""
try:
from elasticsearch.exceptions import NotFoundError
from elasticsearch.helpers import bulk
except ImportError:
raise ImportError(
"Could not import elasticsearch python package. "
"Please install it with `pip install elasticsearch`."
)
requests = []
ids = ids or [str(uuid.uuid4()) for _ in texts]
embeddings = self.embedding.embed_documents(list(texts))
dim = len(embeddings[0])
mapping = _default_text_mapping(dim)
# check to see if the index already exists
try:
self.client.indices.get(index=self.index_name)
except NotFoundError:
# TODO would be nice to create index before embedding,
# just to save expensive steps for last
self.create_index(self.client, self.index_name, mapping)
for i, text in enumerate(texts):
metadata = metadatas[i] if metadatas else {}
request = {
"_op_type": "index",
"_index": self.index_name,
"vector": embeddings[i],
"text": text,
"metadata": metadata,
"_id": ids[i],
}
requests.append(request)
bulk(self.client, requests)
if refresh_indices:
self.client.indices.refresh(index=self.index_name)
return ids
[docs] def similarity_search(
self, query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query.
"""
docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
documents = [d[0] for d in docs_and_scores]
return documents
[docs] def similarity_search_with_score(
self, query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query.
"""
embedding = self.embedding.embed_query(query)
script_query = _default_script_query(embedding, filter)
response = self.client_search(
self.client, self.index_name, script_query, size=k
)
hits = [hit for hit in response["hits"]["hits"]]
docs_and_scores = [
(
Document(
page_content=hit["_source"]["text"],
metadata=hit["_source"]["metadata"],
),
hit["_score"],
)
for hit in hits
]
return docs_and_scores
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
elasticsearch_url: Optional[str] = None,
index_name: Optional[str] = None,
refresh_indices: bool = True,
**kwargs: Any,
) -> ElasticVectorSearch:
"""Construct ElasticVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Creates a new index for the embeddings in the Elasticsearch instance.
3. Adds the documents to the newly created Elasticsearch index.
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch.from_texts(
texts,
embeddings,
elasticsearch_url="http://localhost:9200"
)
"""
elasticsearch_url = elasticsearch_url or get_from_env(
"elasticsearch_url", "ELASTICSEARCH_URL"
)
index_name = index_name or uuid.uuid4().hex
vectorsearch = cls(elasticsearch_url, index_name, embedding, **kwargs)
vectorsearch.add_texts(
texts, metadatas=metadatas, refresh_indices=refresh_indices
)
return vectorsearch
[docs] def create_index(self, client: Any, index_name: str, mapping: Dict) -> None:
version_num = client.info()["version"]["number"][0]
version_num = int(version_num)
if version_num >= 8:
client.indices.create(index=index_name, mappings=mapping)
else:
client.indices.create(index=index_name, body={"mappings": mapping})
[docs] def client_search(
self, client: Any, index_name: str, script_query: Dict, size: int
) -> Any:
version_num = client.info()["version"]["number"][0]
version_num = int(version_num)
if version_num >= 8:
response = client.search(index=index_name, query=script_query, size=size)
else:
response = client.search(
index=index_name, body={"query": script_query, "size": size}
)
return response
[docs] def delete(self, ids: List[str]) -> None:
"""Delete by vector IDs.
Args:
ids: List of ids to delete.
"""
# TODO: Check if this can be done in bulk
for id in ids:
self.client.delete(index=self.index_name, id=id)
class ElasticKnnSearch(ElasticVectorSearch):
"""
A class for performing k-Nearest Neighbors (k-NN) search on an Elasticsearch index.
The class is designed for a text search scenario where documents are text strings
and their embeddings are vector representations of those strings.
"""
def __init__(
self,
index_name: str,
embedding: Embeddings,
es_connection: Optional["Elasticsearch"] = None,
es_cloud_id: Optional[str] = None,
es_user: Optional[str] = None,
es_password: Optional[str] = None,
vector_query_field: Optional[str] = "vector",
query_field: Optional[str] = "text",
):
"""
Initializes an instance of the ElasticKnnSearch class and sets up the
Elasticsearch client.
Args:
index_name: The name of the Elasticsearch index.
embedding: An instance of the Embeddings class, used to generate vector
representations of text strings.
es_connection: An existing Elasticsearch connection.
es_cloud_id: The Cloud ID of the Elasticsearch instance. Required if
creating a new connection.
es_user: The username for the Elasticsearch instance. Required if
creating a new connection.
es_password: The password for the Elasticsearch instance. Required if
creating a new connection.
"""
try:
import elasticsearch
except ImportError:
raise ImportError(
"Could not import elasticsearch python package. "
"Please install it with `pip install elasticsearch`."
)
self.embedding = embedding
self.index_name = index_name
self.query_field = query_field
self.vector_query_field = vector_query_field
# If a pre-existing Elasticsearch connection is provided, use it.
if es_connection is not None:
self.client = es_connection
else:
# If credentials for a new Elasticsearch connection are provided,
# create a new connection.
if es_cloud_id and es_user and es_password:
self.client = elasticsearch.Elasticsearch(
cloud_id=es_cloud_id, basic_auth=(es_user, es_password)
)
else:
raise ValueError(
"""Either provide a pre-existing Elasticsearch connection, \
or valid credentials for creating a new connection."""
)
@staticmethod
def _default_knn_mapping(dims: int) -> Dict:
"""Generates a default index mapping for kNN search."""
return {
"properties": {
"text": {"type": "text"},
"vector": {
"type": "dense_vector",
"dims": dims,
"index": True,
"similarity": "dot_product",
},
}
}
def _default_knn_query(
self,
query_vector: Optional[List[float]] = None,
query: Optional[str] = None,
model_id: Optional[str] = None,
k: Optional[int] = 10,
num_candidates: Optional[int] = 10,
) -> Dict:
knn: Dict = {
"field": self.vector_query_field,
"k": k,
"num_candidates": num_candidates,
}
# Case 1: `query_vector` is provided, but not `model_id` -> use query_vector
if query_vector and not model_id:
knn["query_vector"] = query_vector
# Case 2: `query` and `model_id` are provided, -> use query_vector_builder
elif query and model_id:
knn["query_vector_builder"] = {
"text_embedding": {
"model_id": model_id, # use 'model_id' argument
"model_text": query, # use 'query' argument
}
}
else:
raise ValueError(
"Either `query_vector` or `model_id` must be provided, but not both."
)
return knn
def knn_search(
self,
query: Optional[str] = None,
k: Optional[int] = 10,
query_vector: Optional[List[float]] = None,
model_id: Optional[str] = None,
size: Optional[int] = 10,
source: Optional[bool] = True,
fields: Optional[
Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...], None]
] = None,
) -> Dict:
"""
Performs a k-nearest neighbor (k-NN) search on the Elasticsearch index.
The search can be conducted using either a raw query vector or a model ID.
The method first generates the body of the search query, which can be
interpreted by Elasticsearch. It then performs the k-NN search on the
Elasticsearch index and returns the results.
Args:
query: The query or queries to be used for the search. Required if
`query_vector` is not provided.
k: The number of nearest neighbors to return. Defaults to 10.
query_vector: The query vector to be used for the search. Required if
`query` is not provided.
model_id: The ID of the model to use for generating the query vector, if
`query` is provided.
size: The number of search hits to return. Defaults to 10.
source: Whether to include the source of each hit in the results.
fields: The fields to include in the source of each hit. If None, all
fields are included.
vector_query_field: Field name to use in knn search if not default 'vector'
Returns:
The search results.
Raises:
ValueError: If neither `query_vector` nor `model_id` is provided, or if
both are provided.
"""
knn_query_body = self._default_knn_query(
query_vector=query_vector, query=query, model_id=model_id, k=k
)
# Perform the kNN search on the Elasticsearch index and return the results.
res = self.client.search(
index=self.index_name,
knn=knn_query_body,
size=size,
source=source,
fields=fields,
)
return dict(res)
def knn_hybrid_search(
self,
query: Optional[str] = None,
k: Optional[int] = 10,
query_vector: Optional[List[float]] = None,
model_id: Optional[str] = None,
size: Optional[int] = 10,
source: Optional[bool] = True,
knn_boost: Optional[float] = 0.9,
query_boost: Optional[float] = 0.1,
fields: Optional[
Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...], None]
] = None,
) -> Dict[Any, Any]:
"""Performs a hybrid k-nearest neighbor (k-NN) and text-based search on the
Elasticsearch index.
The search can be conducted using either a raw query vector or a model ID.
The method first generates the body of the k-NN search query and the
text-based query, which can be interpreted by Elasticsearch. It then
performs the hybrid search on the Elasticsearch index and returns the
results.
Args:
query: The query or queries to be used for the search. Required if
`query_vector` is not provided.
k: The number of nearest neighbors to return. Defaults to 10.
query_vector: The query vector to be used for the search. Required if
`query` is not provided.
model_id: The ID of the model to use for generating the query vector, if
`query` is provided.
size: The number of search hits to return. Defaults to 10.
source: Whether to include the source of each hit in the results.
knn_boost: The boost factor for the k-NN part of the search.
query_boost: The boost factor for the text-based part of the search.
fields: The fields to include in the source of each hit. If None, all
fields are included. Defaults to None.
vector_query_field: Field name to use in knn search if not default 'vector'
query_field: Field name to use in search if not default 'text'
Returns:
The search results.
Raises:
ValueError: If neither `query_vector` nor `model_id` is provided, or if
both are provided.
"""
knn_query_body = self._default_knn_query(
query_vector=query_vector, query=query, model_id=model_id, k=k
)
# Modify the knn_query_body to add a "boost" parameter
knn_query_body["boost"] = knn_boost
# Generate the body of the standard Elasticsearch query
match_query_body = {
"match": {self.query_field: {"query": query, "boost": query_boost}}
}
# Perform the hybrid search on the Elasticsearch index and return the results.
res = self.client.search(
index=self.index_name,
query=match_query_body,
knn=knn_query_body,
fields=fields,
size=size,
source=source,
)
return dict(res)
https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html
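A hedged usage sketch for the kNN classes above. The cloud ID, credentials, index name, and deployed text-embedding model ID are placeholders; the index is assumed to carry a ``dense_vector`` mapping compatible with ``_default_knn_mapping``.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch

knn_store = ElasticKnnSearch(
    index_name="test_knn_index",
    embedding=OpenAIEmbeddings(),
    es_cloud_id="<your-cloud-id>",
    es_user="elastic",
    es_password="<your-password>",
)

# Pure kNN search: embed the query locally and pass the raw vector.
query_vector = knn_store.embedding.embed_query("what is a dense vector?")
knn_results = knn_store.knn_search(query_vector=query_vector, k=5)

# Hybrid search: a server-side model (model_id) builds the query vector, and a
# boosted text match on the query string is blended into the score.
hybrid_results = knn_store.knn_hybrid_search(
    query="what is a dense vector?",
    model_id="<your-text-embedding-model-id>",
    k=5,
    knn_boost=0.7,
    query_boost=0.3,
)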
Source code for langchain.vectorstores.mongodb_atlas
from __future__ import annotations
import logging
from typing import (
TYPE_CHECKING,
Any,
Dict,
Generator,
Iterable,
List,
Optional,
Tuple,
TypeVar,
Union,
)
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
if TYPE_CHECKING:
from pymongo.collection import Collection
MongoDBDocumentType = TypeVar("MongoDBDocumentType", bound=Dict[str, Any])
logger = logging.getLogger(__name__)
DEFAULT_INSERT_BATCH_SIZE = 100
[docs]class MongoDBAtlasVectorSearch(VectorStore):
"""Wrapper around MongoDB Atlas Vector Search.
To use, you should have both:
- the ``pymongo`` python package installed
- a connection string associated with a MongoDB Atlas Cluster having deployed an
Atlas Search index
Example:
.. code-block:: python
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.embeddings.openai import OpenAIEmbeddings
from pymongo import MongoClient
mongo_client = MongoClient("<YOUR-CONNECTION-STRING>")
collection = mongo_client["<db_name>"]["<collection_name>"]
embeddings = OpenAIEmbeddings()
vectorstore = MongoDBAtlasVectorSearch(collection, embeddings)
"""
def __init__(
self,
collection: Collection[MongoDBDocumentType],
embedding: Embeddings,
*,
index_name: str = "default",
text_key: str = "text",
embedding_key: str = "embedding",
):
"""
Args:
collection: MongoDB collection to add the texts to.
embedding: Text embedding model to use.
text_key: MongoDB field that will contain the text for each
document.
embedding_key: MongoDB field that will contain the embedding for
each document.
"""
self._collection = collection
self._embedding = embedding
self._index_name = index_name
self._text_key = text_key
self._embedding_key = embedding_key
[docs] @classmethod
def from_connection_string(
cls,
connection_string: str,
namespace: str,
embedding: Embeddings,
**kwargs: Any,
) -> MongoDBAtlasVectorSearch:
try:
from pymongo import MongoClient
except ImportError:
raise ImportError(
"Could not import pymongo, please install it with "
"`pip install pymongo`."
)
client: MongoClient = MongoClient(connection_string)
db_name, collection_name = namespace.split(".")
collection = client[db_name][collection_name]
return cls(collection, embedding, **kwargs)
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[Dict[str, Any]]] = None,
**kwargs: Any,
) -> List:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
Returns:
List of ids from adding the texts into the vectorstore.
"""
batch_size = kwargs.get("batch_size", DEFAULT_INSERT_BATCH_SIZE)
_metadatas: Union[List, Generator] = metadatas or ({} for _ in texts)
texts_batch = []
metadatas_batch = []
result_ids = []
for i, (text, metadata) in enumerate(zip(texts, _metadatas)):
texts_batch.append(text)
metadatas_batch.append(metadata)
if (i + 1) % batch_size == 0:
result_ids.extend(self._insert_texts(texts_batch, metadatas_batch))
texts_batch = []
metadatas_batch = []
if texts_batch:
result_ids.extend(self._insert_texts(texts_batch, metadatas_batch))
return result_ids
def _insert_texts(self, texts: List[str], metadatas: List[Dict[str, Any]]) -> List:
if not texts:
return []
# Embed and create the documents
embeddings = self._embedding.embed_documents(texts)
to_insert = [
{self._text_key: t, self._embedding_key: embedding, **m}
for t, m, embedding in zip(texts, metadatas, embeddings)
]
# insert the documents in MongoDB Atlas
insert_result = self._collection.insert_many(to_insert)
return insert_result.inserted_ids
[docs] def similarity_search_with_score(
self,
query: str,
*,
k: int = 4,
pre_filter: Optional[dict] = None,
post_filter_pipeline: Optional[List[Dict]] = None,
) -> List[Tuple[Document, float]]:
"""Return MongoDB documents most similar to query, along with scores.
Use the knnBeta Operator available in MongoDB Atlas Search
This feature is in early access and available only for evaluation purposes, to
validate functionality, and to gather feedback from a small closed group of
early access users. It is not recommended for production deployments as we
may introduce breaking changes.
For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta
Args:
query: Text to look up documents similar to.
k: Optional Number of Documents to return. Defaults to 4.
pre_filter: Optional Dictionary of argument(s) to prefilter on document
fields.
post_filter_pipeline: Optional Pipeline of MongoDB aggregation stages
following the knnBeta search.
Returns:
List of Documents most similar to the query and score for each
"""
knn_beta = {
"vector": self._embedding.embed_query(query),
"path": self._embedding_key,
"k": k,
}
if pre_filter:
knn_beta["filter"] = pre_filter
pipeline = [
{
"$search": {
"index": self._index_name,
"knnBeta": knn_beta,
}
},
{"$project": {"score": {"$meta": "searchScore"}, self._embedding_key: 0}},
]
if post_filter_pipeline is not None:
pipeline.extend(post_filter_pipeline)
cursor = self._collection.aggregate(pipeline)
docs = []
for res in cursor:
text = res.pop(self._text_key)
score = res.pop("score")
docs.append((Document(page_content=text, metadata=res), score))
return docs
[docs] def similarity_search(
self,
query: str,
k: int = 4,
pre_filter: Optional[dict] = None,
post_filter_pipeline: Optional[List[Dict]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return MongoDB documents most similar to query.
Use the knnBeta Operator available in MongoDB Atlas Search
This feature is in early access and available only for evaluation purposes, to
validate functionality, and to gather feedback from a small closed group of
early access users. It is not recommended for production deployments as we may
introduce breaking changes.
For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta
Args:
query: Text to look up documents similar to.
k: Optional Number of Documents to return. Defaults to 4.
pre_filter: Optional Dictionary of argument(s) to prefilter on document
fields.
post_filter_pipeline: Optional Pipeline of MongoDB aggregation stages
following the knnBeta search.
Returns:
List of Documents most similar to the query.
"""
docs_and_scores = self.similarity_search_with_score(
query,
k=k,
pre_filter=pre_filter,
post_filter_pipeline=post_filter_pipeline,
)
return [doc for doc, _ in docs_and_scores]
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
collection: Optional[Collection[MongoDBDocumentType]] = None,
**kwargs: Any,
) -> MongoDBAtlasVectorSearch:
"""Construct MongoDBAtlasVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Adds the documents to a provided MongoDB Atlas Vector Search index
(Lucene)
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from pymongo import MongoClient
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.embeddings import OpenAIEmbeddings
mongo_client = MongoClient("<YOUR-CONNECTION-STRING>")
collection = mongo_client["<db_name>"]["<collection_name>"]
embeddings = OpenAIEmbeddings()
vectorstore = MongoDBAtlasVectorSearch.from_texts(
texts,
embeddings,
metadatas=metadatas,
collection=collection
)
"""
if collection is None:
raise ValueError("Must provide 'collection' named parameter.")
vecstore = cls(collection, embedding, **kwargs)
vecstore.add_texts(texts, metadatas=metadatas)
return vecstore
https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html
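A short, hedged sketch of the typical flow for the class above. The connection string, namespace, and filter are placeholders; an Atlas Search index named ``default`` over the ``embedding`` field is assumed to exist on the collection.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch

vectorstore = MongoDBAtlasVectorSearch.from_connection_string(
    "mongodb+srv://<user>:<password>@<cluster>.mongodb.net",
    "langchain_db.test_collection",
    OpenAIEmbeddings(),
    index_name="default",
)

vectorstore.add_texts(
    ["MongoDB Atlas supports vector search.", "knnBeta is the search operator."],
    metadatas=[{"topic": "atlas"}, {"topic": "operators"}],
)

# pre_filter is passed through as knnBeta's `filter`; post_filter_pipeline runs
# as additional aggregation stages after the $search stage.
docs = vectorstore.similarity_search(
    "vector search on Atlas",
    k=2,
    pre_filter={"text": {"path": "topic", "query": "atlas"}},
)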
Source code for langchain.vectorstores.clarifai
from __future__ import annotations
import logging
import os
import traceback
from typing import Any, Iterable, List, Optional, Tuple
import requests
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
logger = logging.getLogger(__name__)
[docs]class Clarifai(VectorStore):
"""Wrapper around Clarifai AI platform's vector store.
To use, you should have the ``clarifai`` python package installed.
Example:
.. code-block:: python
from langchain.vectorstores import Clarifai
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Clarifai("langchain_store", embeddings.embed_query)
"""
def __init__(
self,
user_id: Optional[str] = None,
app_id: Optional[str] = None,
pat: Optional[str] = None,
number_of_docs: Optional[int] = None,
api_base: Optional[str] = None,
) -> None:
"""Initialize with Clarifai client.
Args:
user_id (Optional[str], optional): User ID. Defaults to None.
app_id (Optional[str], optional): App ID. Defaults to None.
pat (Optional[str], optional): Personal access token. Defaults to None.
number_of_docs (Optional[int], optional): Number of documents to return
during vector search. Defaults to None.
api_base (Optional[str], optional): API base. Defaults to None.
Raises:
ValueError: If user ID, app ID or personal access token is not provided.
"""
try:
from clarifai.auth.helper import DEFAULT_BASE, ClarifaiAuthHelper
from clarifai.client import create_stub
except ImportError:
raise ValueError(
"Could not import clarifai python package. "
"Please install it with `pip install clarifai`."
)
# Fall back to the default Clarifai base URL when none is provided.
self._api_base = api_base if api_base is not None else DEFAULT_BASE
self._user_id = user_id or os.environ.get("CLARIFAI_USER_ID")
self._app_id = app_id or os.environ.get("CLARIFAI_APP_ID")
self._pat = pat or os.environ.get("CLARIFAI_PAT_KEY")
if self._user_id is None or self._app_id is None or self._pat is None:
raise ValueError(
"Could not find CLARIFAI_USER_ID, CLARIFAI_APP_ID or\
CLARIFAI_PAT in your environment. "
"Please set those env variables with a valid user ID, \
app ID and personal access token \
from https://clarifai.com/settings/security."
)
self._auth = ClarifaiAuthHelper(
user_id=self._user_id,
app_id=self._app_id,
pat=self._pat,
base=self._api_base,
)
self._stub = create_stub(self._auth)
self._userDataObject = self._auth.get_user_app_id_proto()
self._number_of_docs = number_of_docs
def _post_text_input(self, text: str, metadata: dict) -> str:
"""Post text to Clarifai and return the ID of the input.
Args:
text (str): Text to post.
metadata (dict): Metadata to post.
Returns:
str: ID of the input.
"""
try:
from clarifai_grpc.grpc.api import resources_pb2, service_pb2
from clarifai_grpc.grpc.api.status import status_code_pb2
from google.protobuf.struct_pb2 import Struct # type: ignore
except ImportError as e:
raise ImportError(
"Could not import clarifai python package. "
"Please install it with `pip install clarifai`."
) from e
input_metadata = Struct()
input_metadata.update(metadata)
post_inputs_response = self._stub.PostInputs(
service_pb2.PostInputsRequest(
user_app_id=self._userDataObject,
inputs=[
resources_pb2.Input(
data=resources_pb2.Data(
text=resources_pb2.Text(raw=text),
metadata=input_metadata,
)
)
],
)
)
if post_inputs_response.status.code != status_code_pb2.SUCCESS:
logger.error(post_inputs_response.status)
raise Exception(
"Post inputs failed, status: " + post_inputs_response.status.description
)
input_id = post_inputs_response.inputs[0].id
return input_id
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Add texts to the Clarifai vectorstore. This will push the text
to a Clarifai application.
Application use base workflow that create and store embedding for each text.
Make sure you are using a base workflow that is compatible with text
(such as Language Understanding).
Args:
texts (Iterable[str]): Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional): Optional list of metadatas.
ids (Optional[List[str]], optional): Optional list of IDs.
Returns:
List[str]: List of IDs of the added texts.
"""
# Materialize the iterable once so the assertions below do not exhaust it.
texts = list(texts)
assert len(texts) > 0, "No texts provided to add to the vectorstore."
if metadatas is not None:
assert len(texts) == len(
metadatas
), "Number of texts and metadatas should be the same."
input_ids = []
for idx, text in enumerate(texts):
try:
metadata = metadatas[idx] if metadatas else {}
input_id = self._post_text_input(text, metadata)
input_ids.append(input_id)
logger.debug(f"Input {input_id} posted successfully.")
except Exception as error:
logger.warning(f"Post inputs failed: {error}")
traceback.print_exc()
return input_ids
[docs] def similarity_search_with_score(
self,
query: str,
k: int = 4,
filter: Optional[dict] = None,
namespace: Optional[str] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Run similarity search with score using Clarifai.
Args:
query (str): Query text to search for.
k (int): Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata.
Defaults to None.
Returns:
List[Document]: List of documents most similar to the query text.
"""
try:
from clarifai_grpc.grpc.api import resources_pb2, service_pb2
from clarifai_grpc.grpc.api.status import status_code_pb2
from google.protobuf import json_format # type: ignore
except ImportError as e:
raise ImportError(
"Could not import clarifai python package. "
"Please install it with `pip install clarifai`."
) from e
# Get number of docs to return
if self._number_of_docs is not None:
k = self._number_of_docs
post_annotations_searches_response = self._stub.PostAnnotationsSearches(
service_pb2.PostAnnotationsSearchesRequest(
user_app_id=self._userDataObject,
searches=[
resources_pb2.Search(
query=resources_pb2.Query(
ranks=[
resources_pb2.Rank(
annotation=resources_pb2.Annotation(
data=resources_pb2.Data(
text=resources_pb2.Text(raw=query),
)
)
)
]
)
)
],
pagination=service_pb2.Pagination(page=1, per_page=k),
)
)
# Check if search was successful
if post_annotations_searches_response.status.code != status_code_pb2.SUCCESS:
raise Exception(
"Post searches failed, status: "
+ post_annotations_searches_response.status.description
)
# Retrieve hits
hits = post_annotations_searches_response.hits
docs_and_scores = []
# Iterate over hits and retrieve metadata and text
for hit in hits:
metadata = json_format.MessageToDict(hit.input.data.metadata)
request = requests.get(hit.input.data.text.url)
# override encoding by real educated guess as provided by chardet
request.encoding = request.apparent_encoding
requested_text = request.text
logger.debug(
f"\tScore {hit.score:.2f} for annotation: {hit.annotation.id}\
off input: {hit.input.id}, text: {requested_text[:125]}"
)
docs_and_scores.append(
(Document(page_content=requested_text, metadata=metadata), hit.score)
)
return docs_and_scores
[docs] def similarity_search(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Document]:
"""Run similarity search using Clarifai.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query.
"""
docs_and_scores = self.similarity_search_with_score(query, k=k, **kwargs)
return [doc for doc, _ in docs_and_scores]
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Optional[Embeddings] = None,
metadatas: Optional[List[dict]] = None,
user_id: Optional[str] = None,
app_id: Optional[str] = None,
pat: Optional[str] = None,
number_of_docs: Optional[int] = None,
api_base: Optional[str] = None,
**kwargs: Any,
) -> Clarifai:
"""Create a Clarifai vectorstore from a list of texts.
Args:
user_id (str): User ID.
app_id (str): App ID.
texts (List[str]): List of texts to add.
pat (Optional[str]): Personal access token. Defaults to None.
number_of_docs (Optional[int]): Number of documents to return
during vector search. Defaults to None.
api_base (Optional[str]): API base. Defaults to None.
metadatas (Optional[List[dict]]): Optional list of metadatas.
Defaults to None.
Returns:
Clarifai: Clarifai vectorstore.
"""
clarifai_vector_db = cls(
user_id=user_id,
app_id=app_id,
pat=pat,
number_of_docs=number_of_docs,
api_base=api_base,
)
clarifai_vector_db.add_texts(texts=texts, metadatas=metadatas)
return clarifai_vector_db
[docs] @classmethod
def from_documents(
cls,
documents: List[Document],
embedding: Optional[Embeddings] = None,
user_id: Optional[str] = None,
app_id: Optional[str] = None,
pat: Optional[str] = None,
number_of_docs: Optional[int] = None,
api_base: Optional[str] = None,
**kwargs: Any,
) -> Clarifai:
"""Create a Clarifai vectorstore from a list of documents.
Args:
user_id (str): User ID.
app_id (str): App ID.
documents (List[Document]): List of documents to add.
pat (Optional[str]): Personal access token. Defaults to None.
number_of_docs (Optional[int]): Number of documents to return
during vector search. Defaults to None.
api_base (Optional[str]): API base. Defaults to None.
Returns:
Clarifai: Clarifai vectorstore.
"""
texts = [doc.page_content for doc in documents]
metadatas = [doc.metadata for doc in documents]
return cls.from_texts(
user_id=user_id,
app_id=app_id,
texts=texts,
pat=pat,
number_of_docs=number_of_docs,
api_base=api_base,
metadatas=metadatas,
)
https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html
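A hedged usage sketch for the class above. The user ID, app ID, and personal access token are placeholders, and the Clarifai app is assumed to use a text-compatible base workflow so that embeddings are computed server-side (no local ``Embeddings`` object is required).

from langchain.docstore.document import Document
from langchain.vectorstores import Clarifai

docs = [
    Document(page_content="Clarifai indexes text server-side.", metadata={"source": "a"}),
    Document(page_content="Search ranks annotations by similarity.", metadata={"source": "b"}),
]

clarifai_db = Clarifai.from_documents(
    documents=docs,
    user_id="<your-user-id>",
    app_id="<your-app-id>",
    pat="<your-personal-access-token>",
    number_of_docs=2,
)

results = clarifai_db.similarity_search("how is text indexed?")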
Source code for langchain.vectorstores.chroma
"""Wrapper around ChromaDB embeddings platform."""
from __future__ import annotations
import logging
import uuid
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Tuple, Type
import numpy as np
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import xor_args
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
if TYPE_CHECKING:
import chromadb
import chromadb.config
from chromadb.api.types import ID, OneOrMany, Where, WhereDocument
logger = logging.getLogger()
DEFAULT_K = 4 # Number of Documents to return.
def _results_to_docs(results: Any) -> List[Document]:
return [doc for doc, _ in _results_to_docs_and_scores(results)]
def _results_to_docs_and_scores(results: Any) -> List[Tuple[Document, float]]:
return [
# TODO: Chroma can do batch querying,
# we shouldn't hard code to the 1st result
(Document(page_content=result[0], metadata=result[1] or {}), result[2])
for result in zip(
results["documents"][0],
results["metadatas"][0],
results["distances"][0],
)
]
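# --- Illustrative sketch, not part of the module source ---
# Hypothetical shape of a chromadb query result for one query, and what the
# helper above turns it into:
example_results = {
    "documents": [["doc one text", "doc two text"]],
    "metadatas": [[{"source": "a"}, None]],
    "distances": [[0.12, 0.34]],
}
example_pairs = _results_to_docs_and_scores(example_results)
# [(Document(page_content="doc one text", metadata={"source": "a"}), 0.12),
#  (Document(page_content="doc two text", metadata={}), 0.34)]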
[docs]class Chroma(VectorStore):
"""Wrapper around ChromaDB embeddings platform.
To use, you should have the ``chromadb`` python package installed.
Example:
.. code-block:: python
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Chroma("langchain_store", embeddings)
"""
_LANGCHAIN_DEFAULT_COLLECTION_NAME = "langchain"
def __init__(
self,
collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
embedding_function: Optional[Embeddings] = None,
persist_directory: Optional[str] = None,
client_settings: Optional[chromadb.config.Settings] = None,
collection_metadata: Optional[Dict] = None,
client: Optional[chromadb.Client] = None,
) -> None:
"""Initialize with Chroma client."""
try:
import chromadb
import chromadb.config
except ImportError:
raise ValueError(
"Could not import chromadb python package. "
"Please install it with `pip install chromadb`."
)
if client is not None:
self._client = client
else:
if client_settings:
self._client_settings = client_settings
else:
self._client_settings = chromadb.config.Settings()
if persist_directory is not None:
self._client_settings = chromadb.config.Settings(
chroma_db_impl="duckdb+parquet",
persist_directory=persist_directory,
)
self._client = chromadb.Client(self._client_settings)
self._embedding_function = embedding_function
self._persist_directory = persist_directory
self._collection = self._client.get_or_create_collection(
name=collection_name,
embedding_function=self._embedding_function.embed_documents
if self._embedding_function is not None
else None,
metadata=collection_metadata,
)
@xor_args(("query_texts", "query_embeddings"))
def __query_collection(
self,
query_texts: Optional[List[str]] = None,
query_embeddings: Optional[List[List[float]]] = None,
n_results: int = 4,
where: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Document]:
"""Query the chroma collection."""
try:
import chromadb # noqa: F401
except ImportError:
raise ValueError(
"Could not import chromadb python package. "
"Please install it with `pip install chromadb`."
)
return self._collection.query(
query_texts=query_texts,
query_embeddings=query_embeddings,
n_results=n_results,
where=where,
**kwargs,
)
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts (Iterable[str]): Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional): Optional list of metadatas.
ids (Optional[List[str]], optional): Optional list of IDs.
Returns:
List[str]: List of IDs of the added texts.
"""
# TODO: Handle the case where the user doesn't provide ids on the Collection
if ids is None:
ids = [str(uuid.uuid1()) for _ in texts]
embeddings = None
if self._embedding_function is not None:
embeddings = self._embedding_function.embed_documents(list(texts))
self._collection.upsert(
metadatas=metadatas, embeddings=embeddings, documents=texts, ids=ids
)
return ids
[docs] def similarity_search(
self,
query: str,
k: int = DEFAULT_K,
filter: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Document]:
"""Run similarity search with Chroma.
Args:
query (str): Query text to search for.
k (int): Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List[Document]: List of documents most similar to the query text.
"""
docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
return [doc for doc, _ in docs_and_scores]
[docs] def similarity_search_by_vector(
self,
embedding: List[float],
k: int = DEFAULT_K,
filter: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to embedding vector.
Args:
embedding (str): Embedding to look up documents similar to.
k (int): Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List of Documents most similar to the query vector.
"""
results = self.__query_collection(
query_embeddings=embedding, n_results=k, where=filter
)
return _results_to_docs(results)
[docs] def similarity_search_with_score(
self,
query: str,
k: int = DEFAULT_K,
filter: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Run similarity search with Chroma with distance.
Args:
query (str): Query text to search for.
k (int): Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List[Tuple[Document, float]]: List of documents most similar to
the query text and cosine distance in float for each.
Lower score represents more similarity.
"""
if self._embedding_function is None:
results = self.__query_collection(
query_texts=[query], n_results=k, where=filter
)
else:
query_embedding = self._embedding_function.embed_query(query)
results = self.__query_collection(
query_embeddings=[query_embedding], n_results=k, where=filter
)
return _results_to_docs_and_scores(results)
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
return self.similarity_search_with_score(query, k, **kwargs)
[docs] def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = DEFAULT_K,
fetch_k: int = 20,
lambda_mult: float = 0.5,
filter: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List of Documents selected by maximal marginal relevance.
"""
results = self.__query_collection(
query_embeddings=embedding,
n_results=fetch_k,
where=filter,
include=["metadatas", "documents", "distances", "embeddings"],
)
mmr_selected = maximal_marginal_relevance(
np.array(embedding, dtype=np.float32),
results["embeddings"][0],
k=k,
lambda_mult=lambda_mult,
)
candidates = _results_to_docs(results)
selected_results = [r for i, r in enumerate(candidates) if i in mmr_selected]
return selected_results
[docs] def max_marginal_relevance_search(
self,
query: str,
k: int = DEFAULT_K,
fetch_k: int = 20,
lambda_mult: float = 0.5,
filter: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List of Documents selected by maximal marginal relevance.
"""
if self._embedding_function is None:
raise ValueError(
"For MMR search, you must specify an embedding function on" "creation."
)
embedding = self._embedding_function.embed_query(query)
docs = self.max_marginal_relevance_search_by_vector(
embedding, k, fetch_k, lambda_mult=lambda_mult, filter=filter
)
return docs
[docs] def delete_collection(self) -> None:
"""Delete the collection."""
self._client.delete_collection(self._collection.name)
[docs] def get(
self,
ids: Optional[OneOrMany[ID]] = None,
where: Optional[Where] = None,
limit: Optional[int] = None,
offset: Optional[int] = None,
where_document: Optional[WhereDocument] = None,
include: Optional[List[str]] = None,
) -> Dict[str, Any]:
"""Gets the collection.
Args:
ids: The ids of the embeddings to get. Optional.
where: A Where type dict used to filter results by.
E.g. `{"color" : "red", "price": 4.20}`. Optional.
limit: The number of documents to return. Optional.
offset: The offset to start returning results from.
Useful for paging results with limit. Optional.
where_document: A WhereDocument type dict used to filter by the documents.
E.g. `{$contains: {"text": "hello"}}`. Optional.
include: A list of what to include in the results.
Can contain `"embeddings"`, `"metadatas"`, `"documents"`.
Ids are always included.
Defaults to `["metadatas", "documents"]`. Optional.
"""
kwargs = {
"ids": ids,
"where": where,
"limit": limit,
"offset": offset,
"where_document": where_document,
}
if include is not None:
kwargs["include"] = include
return self._collection.get(**kwargs)
[docs] def persist(self) -> None:
"""Persist the collection.
This can be used to explicitly persist the data to disk.
It will also be called automatically when the object is destroyed.
"""
if self._persist_directory is None:
raise ValueError(
"You must specify a persist_directory on"
"creation to persist the collection."
)
self._client.persist()
[docs] def update_document(self, document_id: str, document: Document) -> None:
"""Update a document in the collection.
Args:
document_id (str): ID of the document to update.
document (Document): Document to update.
"""
text = document.page_content
metadata = document.metadata
if self._embedding_function is None:
raise ValueError(
"For update, you must specify an embedding function on creation."
)
embeddings = self._embedding_function.embed_documents([text])
self._collection.update(
ids=[document_id],
embeddings=embeddings,
documents=[text],
metadatas=[metadata],
)
[docs] @classmethod
def from_texts(
cls: Type[Chroma],
texts: List[str],
embedding: Optional[Embeddings] = None,
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
persist_directory: Optional[str] = None,
client_settings: Optional[chromadb.config.Settings] = None,
client: Optional[chromadb.Client] = None,
**kwargs: Any,
) -> Chroma:
"""Create a Chroma vectorstore from a raw documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Args:
texts (List[str]): List of texts to add to the collection.
collection_name (str): Name of the collection to create.
persist_directory (Optional[str]): Directory to persist the collection.
embedding (Optional[Embeddings]): Embedding function. Defaults to None.
metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.
ids (Optional[List[str]]): List of document IDs. Defaults to None.
client_settings (Optional[chromadb.config.Settings]): Chroma client settings
Returns:
Chroma: Chroma vectorstore.
"""
chroma_collection = cls(
collection_name=collection_name,
embedding_function=embedding,
persist_directory=persist_directory,
client_settings=client_settings,
client=client,
)
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
return chroma_collection
[docs] @classmethod
def from_documents(
cls: Type[Chroma],
documents: List[Document],
embedding: Optional[Embeddings] = None,
ids: Optional[List[str]] = None,
collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
persist_directory: Optional[str] = None,
client_settings: Optional[chromadb.config.Settings] = None,
client: Optional[chromadb.Client] = None,
**kwargs: Any,
) -> Chroma:
"""Create a Chroma vectorstore from a list of documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Args:
collection_name (str): Name of the collection to create.
persist_directory (Optional[str]): Directory to persist the collection.
ids (Optional[List[str]]): List of document IDs. Defaults to None.
documents (List[Document]): List of documents to add to the vectorstore.
embedding (Optional[Embeddings]): Embedding function. Defaults to None.
client_settings (Optional[chromadb.config.Settings]): Chroma client settings
Returns:
Chroma: Chroma vectorstore.
"""
texts = [doc.page_content for doc in documents]
metadatas = [doc.metadata for doc in documents]
return cls.from_texts(
texts=texts,
embedding=embedding,
metadatas=metadatas,
ids=ids,
collection_name=collection_name,
persist_directory=persist_directory,
client_settings=client_settings,
client=client,
)
[docs] def delete(self, ids: List[str]) -> None:
"""Delete by vector IDs.
Args:
ids: List of ids to delete.
"""
self._collection.delete(ids=ids)
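# ---------------------------------------------------------------------------
# Usage sketch (illustrative, not part of the module above). It assumes the
# ``chromadb`` package is installed and that OPENAI_API_KEY is set; the
# "./chroma_db" persist directory and the sample texts are hypothetical.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma

def _chroma_usage_example() -> None:
    embeddings = OpenAIEmbeddings()
    # Build a persisted collection from raw texts.
    db = Chroma.from_texts(
        texts=["LangChain integrates many vector stores", "Chroma can persist to disk"],
        embedding=embeddings,
        persist_directory="./chroma_db",  # hypothetical path
    )
    db.persist()  # flush the collection to disk explicitly
    # Plain similarity search and diversity-aware MMR search.
    similar = db.similarity_search("vector stores", k=2)
    diverse = db.max_marginal_relevance_search("vector stores", k=2, fetch_k=10)
    print(len(similar), len(diverse))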
Source code for langchain.vectorstores.qdrant
"""Wrapper around Qdrant vector database."""
from __future__ import annotations
import uuid
import warnings
from itertools import islice
from operator import itemgetter
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
Iterable,
List,
Optional,
Sequence,
Tuple,
Type,
Union,
)
import numpy as np
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
if TYPE_CHECKING:
from qdrant_client.conversions import common_types
from qdrant_client.http import models as rest
DictFilter = Dict[str, Union[str, int, bool, dict, list]]
MetadataFilter = Union[DictFilter, common_types.Filter]
[docs]class Qdrant(VectorStore):
"""Wrapper around Qdrant vector database.
To use, you should have the ``qdrant-client`` package installed.
Example:
.. code-block:: python
from qdrant_client import QdrantClient
from langchain import Qdrant
client = QdrantClient()
collection_name = "MyCollection"
qdrant = Qdrant(client, collection_name, embedding_function)
"""
CONTENT_KEY = "page_content"
METADATA_KEY = "metadata"
def __init__(
self,
client: Any,
collection_name: str,
embeddings: Optional[Embeddings] = None,
content_payload_key: str = CONTENT_KEY,
metadata_payload_key: str = METADATA_KEY,
embedding_function: Optional[Callable] = None, # deprecated
):
"""Initialize with necessary components."""
try:
import qdrant_client
except ImportError:
raise ValueError(
"Could not import qdrant-client python package. "
"Please install it with `pip install qdrant-client`."
)
if not isinstance(client, qdrant_client.QdrantClient):
raise ValueError(
f"client should be an instance of qdrant_client.QdrantClient, "
f"got {type(client)}"
)
if embeddings is None and embedding_function is None:
raise ValueError(
"`embeddings` value can't be None. Pass `Embeddings` instance."
)
if embeddings is not None and embedding_function is not None:
raise ValueError(
"Both `embeddings` and `embedding_function` are passed. "
"Use `embeddings` only."
)
self.embeddings = embeddings
self._embeddings_function = embedding_function
self.client: qdrant_client.QdrantClient = client
self.collection_name = collection_name
self.content_payload_key = content_payload_key or self.CONTENT_KEY
self.metadata_payload_key = metadata_payload_key or self.METADATA_KEY
if embedding_function is not None:
warnings.warn(
"Using `embedding_function` is deprecated. "
"Pass `Embeddings` instance to `embeddings` instead."
)
if not isinstance(embeddings, Embeddings):
warnings.warn(
"`embeddings` should be an instance of `Embeddings`."
"Using `embeddings` as `embedding_function` which is deprecated"
)
self._embeddings_function = embeddings
self.embeddings = None
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[Sequence[str]] = None,
batch_size: int = 64,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
ids:
Optional list of ids to associate with the texts. Ids have to be
uuid-like strings.
batch_size:
How many vectors to upload per request.
Default: 64
Returns:
List of ids from adding the texts into the vectorstore.
"""
from qdrant_client.http import models as rest
added_ids = []
texts_iterator = iter(texts)
metadatas_iterator = iter(metadatas or [])
ids_iterator = iter(ids or [uuid.uuid4().hex for _ in iter(texts)])
while batch_texts := list(islice(texts_iterator, batch_size)):
# Take the corresponding metadata and id for each text in a batch
batch_metadatas = list(islice(metadatas_iterator, batch_size)) or None
batch_ids = list(islice(ids_iterator, batch_size))
self.client.upsert(
collection_name=self.collection_name,
points=rest.Batch.construct(
ids=batch_ids,
vectors=self._embed_texts(batch_texts),
payloads=self._build_payloads(
batch_texts,
batch_metadatas,
self.content_payload_key,
self.metadata_payload_key,
),
),
)
added_ids.extend(batch_ids)
return added_ids
[docs] def similarity_search(
self,
query: str,
k: int = 4,
filter: Optional[MetadataFilter] = None,
search_params: Optional[common_types.SearchParams] = None,
offset: int = 0,
score_threshold: Optional[float] = None,
consistency: Optional[common_types.ReadConsistency] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter: Filter by metadata. Defaults to None.
search_params: Additional search params
offset:
Offset of the first result to return.
May be used to paginate results.
Note: large offset values may cause performance issues.
score_threshold:
Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the
threshold depending on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
consistency:
Read consistency of the search. Defines how many replicas should be
queried before returning the result.
Values:
- int - number of replicas to query, values should present in all
queried replicas
- 'majority' - query all replicas, but return values present in the
majority of replicas
- 'quorum' - query the majority of replicas, return values present in
all of them
- 'all' - query all replicas, and return values present in all replicas
Returns:
List of Documents most similar to the query.
"""
results = self.similarity_search_with_score(
query,
k,
filter=filter,
search_params=search_params,
offset=offset,
score_threshold=score_threshold,
consistency=consistency,
**kwargs,
)
return list(map(itemgetter(0), results))
[docs] def similarity_search_with_score(
self,
query: str,
k: int = 4,
filter: Optional[MetadataFilter] = None,
search_params: Optional[common_types.SearchParams] = None,
offset: int = 0,
score_threshold: Optional[float] = None,
consistency: Optional[common_types.ReadConsistency] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter: Filter by metadata. Defaults to None.
search_params: Additional search params
offset:
Offset of the first result to return.
May be used to paginate results.
Note: large offset values may cause performance issues.
score_threshold:
Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the
threshold depending on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
consistency:
Read consistency of the search. Defines how many replicas should be
queried before returning the result.
Values:
- int - number of replicas to query, values should present in all
queried replicas
- 'majority' - query all replicas, but return values present in the
majority of replicas
- 'quorum' - query the majority of replicas, return values present in
all of them
- 'all' - query all replicas, and return values present in all replicas
Returns:
List of documents most similar to the query text and cosine
distance in float for each.
Lower score represents more similarity.
"""
return self.similarity_search_with_score_by_vector(
self._embed_query(query),
k,
filter=filter,
search_params=search_params,
offset=offset,
score_threshold=score_threshold,
consistency=consistency,
**kwargs,
)
[docs] def similarity_search_by_vector(
self,
embedding: List[float],
k: int = 4,
filter: Optional[MetadataFilter] = None,
search_params: Optional[common_types.SearchParams] = None,
offset: int = 0,
score_threshold: Optional[float] = None,
consistency: Optional[common_types.ReadConsistency] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to embedding vector.
Args:
embedding: Embedding vector to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter: Filter by metadata. Defaults to None.
search_params: Additional search params
offset:
Offset of the first result to return.
May be used to paginate results.
Note: large offset values may cause performance issues.
score_threshold:
Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the
threshold depending on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
consistency:
Read consistency of the search. Defines how many replicas should be
queried before returning the result.
Values:
- int - number of replicas to query, values should present in all
queried replicas
- 'majority' - query all replicas, but return values present in the
majority of replicas
- 'quorum' - query the majority of replicas, return values present in
all of them
- 'all' - query all replicas, and return values present in all replicas
Returns:
List of Documents most similar to the query.
"""
results = self.similarity_search_with_score_by_vector(
embedding,
k,
filter=filter,
search_params=search_params,
offset=offset,
score_threshold=score_threshold,
consistency=consistency,
**kwargs,
)
return list(map(itemgetter(0), results))
[docs] def similarity_search_with_score_by_vector(
self,
embedding: List[float],
k: int = 4,
filter: Optional[MetadataFilter] = None,
search_params: Optional[common_types.SearchParams] = None,
offset: int = 0,
score_threshold: Optional[float] = None,
consistency: Optional[common_types.ReadConsistency] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs most similar to embedding vector.
Args:
embedding: Embedding vector to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter: Filter by metadata. Defaults to None.
search_params: Additional search params
offset:
Offset of the first result to return.
May be used to paginate results.
Note: large offset values may cause performance issues.
score_threshold:
Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the
threshold depending on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
consistency:
Read consistency of the search. Defines how many replicas should be
queried before returning the result.
Values:
- int - number of replicas to query, values should be present in all
queried replicas
- 'majority' - query all replicas, but return values present in the
majority of replicas
- 'quorum' - query the majority of replicas, return values present in
all of them
- 'all' - query all replicas, and return values present in all replicas
Returns:
List of documents most similar to the query text and cosine
distance in float for each.
Lower score represents more similarity.
"""
if filter is not None and isinstance(filter, dict):
warnings.warn(
"Using dict as a `filter` is deprecated. Please use qdrant-client "
"filters directly: "
"https://qdrant.tech/documentation/concepts/filtering/",
DeprecationWarning,
)
qdrant_filter = self._qdrant_filter_from_dict(filter)
else:
qdrant_filter = filter
results = self.client.search(
collection_name=self.collection_name,
query_vector=embedding,
query_filter=qdrant_filter,
search_params=search_params,
limit=k,
offset=offset,
with_payload=True,
with_vectors=False, # Langchain does not expect vectors to be returned
score_threshold=score_threshold,
consistency=consistency,
**kwargs,
)
return [
(
self._document_from_scored_point(
result, self.content_payload_key, self.metadata_payload_key
),
result.score,
)
for result in results
]
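# Note on filtering: passing a plain dict as ``filter`` is deprecated (see the
# warning above); it is translated into ``metadata.<key>`` match conditions by
# ``_qdrant_filter_from_dict`` below. The equivalent explicit filter, built
# with qdrant-client models, looks roughly like this (field name and value are
# hypothetical):
#
#   from qdrant_client.http import models as rest
#   qdrant_filter = rest.Filter(
#       must=[
#           rest.FieldCondition(
#               key="metadata.color",
#               match=rest.MatchValue(value="red"),
#           )
#       ]
#   )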
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Args:
query: input text
k: Number of Documents to return. Defaults to 4.
**kwargs: kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns:
List of Tuples of (doc, similarity_score)
"""
return self.similarity_search_with_score(query, k, **kwargs)
[docs] def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
embedding = self._embed_query(query)
results = self.client.search(
collection_name=self.collection_name,
query_vector=embedding,
with_payload=True,
with_vectors=True,
limit=fetch_k,
)
embeddings = [result.vector for result in results]
mmr_selected = maximal_marginal_relevance(
np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult
)
return [
self._document_from_scored_point(
results[i], self.content_payload_key, self.metadata_payload_key
)
for i in mmr_selected
]
[docs] @classmethod
def from_texts(
cls: Type[Qdrant],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
ids: Optional[Sequence[str]] = None,
location: Optional[str] = None,
url: Optional[str] = None,
port: Optional[int] = 6333,
grpc_port: int = 6334,
prefer_grpc: bool = False,
https: Optional[bool] = None,
api_key: Optional[str] = None,
prefix: Optional[str] = None,
timeout: Optional[float] = None,
host: Optional[str] = None,
path: Optional[str] = None,
collection_name: Optional[str] = None,
distance_func: str = "Cosine",
content_payload_key: str = CONTENT_KEY,
metadata_payload_key: str = METADATA_KEY,
batch_size: int = 64,
shard_number: Optional[int] = None,
replication_factor: Optional[int] = None,
write_consistency_factor: Optional[int] = None,
on_disk_payload: Optional[bool] = None,
hnsw_config: Optional[common_types.HnswConfigDiff] = None,
optimizers_config: Optional[common_types.OptimizersConfigDiff] = None,
wal_config: Optional[common_types.WalConfigDiff] = None,
quantization_config: Optional[common_types.QuantizationConfig] = None,
init_from: Optional[common_types.InitFrom] = None,
**kwargs: Any,
) -> Qdrant:
"""Construct Qdrant wrapper from a list of texts.
Args:
texts: A list of texts to be indexed in Qdrant.
embedding: A subclass of `Embeddings`, responsible for text vectorization.
metadatas:
An optional list of metadata. If provided it has to be of the same
length as a list of texts.
ids:
Optional list of ids to associate with the texts. Ids have to be
uuid-like strings.
location:
If `:memory:` - use in-memory Qdrant instance.
If `str` - use it as a `url` parameter.
If `None` - fallback to relying on `host` and `port` parameters.
url: either host or str of "Optional[scheme], host, Optional[port],
Optional[prefix]". Default: `None`
port: Port of the REST API interface. Default: 6333
grpc_port: Port of the gRPC interface. Default: 6334
prefer_grpc:
If true - use gRPC interface whenever possible in custom methods.
Default: False
https: If true - use HTTPS(SSL) protocol. Default: None
api_key: API key for authentication in Qdrant Cloud. Default: None
prefix:
If not None - add prefix to the REST URL path.
Example: service/v1 will result in
http://localhost:6333/service/v1/{qdrant-endpoint} for REST API.
Default: None
timeout:
Timeout for REST and gRPC API requests.
Default: 5.0 seconds for REST and unlimited for gRPC
host:
Host name of Qdrant service. If url and host are None, set to
'localhost'. Default: None
path:
Path in which the vectors will be stored while using local mode.
Default: None
collection_name:
Name of the Qdrant collection to be used. If not provided,
it will be created randomly. Default: None
distance_func:
Distance function. One of: "Cosine" / "Euclid" / "Dot".
Default: "Cosine"
content_payload_key:
A payload key used to store the content of the document.
Default: "page_content"
metadata_payload_key:
A payload key used to store the metadata of the document.
Default: "metadata"
batch_size:
How many vectors to upload per request.
Default: 64
shard_number: Number of shards in collection. Default is 1, minimum is 1.
replication_factor:
Replication factor for collection. Default is 1, minimum is 1.
Defines how many copies of each shard will be created.
Has effect only in distributed mode.
write_consistency_factor:
Write consistency factor for collection. Default is 1, minimum is 1.
Defines how many replicas should apply the operation for us to consider
it successful. Increasing this number will make the collection more
resilient to inconsistencies, but will also make it fail if not enough
replicas are available.
Does not have any performance impact.
Has effect only in distributed mode.
on_disk_payload:
If true - point's payload will not be stored in memory.
It will be read from the disk every time it is requested.
This setting saves RAM by (slightly) increasing the response time.
Note: those payload values that are involved in filtering and are
indexed - remain in RAM.
hnsw_config: Params for HNSW index
optimizers_config: Params for optimizer
wal_config: Params for Write-Ahead-Log
quantization_config:
Params for quantization, if None - quantization will be disabled
init_from:
Use data stored in another collection to initialize this collection
**kwargs:
Additional arguments passed directly into REST client initialization
This is a user-friendly interface that:
1. Creates embeddings, one for each text
2. Initializes the Qdrant database as an in-memory docstore by default
(and overridable to a remote docstore)
3. Adds the text embeddings to the Qdrant database
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain import Qdrant
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
qdrant = Qdrant.from_texts(texts, embeddings, "localhost")
"""
try:
import qdrant_client
except ImportError:
raise ValueError(
"Could not import qdrant-client python package. "
"Please install it with `pip install qdrant-client`."
)
from qdrant_client.http import models as rest
# Just do a single quick embedding to get vector size
partial_embeddings = embedding.embed_documents(texts[:1])
vector_size = len(partial_embeddings[0])
collection_name = collection_name or uuid.uuid4().hex
distance_func = distance_func.upper()
client = qdrant_client.QdrantClient(
location=location,
url=url,
port=port,
grpc_port=grpc_port,
prefer_grpc=prefer_grpc,
https=https,
api_key=api_key,
prefix=prefix,
timeout=timeout,
host=host,
path=path,
**kwargs,
)
client.recreate_collection(
collection_name=collection_name,
vectors_config=rest.VectorParams(
size=vector_size,
distance=rest.Distance[distance_func],
),
shard_number=shard_number,
replication_factor=replication_factor,
write_consistency_factor=write_consistency_factor,
on_disk_payload=on_disk_payload,
hnsw_config=hnsw_config,
optimizers_config=optimizers_config,
wal_config=wal_config,
quantization_config=quantization_config,
init_from=init_from,
timeout=timeout, # type: ignore[arg-type]
)
texts_iterator = iter(texts)
metadatas_iterator = iter(metadatas or [])
ids_iterator = iter(ids or [uuid.uuid4().hex for _ in iter(texts)])
while batch_texts := list(islice(texts_iterator, batch_size)):
# Take the corresponding metadata and id for each text in a batch
batch_metadatas = list(islice(metadatas_iterator, batch_size)) or None
batch_ids = list(islice(ids_iterator, batch_size))
# Generate the embeddings for all the texts in a batch
batch_embeddings = embedding.embed_documents(batch_texts)
client.upsert(
collection_name=collection_name,
points=rest.Batch.construct(
ids=batch_ids,
vectors=batch_embeddings,
payloads=cls._build_payloads(
batch_texts,
batch_metadatas,
content_payload_key,
metadata_payload_key,
),
),
)
return cls(
client=client,
collection_name=collection_name,
embeddings=embedding,
content_payload_key=content_payload_key,
metadata_payload_key=metadata_payload_key,
)
@classmethod
def _build_payloads(
cls,
texts: Iterable[str],
metadatas: Optional[List[dict]],
content_payload_key: str,
metadata_payload_key: str,
) -> List[dict]:
payloads = []
for i, text in enumerate(texts):
if text is None:
raise ValueError(
"At least one of the texts is None. Please remove it before "
"calling .from_texts or .add_texts on Qdrant instance."
)
metadata = metadatas[i] if metadatas is not None else None
payloads.append(
{
content_payload_key: text,
metadata_payload_key: metadata,
}
)
return payloads
@classmethod
def _document_from_scored_point(
cls,
scored_point: Any,
content_payload_key: str,
metadata_payload_key: str,
) -> Document:
return Document(
page_content=scored_point.payload.get(content_payload_key),
metadata=scored_point.payload.get(metadata_payload_key) or {},
)
def _build_condition(self, key: str, value: Any) -> List[rest.FieldCondition]:
from qdrant_client.http import models as rest
out = []
if isinstance(value, dict):
for _key, value in value.items():
out.extend(self._build_condition(f"{key}.{_key}", value))
elif isinstance(value, list):
for _value in value:
if isinstance(_value, dict):
out.extend(self._build_condition(f"{key}[]", _value))
else:
out.extend(self._build_condition(f"{key}", _value))
else:
out.append(
rest.FieldCondition(
key=f"{self.metadata_payload_key}.{key}",
match=rest.MatchValue(value=value),
)
)
return out
def _qdrant_filter_from_dict(
self, filter: Optional[DictFilter]
) -> Optional[rest.Filter]:
from qdrant_client.http import models as rest
if not filter:
return None
return rest.Filter(
must=[
condition
for key, value in filter.items()
for condition in self._build_condition(key, value)
]
)
def _embed_query(self, query: str) -> List[float]:
"""Embed query text.
Used to provide backward compatibility with `embedding_function` argument.
Args:
query: Query text.
Returns:
List of floats representing the query embedding.
"""
if self.embeddings is not None:
embedding = self.embeddings.embed_query(query)
else:
if self._embeddings_function is not None:
embedding = self._embeddings_function(query)
else:
raise ValueError("Neither of embeddings or embedding_function is set")
return embedding.tolist() if hasattr(embedding, "tolist") else embedding
def _embed_texts(self, texts: Iterable[str]) -> List[List[float]]:
"""Embed search texts.
Used to provide backward compatibility with `embedding_function` argument.
Args:
texts: Iterable of texts to embed.
Returns:
List of floats representing the texts embedding.
"""
if self.embeddings is not None:
embeddings = self.embeddings.embed_documents(list(texts))
if hasattr(embeddings, "tolist"):
embeddings = embeddings.tolist()
elif self._embeddings_function is not None:
embeddings = []
for text in texts:
embedding = self._embeddings_function(text)
if hasattr(embedding, "tolist"):
embedding = embedding.tolist()
embeddings.append(embedding)
else:
raise ValueError("Neither of embeddings or embedding_function is set")
return embeddings
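# ---------------------------------------------------------------------------
# Usage sketch (illustrative, not part of the module above). It uses the
# in-memory mode of qdrant-client, so no server is required, but it still
# assumes the ``qdrant-client`` package is installed and OPENAI_API_KEY is set;
# the collection name and sample texts are hypothetical.
from langchain import Qdrant
from langchain.embeddings.openai import OpenAIEmbeddings

def _qdrant_usage_example() -> None:
    embeddings = OpenAIEmbeddings()
    qdrant = Qdrant.from_texts(
        texts=["Qdrant stores vectors", "LangChain wraps Qdrant"],
        embedding=embeddings,
        location=":memory:",     # in-memory Qdrant instance, no server needed
        collection_name="demo",  # hypothetical collection name
    )
    for doc, score in qdrant.similarity_search_with_score("vector database", k=2):
        print(score, doc.page_content)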
Source code for langchain.vectorstores.azuresearch
"""Wrapper around Azure Cognitive Search."""
from __future__ import annotations
import base64
import json
import logging
import uuid
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
Iterable,
List,
Optional,
Tuple,
Type,
)
import numpy as np
from pydantic import BaseModel, root_validator
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.schema import BaseRetriever
from langchain.utils import get_from_env
from langchain.vectorstores.base import VectorStore
logger = logging.getLogger()
if TYPE_CHECKING:
from azure.search.documents import SearchClient
# Allow overriding field names for Azure Search
FIELDS_ID = get_from_env(
key="AZURESEARCH_FIELDS_ID", env_key="AZURESEARCH_FIELDS_ID", default="id"
)
FIELDS_CONTENT = get_from_env(
key="AZURESEARCH_FIELDS_CONTENT",
env_key="AZURESEARCH_FIELDS_CONTENT",
default="content",
)
FIELDS_CONTENT_VECTOR = get_from_env(
key="AZURESEARCH_FIELDS_CONTENT_VECTOR",
env_key="AZURESEARCH_FIELDS_CONTENT_VECTOR",
default="content_vector",
)
FIELDS_METADATA = get_from_env(
key="AZURESEARCH_FIELDS_TAG", env_key="AZURESEARCH_FIELDS_TAG", default="metadata"
)
MAX_UPLOAD_BATCH_SIZE = 1000
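# Note: the field names above are resolved from environment variables once, at
# import time, so any overrides must be exported *before* this module is
# imported. A hedged example (the replacement field names are hypothetical):
#
#   import os
#   os.environ["AZURESEARCH_FIELDS_CONTENT"] = "body"
#   os.environ["AZURESEARCH_FIELDS_CONTENT_VECTOR"] = "body_vector"
#   from langchain.vectorstores.azuresearch import AzureSearch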
def _get_search_client(
endpoint: str,
key: str,
index_name: str,
embedding_function: Callable,
semantic_configuration_name: Optional[str] = None,
) -> SearchClient:
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import ResourceNotFoundError
from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
PrioritizedFields,
SearchableField,
SearchField,
SearchFieldDataType,
SearchIndex,
SemanticConfiguration,
SemanticField,
SemanticSettings,
SimpleField,
VectorSearch,
VectorSearchAlgorithmConfiguration,
)
if key is None:
credential = DefaultAzureCredential()
else:
credential = AzureKeyCredential(key)
index_client: SearchIndexClient = SearchIndexClient(
endpoint=endpoint, credential=credential
)
try:
index_client.get_index(name=index_name)
except ResourceNotFoundError:
# Fields configuration
fields = [
SimpleField(
name=FIELDS_ID,
type=SearchFieldDataType.String,
key=True,
filterable=True,
),
SearchableField(
name=FIELDS_CONTENT,
type=SearchFieldDataType.String,
searchable=True,
retrievable=True,
),
SearchField(
name=FIELDS_CONTENT_VECTOR,
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
dimensions=len(embedding_function("Text")),
vector_search_configuration="default",
),
SearchableField(
name=FIELDS_METADATA,
type=SearchFieldDataType.String,
searchable=True,
retrievable=True,
),
]
# Vector search configuration
vector_search = VectorSearch(
algorithm_configurations=[
VectorSearchAlgorithmConfiguration(
name="default",
kind="hnsw",
hnsw_parameters={
"m": 4,
"efConstruction": 400,
"efSearch": 500,
"metric": "cosine",
},
)
]
)
# Create the semantic settings with the configuration
semantic_settings = (
None
if semantic_configuration_name is None
else SemanticSettings(
configurations=[
SemanticConfiguration(
name=semantic_configuration_name,
prioritized_fields=PrioritizedFields(
prioritized_content_fields=[
SemanticField(field_name=FIELDS_CONTENT)
],
),
)
]
)
)
# Create the search index with the semantic settings and vector search
index = SearchIndex(
name=index_name,
fields=fields,
vector_search=vector_search,
semantic_settings=semantic_settings,
)
index_client.create_index(index)
# Create the search client
return SearchClient(endpoint=endpoint, index_name=index_name, credential=credential)
[docs]class AzureSearch(VectorStore):
def __init__(
self,
azure_search_endpoint: str,
azure_search_key: str,
index_name: str,
embedding_function: Callable,
search_type: str = "hybrid",
semantic_configuration_name: Optional[str] = None,
semantic_query_language: str = "en-us",
**kwargs: Any,
):
"""Initialize with necessary components."""
# Initialize base class
self.embedding_function = embedding_function
self.client = _get_search_client(
azure_search_endpoint,
azure_search_key,
index_name,
embedding_function,
semantic_configuration_name,
)
self.search_type = search_type
self.semantic_configuration_name = semantic_configuration_name
self.semantic_query_language = semantic_query_language
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> List[str]:
"""Add texts data to an existing index."""
keys = kwargs.get("keys")
ids = []
# Write data to index
data = []
for i, text in enumerate(texts):
# Use provided key otherwise use default key
key = keys[i] if keys else str(uuid.uuid4())
# Encoding key for Azure Search valid characters
key = base64.urlsafe_b64encode(bytes(key, "utf-8")).decode("ascii")
metadata = metadatas[i] if metadatas else {}
# Add data to index
data.append(
{
"@search.action": "upload",
FIELDS_ID: key,
FIELDS_CONTENT: text,
FIELDS_CONTENT_VECTOR: np.array(
self.embedding_function(text), dtype=np.float32
).tolist(),
FIELDS_METADATA: json.dumps(metadata),
}
)
ids.append(key)
# Upload data in batches
if len(data) == MAX_UPLOAD_BATCH_SIZE:
response = self.client.upload_documents(documents=data)
# Check if all documents were successfully uploaded
if not all([r.succeeded for r in response]):
raise Exception(response)
# Reset data
data = []
# Considering case where data is an exact multiple of batch-size entries
if len(data) == 0:
return ids
# Upload data to index
response = self.client.upload_documents(documents=data)
# Check if all documents were successfully uploaded
if all([r.succeeded for r in response]):
return ids
else:
raise Exception(response)
[docs] def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
search_type = kwargs.get("search_type", self.search_type)
if search_type == "similarity":
docs = self.vector_search(query, k=k, **kwargs)
elif search_type == "hybrid":
docs = self.hybrid_search(query, k=k, **kwargs)
elif search_type == "semantic_hybrid":
docs = self.semantic_hybrid_search(query, k=k, **kwargs)
else:
raise ValueError(f"search_type of {search_type} not allowed.")
return docs
[docs] def vector_search(self, query: str, k: int = 4, **kwargs: Any) -> List[Document]:
"""
Returns the most similar indexed documents to the query text.
Args:
query (str): The query text for which to find similar documents.
k (int): The number of documents to return. Default is 4.
Returns:
List[Document]: A list of documents that are most similar to the query text.
"""
docs_and_scores = self.vector_search_with_score(
query, k=k, filters=kwargs.get("filters", None)
)
return [doc for doc, _ in docs_and_scores]
[docs] def vector_search_with_score(
self, query: str, k: int = 4, filters: Optional[str] = None
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query and score for each
"""
from azure.search.documents.models import Vector
results = self.client.search(
search_text="",
vector=Vector(
value=np.array(
self.embedding_function(query), dtype=np.float32
).tolist(),
k=k,
fields=FIELDS_CONTENT_VECTOR,
),
select=[f"{FIELDS_ID},{FIELDS_CONTENT},{FIELDS_METADATA}"],
filter=filters,
)
# Convert results to Document objects
docs = [
(
Document(
page_content=result[FIELDS_CONTENT],
metadata=json.loads(result[FIELDS_METADATA]),
),
float(result["@search.score"]),
)
for result in results
]
return docs
[docs] def hybrid_search(self, query: str, k: int = 4, **kwargs: Any) -> List[Document]:
"""
Returns the most similar indexed documents to the query text.
Args:
query (str): The query text for which to find similar documents.
k (int): The number of documents to return. Default is 4.
Returns:
List[Document]: A list of documents that are most similar to the query text.
"""
docs_and_scores = self.hybrid_search_with_score(
query, k=k, filters=kwargs.get("filters", None)
)
return [doc for doc, _ in docs_and_scores]
[docs] def hybrid_search_with_score(
self, query: str, k: int = 4, filters: Optional[str] = None
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query with an hybrid query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query and score for each
"""
from azure.search.documents.models import Vector
results = self.client.search(
search_text=query,
vector=Vector(
value=np.array(
self.embedding_function(query), dtype=np.float32
).tolist(),
k=k,
fields=FIELDS_CONTENT_VECTOR,
),
select=[f"{FIELDS_ID},{FIELDS_CONTENT},{FIELDS_METADATA}"],
filter=filters,
top=k,
)
# Convert results to Document objects
docs = [
(
Document(
page_content=result[FIELDS_CONTENT],
metadata=json.loads(result[FIELDS_METADATA]),
),
float(result["@search.score"]),
)
for result in results
]
return docs
[docs] def semantic_hybrid_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""
Returns the most similar indexed documents to the query text.
Args:
query (str): The query text for which to find similar documents.
k (int): The number of documents to return. Default is 4.
Returns:
List[Document]: A list of documents that are most similar to the query text.
"""
docs_and_scores = self.semantic_hybrid_search_with_score(
query, k=k, filters=kwargs.get("filters", None)
)
return [doc for doc, _ in docs_and_scores]
[docs] def semantic_hybrid_search_with_score(
self, query: str, k: int = 4, filters: Optional[str] = None
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query with an hybrid query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query and score for each
"""
from azure.search.documents.models import Vector
results = self.client.search(
search_text=query,
vector=Vector(
value=np.array(
self.embedding_function(query), dtype=np.float32
).tolist(),
k=50, # Hardcoded value to maximize L2 retrieval
fields=FIELDS_CONTENT_VECTOR,
),
select=[f"{FIELDS_ID},{FIELDS_CONTENT},{FIELDS_METADATA}"],
filter=filters,
query_type="semantic",
query_language=self.semantic_query_language,
semantic_configuration_name=self.semantic_configuration_name,
query_caption="extractive",
query_answer="extractive",
top=k,
)
# Get Semantic Answers
semantic_answers = results.get_answers()
semantic_answers_dict = {}
for semantic_answer in semantic_answers:
semantic_answers_dict[semantic_answer.key] = {
"text": semantic_answer.text,
"highlights": semantic_answer.highlights,
}
# Convert results to Document objects
docs = [
(
Document(
page_content=result["content"],
metadata={
**json.loads(result["metadata"]),
**{
"captions": {
"text": result.get("@search.captions", [{}])[0].text,
"highlights": result.get("@search.captions", [{}])[
0
].highlights,
}
if result.get("@search.captions")
else {},
"answers": semantic_answers_dict.get(
json.loads(result["metadata"]).get("key"), ""
),
},
},
),
float(result["@search.score"]),
)
for result in results
]
return docs
[docs] @classmethod
def from_texts(
cls: Type[AzureSearch],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
azure_search_endpoint: str = "",
azure_search_key: str = "",
index_name: str = "langchain-index",
**kwargs: Any,
) -> AzureSearch:
# Creating a new Azure Search instance
azure_search = cls(
azure_search_endpoint,
azure_search_key,
index_name,
embedding.embed_query,
)
azure_search.add_texts(texts, metadatas, **kwargs)
return azure_search
class AzureSearchVectorStoreRetriever(BaseRetriever, BaseModel):
vectorstore: AzureSearch
search_type: str = "hybrid"
k: int = 4
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
@root_validator()
def validate_search_type(cls, values: Dict) -> Dict:
"""Validate search type."""
if "search_type" in values:
search_type = values["search_type"]
if search_type not in ("similarity", "hybrid", "semantic_hybrid"):
raise ValueError(f"search_type of {search_type} not allowed.")
return values
def get_relevant_documents(self, query: str) -> List[Document]:
if self.search_type == "similarity":
docs = self.vectorstore.vector_search(query, k=self.k)
elif self.search_type == "hybrid":
docs = self.vectorstore.hybrid_search(query, k=self.k)
elif self.search_type == "semantic_hybrid":
docs = self.vectorstore.semantic_hybrid_search(query, k=self.k)
else:
raise ValueError(f"search_type of {self.search_type} not allowed.")
return docs
async def aget_relevant_documents(self, query: str) -> List[Document]:
raise NotImplementedError(
"AzureSearchVectorStoreRetriever does not support async"
)
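# ---------------------------------------------------------------------------
# Usage sketch (illustrative, not part of the module above). It assumes the
# ``azure-search-documents`` package is installed and that a real Azure
# Cognitive Search service is reachable; the endpoint, key and index name
# below are placeholders, and OPENAI_API_KEY must be set for the embeddings.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.azuresearch import AzureSearch

def _azuresearch_usage_example() -> None:
    embeddings = OpenAIEmbeddings()
    store = AzureSearch.from_texts(
        texts=["hybrid search mixes keyword and vector scores"],
        embedding=embeddings,
        azure_search_endpoint="https://<your-service>.search.windows.net",
        azure_search_key="<your-admin-key>",
        index_name="langchain-index",
    )
    # search_type defaults to "hybrid"; "similarity" and "semantic_hybrid"
    # are also accepted (see similarity_search above).
    docs = store.similarity_search("keyword and vector", k=4)
    print(len(docs))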
Source code for langchain.vectorstores.cassandra
"""Wrapper around Cassandra vector-store capabilities, based on cassIO."""
from __future__ import annotations
import hashlib
import typing
from typing import Any, Iterable, List, Optional, Tuple, Type, TypeVar
import numpy as np
if typing.TYPE_CHECKING:
from cassandra.cluster import Session
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
CVST = TypeVar("CVST", bound="Cassandra")
# a positive number of seconds to expire entries, or None for no expiration.
CASSANDRA_VECTORSTORE_DEFAULT_TTL_SECONDS = None
def _hash(_input: str) -> str:
"""Use a deterministic hashing approach."""
return hashlib.md5(_input.encode()).hexdigest()
[docs]class Cassandra(VectorStore):
"""Wrapper around Cassandra embeddings platform.
There is no notion of a default table name, since each embedding
function implies its own vector dimension, which is part of the schema.
Example:
.. code-block:: python
from langchain.vectorstores import Cassandra
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
session = ...
keyspace = 'my_keyspace'
vectorstore = Cassandra(embeddings, session, keyspace, 'my_doc_archive')
"""
_embedding_dimension: int | None
def _getEmbeddingDimension(self) -> int:
if self._embedding_dimension is None:
self._embedding_dimension = len(
self.embedding.embed_query("This is a sample sentence.")
)
return self._embedding_dimension
def __init__(
self,
embedding: Embeddings,
session: Session,
keyspace: str,
table_name: str,
ttl_seconds: int | None = CASSANDRA_VECTORSTORE_DEFAULT_TTL_SECONDS,
) -> None:
try:
from cassio.vector import VectorTable
except (ImportError, ModuleNotFoundError):
raise ImportError(
"Could not import cassio python package. "
"Please install it with `pip install cassio`."
)
"""Create a vector table."""
self.embedding = embedding
self.session = session
self.keyspace = keyspace
self.table_name = table_name
self.ttl_seconds = ttl_seconds
#
self._embedding_dimension = None
#
self.table = VectorTable(
session=session,
keyspace=keyspace,
table=table_name,
embedding_dimension=self._getEmbeddingDimension(),
auto_id=False, # the `add_texts` contract admits user-provided ids
)
[docs] def delete_collection(self) -> None:
"""
Just an alias for `clear`
(to better align with other VectorStore implementations).
"""
self.clear()
[docs] def clear(self) -> None:
"""Empty the collection."""
self.table.clear()
[docs] def delete_by_document_id(self, document_id: str) -> None:
return self.table.delete(document_id)
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts (Iterable[str]): Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional): Optional list of metadatas.
ids (Optional[List[str]], optional): Optional list of IDs.
Returns:
List[str]: List of IDs of the added texts.
"""
_texts = list(texts) # lest it be a generator or something
if ids is None:
# unless otherwise specified, we have deterministic IDs:
# re-inserting an existing document will not create a duplicate.
# (and effectively update the metadata)
ids = [_hash(text) for text in _texts]
if metadatas is None:
metadatas = [{} for _ in _texts]
#
ttl_seconds = kwargs.get("ttl_seconds", self.ttl_seconds)
#
embedding_vectors = self.embedding.embed_documents(_texts)
for text, embedding_vector, text_id, metadata in zip(
_texts, embedding_vectors, ids, metadatas
):
self.table.put(
document=text,
embedding_vector=embedding_vector,
document_id=text_id,
metadata=metadata,
ttl_seconds=ttl_seconds,
)
#
return ids
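# Note: with the default deterministic ids above, add_texts(["hello"]) followed
# later by add_texts(["hello"], metadatas=[{"v": 2}]) writes to the same row
# (id _hash("hello")), so the second call updates the stored metadata rather
# than creating a duplicate document.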
# id-returning search facilities
[docs] def similarity_search_with_score_id_by_vector(
self,
embedding: List[float],
k: int = 4,
) -> List[Tuple[Document, float, str]]:
"""Return docs most similar to embedding vector.
No support for `filter` query (on metadata) along with vector search.
Args:
embedding (List[float]): Embedding to look up documents similar to.
k (int): Number of Documents to return. Defaults to 4.
Returns:
List of (Document, score, id), the most similar to the query vector.
"""
hits = self.table.search(
embedding_vector=embedding,
top_k=k,
metric="cos",
metric_threshold=None,
)
# We stick to 'cos' distance as it can be normalized on a 0-1 axis
# (1=most relevant), as required by this class' contract.
return [
(
Document(
page_content=hit["document"],
metadata=hit["metadata"],
),
0.5 + 0.5 * hit["distance"],
hit["document_id"],
)
for hit in hits
]
[docs] def similarity_search_with_score_id(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float, str]]:
embedding_vector = self.embedding.embed_query(query)
return self.similarity_search_with_score_id_by_vector(
embedding=embedding_vector,
k=k,
)
# id-unaware search facilities
[docs] def similarity_search_with_score_by_vector(
self,
embedding: List[float],
k: int = 4,
) -> List[Tuple[Document, float]]:
"""Return docs most similar to embedding vector.
No support for `filter` query (on metadata) along with vector search.
Args:
embedding (List[float]): Embedding to look up documents similar to.
k (int): Number of Documents to return. Defaults to 4.
Returns:
List of (Document, score), the most similar to the query vector.
"""
return [
(doc, score)
for (doc, score, docId) in self.similarity_search_with_score_id_by_vector(
embedding=embedding,
k=k,
)
]
[docs] def similarity_search(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Document]:
#
embedding_vector = self.embedding.embed_query(query)
return self.similarity_search_by_vector(
embedding_vector,
k,
**kwargs,
)
[docs] def similarity_search_by_vector(
self,
embedding: List[float],
k: int = 4,
**kwargs: Any,
) -> List[Document]:
return [
doc
for doc, _ in self.similarity_search_with_score_by_vector(
embedding,
k,
)
]
[docs] def similarity_search_with_score(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
embedding_vector = self.embedding.embed_query(query)
return self.similarity_search_with_score_by_vector(
embedding_vector,
k,
)
# Even though this is a `_`-method,
# it is used by the VectorStore parent class
# in an exposed method (`similarity_search_with_relevance_scores`),
# so we implement it here.
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
return self.similarity_search_with_score(
query,
k,
**kwargs,
)
[docs] def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Returns:
List of Documents selected by maximal marginal relevance.
"""
prefetchHits = self.table.search(
embedding_vector=embedding,
top_k=fetch_k,
metric="cos",
metric_threshold=None,
)
# let the mmr utility pick the *indices* in the above array
mmrChosenIndices = maximal_marginal_relevance(
np.array(embedding, dtype=np.float32),
[pfHit["embedding_vector"] for pfHit in prefetchHits],
k=k,
lambda_mult=lambda_mult,
)
mmrHits = [
pfHit
for pfIndex, pfHit in enumerate(prefetchHits)
if pfIndex in mmrChosenIndices
]
return [
Document(
page_content=hit["document"],
metadata=hit["metadata"],
)
for hit in mmrHits
]
[docs] def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Optional.
Returns:
List of Documents selected by maximal marginal relevance.
"""
embedding_vector = self.embedding.embed_query(query)
return self.max_marginal_relevance_search_by_vector(
embedding_vector,
k,
fetch_k,
lambda_mult=lambda_mult,
)
[docs] @classmethod
def from_texts(
cls: Type[CVST],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> CVST:
"""Create a Cassandra vectorstore from raw texts.
No support for specifying text IDs
Returns:
a Cassandra vectorstore.
"""
session: Session = kwargs["session"]
keyspace: str = kwargs["keyspace"]
table_name: str = kwargs["table_name"]
cassandraStore = cls(
embedding=embedding,
session=session,
keyspace=keyspace,
table_name=table_name,
)
cassandraStore.add_texts(texts=texts, metadatas=metadatas)
return cassandraStore
[docs] @classmethod
def from_documents(
cls: Type[CVST],
documents: List[Document],
embedding: Embeddings,
**kwargs: Any,
) -> CVST:
"""Create a Cassandra vectorstore from a document list.
No support for specifying text IDs
Returns:
a Cassandra vectorstore.
"""
texts = [doc.page_content for doc in documents]
metadatas = [doc.metadata for doc in documents]
session: Session = kwargs["session"]
keyspace: str = kwargs["keyspace"]
table_name: str = kwargs["table_name"]
return cls.from_texts(
texts=texts,
metadatas=metadatas,
embedding=embedding,
session=session,
keyspace=keyspace,
table_name=table_name,
)
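# ---------------------------------------------------------------------------
# Usage sketch (illustrative, not part of the module above). It assumes the
# ``cassio`` and ``cassandra-driver`` packages are installed and that a
# vector-search-capable Cassandra cluster is reachable at the contact point
# below; the keyspace and table name are placeholders.
from cassandra.cluster import Cluster
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Cassandra

def _cassandra_usage_example() -> None:
    session = Cluster(["127.0.0.1"]).connect()
    embeddings = OpenAIEmbeddings()
    store = Cassandra.from_texts(
        texts=["Cassandra can store embeddings", "cassIO manages the vector table"],
        embedding=embeddings,
        session=session,
        keyspace="my_keyspace",
        table_name="my_doc_archive",
    )
    docs = store.similarity_search("embeddings", k=2)
    print([d.page_content for d in docs])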
Source code for langchain.vectorstores.lancedb
"""Wrapper around LanceDB vector database"""
from __future__ import annotations
import uuid
from typing import Any, Iterable, List, Optional
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
[docs]class LanceDB(VectorStore):
"""Wrapper around LanceDB vector database.
To use, you should have ``lancedb`` python package installed.
Example:
.. code-block:: python
import lancedb
db = lancedb.connect('./lancedb')
table = db.open_table('my_table')
vectorstore = LanceDB(table, embedding_function)
vectorstore.add_texts(['text1', 'text2'])
result = vectorstore.similarity_search('text1')
"""
def __init__(
self,
connection: Any,
embedding: Embeddings,
vector_key: Optional[str] = "vector",
id_key: Optional[str] = "id",
text_key: Optional[str] = "text",
):
"""Initialize with Lance DB connection"""
try:
import lancedb
except ImportError:
raise ValueError(
"Could not import lancedb python package. "
"Please install it with `pip install lancedb`."
)
if not isinstance(connection, lancedb.db.LanceTable):
raise ValueError(
"connection should be an instance of lancedb.db.LanceTable, ",
f"got {type(connection)}",
)
self._connection = connection
self._embedding = embedding
self._vector_key = vector_key
self._id_key = id_key
self._text_key = text_key
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Turn texts into embedding and add it to the database
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
ids: Optional list of ids to associate with the texts.
Returns:
List of ids of the added texts.
"""
# Embed texts and create documents
docs = []
ids = ids or [str(uuid.uuid4()) for _ in texts]
embeddings = self._embedding.embed_documents(list(texts))
for idx, text in enumerate(texts):
embedding = embeddings[idx]
metadata = metadatas[idx] if metadatas else {}
docs.append(
{
self._vector_key: embedding,
self._id_key: ids[idx],
self._text_key: text,
**metadata,
}
)
self._connection.add(docs)
return ids
[docs] def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return documents most similar to the query
Args:
query: String to query the vectorstore with.
k: Number of documents to return.
Returns:
List of documents most similar to the query.
"""
embedding = self._embedding.embed_query(query)
docs = self._connection.search(embedding).limit(k).to_df()
return [
Document(
page_content=row[self._text_key],
metadata=row[docs.columns != self._text_key],
)
for _, row in docs.iterrows()
]
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
connection: Any = None,
vector_key: Optional[str] = "vector",
id_key: Optional[str] = "id",
text_key: Optional[str] = "text",
**kwargs: Any,
) -> LanceDB:
instance = LanceDB(
connection,
embedding,
vector_key,
id_key,
text_key,
)
instance.add_texts(texts, metadatas=metadatas, **kwargs)
return instance
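# ---------------------------------------------------------------------------
# Usage sketch (illustrative, not part of the module above). It assumes the
# ``lancedb`` package is installed and that './lancedb' already contains a
# table named 'my_table' whose schema has the default "vector", "id" and
# "text" columns expected by this wrapper; the names are placeholders.
import lancedb
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.lancedb import LanceDB

def _lancedb_usage_example() -> None:
    db = lancedb.connect("./lancedb")
    table = db.open_table("my_table")
    store = LanceDB(connection=table, embedding=OpenAIEmbeddings())
    store.add_texts(["LanceDB is an embedded vector database"])
    docs = store.similarity_search("embedded vector database", k=2)
    print([d.page_content for d in docs])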
Source code for langchain.vectorstores.sklearn
""" Wrapper around scikit-learn NearestNeighbors implementation.
The vector store can be persisted in json, bson or parquet format.
"""
import json
import math
import os
from abc import ABC, abstractmethod
from typing import Any, Dict, Iterable, List, Literal, Optional, Tuple, Type
from uuid import uuid4
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import guard_import
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
DEFAULT_K = 4 # Number of Documents to return.
DEFAULT_FETCH_K = 20 # Number of Documents to initially fetch during MMR search.
class BaseSerializer(ABC):
"""Abstract base class for saving and loading data."""
def __init__(self, persist_path: str) -> None:
self.persist_path = persist_path
@classmethod
@abstractmethod
def extension(cls) -> str:
"""The file extension suggested by this serializer (without dot)."""
@abstractmethod
def save(self, data: Any) -> None:
"""Saves the data to the persist_path"""
@abstractmethod
def load(self) -> Any:
"""Loads the data from the persist_path"""
class JsonSerializer(BaseSerializer):
"""Serializes data in json using the json package from python standard library."""
@classmethod
def extension(cls) -> str:
return "json"
def save(self, data: Any) -> None:
with open(self.persist_path, "w") as fp:
json.dump(data, fp)
def load(self) -> Any:
with open(self.persist_path, "r") as fp:
return json.load(fp)
class BsonSerializer(BaseSerializer):
"""Serializes data in binary json using the bson python package."""
def __init__(self, persist_path: str) -> None:
super().__init__(persist_path)
self.bson = guard_import("bson")
@classmethod
def extension(cls) -> str:
return "bson"
def save(self, data: Any) -> None:
with open(self.persist_path, "wb") as fp:
fp.write(self.bson.dumps(data))
def load(self) -> Any:
with open(self.persist_path, "rb") as fp:
return self.bson.loads(fp.read())
class ParquetSerializer(BaseSerializer):
"""Serializes data in Apache Parquet format using the pyarrow package."""
def __init__(self, persist_path: str) -> None:
super().__init__(persist_path)
self.pd = guard_import("pandas")
self.pa = guard_import("pyarrow")
self.pq = guard_import("pyarrow.parquet")
@classmethod
def extension(cls) -> str:
return "parquet"
def save(self, data: Any) -> None:
df = self.pd.DataFrame(data)
table = self.pa.Table.from_pandas(df)
if os.path.exists(self.persist_path):
backup_path = str(self.persist_path) + "-backup"
os.rename(self.persist_path, backup_path)
try:
self.pq.write_table(table, self.persist_path)
except Exception as exc:
os.rename(backup_path, self.persist_path)
raise exc
else:
os.remove(backup_path)
else:
self.pq.write_table(table, self.persist_path)
def load(self) -> Any:
table = self.pq.read_table(self.persist_path)
df = table.to_pandas()
return {col: series.tolist() for col, series in df.items()}
SERIALIZER_MAP: Dict[str, Type[BaseSerializer]] = {
"json": JsonSerializer,
"bson": BsonSerializer,
"parquet": ParquetSerializer,
}
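# Usage sketch (illustrative): persisting the store defined below to a JSON
# file and reloading it later. The path is hypothetical; "bson" or "parquet"
# can be chosen instead, provided the corresponding packages (``bson``,
# ``pandas`` + ``pyarrow``) are installed.
#
#   from langchain.embeddings.openai import OpenAIEmbeddings
#   store = SKLearnVectorStore(
#       embedding=OpenAIEmbeddings(),
#       persist_path="./vectors.json",
#       serializer="json",
#   )
#   store.add_texts(["scikit-learn backs this store"])
#   store.persist()
#   # A new instance pointed at the same file loads the saved vectors:
#   restored = SKLearnVectorStore(
#       embedding=OpenAIEmbeddings(), persist_path="./vectors.json"
#   )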
class SKLearnVectorStoreException(RuntimeError):
"""Exception raised by SKLearnVectorStore."""
pass
[docs]class SKLearnVectorStore(VectorStore):
"""A simple in-memory vector store based on the scikit-learn library
NearestNeighbors implementation."""
def __init__(
self,
embedding: Embeddings,
*,
persist_path: Optional[str] = None,
serializer: Literal["json", "bson", "parquet"] = "json",
metric: str = "cosine",
**kwargs: Any,
) -> None:
np = guard_import("numpy")
sklearn_neighbors = guard_import("sklearn.neighbors", pip_name="scikit-learn")
# non-persistent properties
self._np = np
self._neighbors = sklearn_neighbors.NearestNeighbors(metric=metric, **kwargs)
self._neighbors_fitted = False
self._embedding_function = embedding
self._persist_path = persist_path
self._serializer: Optional[BaseSerializer] = None
if self._persist_path is not None:
serializer_cls = SERIALIZER_MAP[serializer]
self._serializer = serializer_cls(persist_path=self._persist_path)
# data properties
self._embeddings: List[List[float]] = []
self._texts: List[str] = []
self._metadatas: List[dict] = []
self._ids: List[str] = []
# cache properties
self._embeddings_np: Any = np.asarray([])
if self._persist_path is not None and os.path.isfile(self._persist_path):
self._load()
[docs] def persist(self) -> None:
if self._serializer is None:
raise SKLearnVectorStoreException(
"You must specify a persist_path on creation to persist the "
"collection."
)
data = {
"ids": self._ids,
"texts": self._texts,
"metadatas": self._metadatas,
"embeddings": self._embeddings,
}
self._serializer.save(data)
def _load(self) -> None:
if self._serializer is None:
raise SKLearnVectorStoreException(
"You must specify a persist_path on creation to load the " "collection."
)
data = self._serializer.load()
self._embeddings = data["embeddings"]
self._texts = data["texts"]
self._metadatas = data["metadatas"]
self._ids = data["ids"]
self._update_neighbors()
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
_texts = list(texts)
_ids = ids or [str(uuid4()) for _ in _texts]
self._texts.extend(_texts)
self._embeddings.extend(self._embedding_function.embed_documents(_texts))
self._metadatas.extend(metadatas or ([{}] * len(_texts)))
self._ids.extend(_ids)
self._update_neighbors()
return _ids
def _update_neighbors(self) -> None:
if len(self._embeddings) == 0:
raise SKLearnVectorStoreException(
"No data was added to SKLearnVectorStore."
)
self._embeddings_np = self._np.asarray(self._embeddings)
self._neighbors.fit(self._embeddings_np)
self._neighbors_fitted = True
def _similarity_index_search_with_score(
self, query_embedding: List[float], *, k: int = DEFAULT_K, **kwargs: Any
) -> List[Tuple[int, float]]:
"""Search k embeddings similar to the query embedding. Returns a list of
(index, distance) tuples."""
if not self._neighbors_fitted:
raise SKLearnVectorStoreException(
"No data was added to SKLearnVectorStore."
)
neigh_dists, neigh_idxs = self._neighbors.kneighbors(
[query_embedding], n_neighbors=k
)
return list(zip(neigh_idxs[0], neigh_dists[0]))
[docs] def similarity_search_with_score(
self, query: str, *, k: int = DEFAULT_K, **kwargs: Any
) -> List[Tuple[Document, float]]:
query_embedding = self._embedding_function.embed_query(query)
indices_dists = self._similarity_index_search_with_score(
query_embedding, k=k, **kwargs
)
return [
(
Document(
page_content=self._texts[idx],
metadata={"id": self._ids[idx], **self._metadatas[idx]},
),
dist,
)
for idx, dist in indices_dists
]
[docs] def similarity_search(
self, query: str, k: int = DEFAULT_K, **kwargs: Any
) -> List[Document]:
docs_scores = self.similarity_search_with_score(query, k=k, **kwargs)
return [doc for doc, _ in docs_scores]
def _similarity_search_with_relevance_scores(
self, query: str, k: int = DEFAULT_K, **kwargs: Any
) -> List[Tuple[Document, float]]:
docs_dists = self.similarity_search_with_score(query, k=k, **kwargs)
docs, dists = zip(*docs_dists)
scores = [1 / math.exp(dist) for dist in dists]
return list(zip(list(docs), scores))
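# Note (illustrative): the conversion above maps a distance d to a relevance score of
# exp(-d), so a distance of 0.0 yields 1.0 and larger distances decay towards 0,
# e.g. [1 / math.exp(d) for d in (0.0, 1.0, 2.0)] is roughly [1.0, 0.368, 0.135].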
[docs] def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = DEFAULT_K,
fetch_k: int = DEFAULT_FETCH_K,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
indices_dists = self._similarity_index_search_with_score(
embedding, k=fetch_k, **kwargs
)
indices, _ = zip(*indices_dists)
result_embeddings = self._embeddings_np[indices,]
mmr_selected = maximal_marginal_relevance(
self._np.array(embedding, dtype=self._np.float32),
result_embeddings,
k=k,
lambda_mult=lambda_mult,
)
mmr_indices = [indices[i] for i in mmr_selected]
return [
Document(
page_content=self._texts[idx],
metadata={"id": self._ids[idx], **self._metadatas[idx]},
)
for idx in mmr_indices
]
[docs] def max_marginal_relevance_search(
self,
query: str,
k: int = DEFAULT_K,
fetch_k: int = DEFAULT_FETCH_K,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
if self._embedding_function is None:
raise ValueError(
"For MMR search, you must specify an embedding function on creation."
)
embedding = self._embedding_function.embed_query(query)
docs = self.max_marginal_relevance_search_by_vector(
embedding, k, fetch_k, lambda_mult=lambda_mult
)
return docs
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
persist_path: Optional[str] = None,
**kwargs: Any,
) -> "SKLearnVectorStore":
vs = SKLearnVectorStore(embedding, persist_path=persist_path, **kwargs)
vs.add_texts(texts, metadatas=metadatas, ids=ids)
return vs | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html |
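A minimal usage sketch for the store above (illustrative, not from the source): it assumes scikit-learn is installed, an OPENAI_API_KEY is configured for the embeddings, and that the import path and persist path shown are placeholders.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SKLearnVectorStore
store = SKLearnVectorStore.from_texts(
    ["apples are red", "the sky is blue"],
    OpenAIEmbeddings(),
    persist_path="/tmp/sklearn_store.json",
    serializer="json",
)
docs = store.similarity_search("what color is the sky?", k=1)
store.persist()  # writes ids, texts, metadatas and embeddings to the persist path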
dc38a84a-1164-4ee2-bc0e-75f3d25ef9b9 | Source code for langchain.vectorstores.analyticdb
"""VectorStore wrapper around a Postgres/PGVector database."""
from __future__ import annotations
import logging
import uuid
from typing import Any, Dict, Iterable, List, Optional, Sequence, Tuple, Type
from sqlalchemy import REAL, Column, String, Table, create_engine, insert, text
from sqlalchemy.dialects.postgresql import ARRAY, JSON, TEXT
from sqlalchemy.engine import Row
try:
from sqlalchemy.orm import declarative_base
except ImportError:
from sqlalchemy.ext.declarative import declarative_base
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore
_LANGCHAIN_DEFAULT_EMBEDDING_DIM = 1536
_LANGCHAIN_DEFAULT_COLLECTION_NAME = "langchain_document"
Base = declarative_base() # type: Any
[docs]class AnalyticDB(VectorStore):
"""VectorStore implementation using AnalyticDB.
AnalyticDB is a distributed, cloud-native database with full PostgreSQL syntax support.
- `connection_string` is a postgres connection string.
- `embedding_function` any embedding function implementing
`langchain.embeddings.base.Embeddings` interface.
- `collection_name` is the name of the collection to use. (default: langchain_document)
- NOTE: This is not the name of the table, but the name of the collection.
The table will be created when initializing the store (if it does not exist),
so make sure the user has the right permissions to create tables.
- `pre_delete_collection` if True, will delete the collection if it exists.
(default: False)
- Useful for testing.
"""
def __init__(
self,
connection_string: str,
embedding_function: Embeddings,
embedding_dimension: int = _LANGCHAIN_DEFAULT_EMBEDDING_DIM,
collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
pre_delete_collection: bool = False,
logger: Optional[logging.Logger] = None,
) -> None:
self.connection_string = connection_string
self.embedding_function = embedding_function
self.embedding_dimension = embedding_dimension
self.collection_name = collection_name
self.pre_delete_collection = pre_delete_collection
self.logger = logger or logging.getLogger(__name__)
self.__post_init__()
def __post_init__(
self,
) -> None:
"""
Initialize the store.
"""
self.engine = create_engine(self.connection_string)
self.create_collection()
[docs] def create_table_if_not_exists(self) -> None:
# Define the dynamic table
Table(
self.collection_name,
Base.metadata,
Column("id", TEXT, primary_key=True, default=uuid.uuid4),
Column("embedding", ARRAY(REAL)),
Column("document", String, nullable=True),
Column("metadata", JSON, nullable=True),
extend_existing=True,
)
with self.engine.connect() as conn:
with conn.begin():
# Create the table
Base.metadata.create_all(conn)
# Check if the index exists
index_name = f"{self.collection_name}_embedding_idx"
index_query = text(
f"""
SELECT 1
FROM pg_indexes
WHERE indexname = '{index_name}';
"""
)
result = conn.execute(index_query).scalar()
# Create the index if it doesn't exist
if not result:
index_statement = text(
f"""
CREATE INDEX {index_name}
ON {self.collection_name} USING ann(embedding)
WITH (
"dim" = {self.embedding_dimension},
"hnsw_m" = 100
);
"""
)
conn.execute(index_statement)
[docs] def create_collection(self) -> None:
if self.pre_delete_collection:
self.delete_collection()
self.create_table_if_not_exists()
[docs] def delete_collection(self) -> None:
self.logger.debug("Trying to delete collection")
drop_statement = text(f"DROP TABLE IF EXISTS {self.collection_name};")
with self.engine.connect() as conn:
with conn.begin():
conn.execute(drop_statement)
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
batch_size: int = 500,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
kwargs: vectorstore specific parameters
Returns:
List of ids from adding the texts into the vectorstore.
"""
if ids is None:
ids = [str(uuid.uuid1()) for _ in texts]
embeddings = self.embedding_function.embed_documents(list(texts))
if not metadatas:
metadatas = [{} for _ in texts]
# Define the table schema
chunks_table = Table(
self.collection_name,
Base.metadata,
Column("id", TEXT, primary_key=True),
Column("embedding", ARRAY(REAL)),
Column("document", String, nullable=True),
Column("metadata", JSON, nullable=True),
extend_existing=True,
)
chunks_table_data = []
with self.engine.connect() as conn:
with conn.begin():
for document, metadata, chunk_id, embedding in zip(
texts, metadatas, ids, embeddings
):
chunks_table_data.append(
{
"id": chunk_id,
"embedding": embedding,
"document": document,
"metadata": metadata,
}
)
# Execute the batch insert when the batch size is reached
if len(chunks_table_data) == batch_size:
conn.execute(insert(chunks_table).values(chunks_table_data))
# Clear the chunks_table_data list for the next batch
chunks_table_data.clear()
# Insert any remaining records that didn't make up a full batch
if chunks_table_data:
conn.execute(insert(chunks_table).values(chunks_table_data))
return ids
[docs] def similarity_search(
self,
query: str,
k: int = 4,
filter: Optional[dict] = None,
**kwargs: Any,
) -> List[Document]:
"""Run a similarity search against AnalyticDB, ranking results by distance.
Args:
query (str): Query text to search for.
k (int): Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List of Documents most similar to the query.
"""
embedding = self.embedding_function.embed_query(text=query)
return self.similarity_search_by_vector(
embedding=embedding,
k=k,
filter=filter,
)
[docs] def similarity_search_with_score(
self,
query: str,
k: int = 4,
filter: Optional[dict] = None,
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List of Documents most similar to the query and score for each
"""
embedding = self.embedding_function.embed_query(query)
docs = self.similarity_search_with_score_by_vector(
embedding=embedding, k=k, filter=filter
)
return docs
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Args:
query: input text
k: Number of Documents to return. Defaults to 4.
**kwargs: kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns:
List of Tuples of (doc, similarity_score)
"""
return self.similarity_search_with_score(query, k, **kwargs)
[docs] def similarity_search_with_score_by_vector(
self,
embedding: List[float],
k: int = 4,
filter: Optional[dict] = None,
) -> List[Tuple[Document, float]]:
# Add the filter if provided
filter_condition = ""
if filter is not None:
conditions = [
f"metadata->>{key!r} = {value!r}" for key, value in filter.items()
]
filter_condition = f"WHERE {' AND '.join(conditions)}"
# Define the base query
sql_query = f"""
SELECT *, l2_distance(embedding, :embedding) as distance
FROM {self.collection_name}
{filter_condition}
ORDER BY embedding <-> :embedding
LIMIT :k
"""
# Set up the query parameters
params = {"embedding": embedding, "k": k}
# Execute the query and fetch the results
with self.engine.connect() as conn:
results: Sequence[Row] = conn.execute(text(sql_query), params).fetchall()
documents_with_scores = [
(
Document(
page_content=result.document,
metadata=result.metadata,
),
result.distance if self.embedding_function is not None else None,
)
for result in results
]
return documents_with_scores
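# Note (illustrative): a filter of {"source": "news"} is rendered by the code above as
# the SQL condition "WHERE metadata->>'source' = 'news'", i.e. metadata keys and values
# are compared as text via the ->> JSON operator.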
[docs] def similarity_search_by_vector(
self,
embedding: List[float],
k: int = 4,
filter: Optional[dict] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to embedding vector.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List of Documents most similar to the query vector.
"""
docs_and_scores = self.similarity_search_with_score_by_vector(
embedding=embedding, k=k, filter=filter
)
return [doc for doc, _ in docs_and_scores]
[docs] @classmethod
def from_texts(
cls: Type[AnalyticDB],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
embedding_dimension: int = _LANGCHAIN_DEFAULT_EMBEDDING_DIM,
collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
ids: Optional[List[str]] = None,
pre_delete_collection: bool = False,
**kwargs: Any,
) -> AnalyticDB:
"""
Return VectorStore initialized from texts and embeddings.
Postgres Connection string is required
Either pass it as a parameter
or set the PG_CONNECTION_STRING environment variable.
"""
connection_string = cls.get_connection_string(kwargs)
store = cls(
connection_string=connection_string,
collection_name=collection_name,
embedding_function=embedding,
embedding_dimension=embedding_dimension,
pre_delete_collection=pre_delete_collection,
)
store.add_texts(texts=texts, metadatas=metadatas, ids=ids, **kwargs)
return store
[docs] @classmethod
def get_connection_string(cls, kwargs: Dict[str, Any]) -> str:
connection_string: str = get_from_dict_or_env(
data=kwargs,
key="connection_string",
env_key="PG_CONNECTION_STRING",
)
if not connection_string:
raise ValueError(
"Postgres connection string is required. "
"Either pass it as a parameter "
"or set the PG_CONNECTION_STRING environment variable."
)
return connection_string
[docs] @classmethod
def from_documents(
cls: Type[AnalyticDB],
documents: List[Document],
embedding: Embeddings,
embedding_dimension: int = _LANGCHAIN_DEFAULT_EMBEDDING_DIM,
collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
ids: Optional[List[str]] = None,
pre_delete_collection: bool = False,
**kwargs: Any,
) -> AnalyticDB:
"""
Return VectorStore initialized from documents and embeddings.
Postgres Connection string is required
Either pass it as a parameter
or set the PG_CONNECTION_STRING environment variable.
"""
texts = [d.page_content for d in documents]
metadatas = [d.metadata for d in documents]
connection_string = cls.get_connection_string(kwargs)
kwargs["connection_string"] = connection_string
return cls.from_texts(
texts=texts,
pre_delete_collection=pre_delete_collection,
embedding=embedding,
embedding_dimension=embedding_dimension,
metadatas=metadatas,
ids=ids,
collection_name=collection_name,
**kwargs,
)
[docs] @classmethod
def connection_string_from_db_params(
cls,
driver: str,
host: str,
port: int,
database: str,
user: str,
password: str,
) -> str:
"""Return connection string from database parameters."""
return f"postgresql+{driver}://{user}:{password}@{host}:{port}/{database}" | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html |
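A minimal usage sketch (illustrative): the connection details below are placeholders and the import path is assumed; it requires a reachable AnalyticDB/PostgreSQL-compatible instance and an OPENAI_API_KEY for the embeddings.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import AnalyticDB
conn_str = AnalyticDB.connection_string_from_db_params(
    driver="psycopg2",
    host="localhost",
    port=5432,
    database="vectors",
    user="postgres",
    password="postgres",
)
store = AnalyticDB.from_texts(
    ["alpha", "beta"],
    OpenAIEmbeddings(),
    connection_string=conn_str,
)
docs = store.similarity_search("alpha", k=1)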
31effff9-3e6c-44d4-a659-a0c7bad8fff4 | Source code for langchain.vectorstores.opensearch_vector_search
"""Wrapper around OpenSearch vector database."""
from __future__ import annotations
import uuid
from typing import Any, Dict, Iterable, List, Optional, Tuple
import numpy as np
from langchain.embeddings.base import Embeddings
from langchain.schema import Document
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
IMPORT_OPENSEARCH_PY_ERROR = (
"Could not import OpenSearch. Please install it with `pip install opensearch-py`."
)
SCRIPT_SCORING_SEARCH = "script_scoring"
PAINLESS_SCRIPTING_SEARCH = "painless_scripting"
MATCH_ALL_QUERY = {"match_all": {}} # type: Dict
def _import_opensearch() -> Any:
"""Import OpenSearch if available, otherwise raise error."""
try:
from opensearchpy import OpenSearch
except ImportError:
raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)
return OpenSearch
def _import_bulk() -> Any:
"""Import bulk if available, otherwise raise error."""
try:
from opensearchpy.helpers import bulk
except ImportError:
raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)
return bulk
def _import_not_found_error() -> Any:
"""Import not found error if available, otherwise raise error."""
try:
from opensearchpy.exceptions import NotFoundError
except ImportError:
raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)
return NotFoundError
def _get_opensearch_client(opensearch_url: str, **kwargs: Any) -> Any:
"""Get OpenSearch client from the opensearch_url, otherwise raise error."""
try:
opensearch = _import_opensearch()
client = opensearch(opensearch_url, **kwargs)
except ValueError as e:
raise ValueError(
f"OpenSearch client string provided is not in proper format. "
f"Got error: {e} "
)
return client
def _validate_embeddings_and_bulk_size(embeddings_length: int, bulk_size: int) -> None:
"""Validate Embeddings Length and Bulk Size."""
if embeddings_length == 0:
raise RuntimeError("Embeddings size is zero")
if bulk_size < embeddings_length:
raise RuntimeError(
f"The embeddings count, {embeddings_length} is more than the "
f"[bulk_size], {bulk_size}. Increase the value of [bulk_size]."
)
def _bulk_ingest_embeddings(
client: Any,
index_name: str,
embeddings: List[List[float]],
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
vector_field: str = "vector_field",
text_field: str = "text",
mapping: Optional[Dict] = None,
) -> List[str]:
"""Bulk Ingest Embeddings into given index."""
if not mapping:
mapping = dict()
bulk = _import_bulk()
not_found_error = _import_not_found_error()
requests = []
return_ids = []
try:
client.indices.get(index=index_name)
except not_found_error:
client.indices.create(index=index_name, body=mapping)
for i, text in enumerate(texts):
metadata = metadatas[i] if metadatas else {}
_id = ids[i] if ids else str(uuid.uuid4())
request = {
"_op_type": "index",
"_index": index_name,
vector_field: embeddings[i],
text_field: text,
"metadata": metadata,
"_id": _id,
}
requests.append(request)
return_ids.append(_id)
bulk(client, requests)
client.indices.refresh(index=index_name)
return return_ids
def _default_scripting_text_mapping(
dim: int,
vector_field: str = "vector_field",
) -> Dict:
"""For Painless Scripting or Script Scoring, the default mapping used to create the index."""
return {
"mappings": {
"properties": {
vector_field: {"type": "knn_vector", "dimension": dim},
}
}
}
def _default_text_mapping(
dim: int,
engine: str = "nmslib",
space_type: str = "l2",
ef_search: int = 512,
ef_construction: int = 512,
m: int = 16,
vector_field: str = "vector_field",
) -> Dict:
"""For Approximate k-NN Search, this is the default mapping to create index."""
return {
"settings": {"index": {"knn": True, "knn.algo_param.ef_search": ef_search}},
"mappings": {
"properties": {
vector_field: {
"type": "knn_vector",
"dimension": dim,
"method": {
"name": "hnsw",
"space_type": space_type,
"engine": engine,
"parameters": {"ef_construction": ef_construction, "m": m},
},
}
}
},
}
def _default_approximate_search_query(
query_vector: List[float],
k: int = 4,
vector_field: str = "vector_field",
) -> Dict:
"""For Approximate k-NN Search, this is the default query."""
return {
"size": k,
"query": {"knn": {vector_field: {"vector": query_vector, "k": k}}},
}
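# Note (illustrative): for a query vector [0.1, 0.2] and k=2 the helper above returns
# {"size": 2, "query": {"knn": {"vector_field": {"vector": [0.1, 0.2], "k": 2}}}}.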
def _approximate_search_query_with_boolean_filter(
query_vector: List[float],
boolean_filter: Dict,
k: int = 4,
vector_field: str = "vector_field",
subquery_clause: str = "must",
) -> Dict:
"""For Approximate k-NN Search, with Boolean Filter."""
return {
"size": k,
"query": {
"bool": {
"filter": boolean_filter,
subquery_clause: [
{"knn": {vector_field: {"vector": query_vector, "k": k}}}
],
}
},
}
def _approximate_search_query_with_lucene_filter(
query_vector: List[float],
lucene_filter: Dict,
k: int = 4,
vector_field: str = "vector_field",
) -> Dict:
"""For Approximate k-NN Search, with Lucene Filter."""
search_query = _default_approximate_search_query(
query_vector, k=k, vector_field=vector_field
)
search_query["query"]["knn"][vector_field]["filter"] = lucene_filter
return search_query
def _default_script_query(
query_vector: List[float],
space_type: str = "l2",
pre_filter: Optional[Dict] = None,
vector_field: str = "vector_field",
) -> Dict:
"""For Script Scoring Search, this is the default query."""
if not pre_filter:
pre_filter = MATCH_ALL_QUERY
return {
"query": {
"script_score": {
"query": pre_filter,
"script": {
"source": "knn_score",
"lang": "knn",
"params": {
"field": vector_field,
"query_value": query_vector,
"space_type": space_type,
},
},
}
}
}
def __get_painless_scripting_source(
space_type: str, query_vector: List[float], vector_field: str = "vector_field"
) -> str:
"""For Painless Scripting, it returns the script source based on space type."""
source_value = (
"(1.0 + "
+ space_type
+ "("
+ str(query_vector)
+ ", doc['"
+ vector_field
+ "']))"
)
if space_type == "cosineSimilarity":
return source_value
else:
return "1/" + source_value
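# Note (illustrative): __get_painless_scripting_source("l2Squared", [0.1, 0.2]) returns
# "1/(1.0 + l2Squared([0.1, 0.2], doc['vector_field']))", whereas the "cosineSimilarity"
# space type returns the un-inverted "(1.0 + cosineSimilarity(...))" form.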
def _default_painless_scripting_query(
query_vector: List[float],
space_type: str = "l2Squared",
pre_filter: Optional[Dict] = None,
vector_field: str = "vector_field",
) -> Dict:
"""For Painless Scripting Search, this is the default query."""
if not pre_filter:
pre_filter = MATCH_ALL_QUERY
source = __get_painless_scripting_source(space_type, query_vector)
return {
"query": {
"script_score": {
"query": pre_filter,
"script": {
"source": source,
"params": {
"field": vector_field,
"query_value": query_vector,
},
},
}
}
}
def _get_kwargs_value(kwargs: Any, key: str, default_value: Any) -> Any:
"""Get the value of the key if present. Else get the default_value."""
if key in kwargs:
return kwargs.get(key)
return default_value
[docs]class OpenSearchVectorSearch(VectorStore):
"""Wrapper around OpenSearch as a vector database.
Example:
.. code-block:: python
from langchain import OpenSearchVectorSearch
opensearch_vector_search = OpenSearchVectorSearch(
"http://localhost:9200",
"embeddings",
embedding_function
)
"""
def __init__(
self,
opensearch_url: str,
index_name: str,
embedding_function: Embeddings,
**kwargs: Any,
):
"""Initialize with necessary components."""
self.embedding_function = embedding_function
self.index_name = index_name
self.client = _get_opensearch_client(opensearch_url, **kwargs)
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
bulk_size: int = 500,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
ids: Optional list of ids to associate with the texts.
bulk_size: Bulk API request count; Default: 500
Returns:
List of ids from adding the texts into the vectorstore.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
"""
embeddings = self.embedding_function.embed_documents(list(texts))
_validate_embeddings_and_bulk_size(len(embeddings), bulk_size)
text_field = _get_kwargs_value(kwargs, "text_field", "text")
dim = len(embeddings[0])
engine = _get_kwargs_value(kwargs, "engine", "nmslib")
space_type = _get_kwargs_value(kwargs, "space_type", "l2")
ef_search = _get_kwargs_value(kwargs, "ef_search", 512)
ef_construction = _get_kwargs_value(kwargs, "ef_construction", 512)
m = _get_kwargs_value(kwargs, "m", 16)
vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
mapping = _default_text_mapping(
dim, engine, space_type, ef_search, ef_construction, m, vector_field
)
return _bulk_ingest_embeddings(
self.client,
self.index_name,
embeddings,
texts,
metadatas=metadatas,
ids=ids,
vector_field=vector_field,
text_field=text_field,
mapping=mapping,
)
[docs] def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query.
By default, supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
metadata_field: Document field that metadata is stored in. Defaults to
"metadata".
Can be set to a special value "*" to include the entire document.
Optional Args for Approximate Search:
search_type: "approximate_search"; default: "approximate_search"
boolean_filter: A Boolean filter consists of a Boolean query that
contains a k-NN query and a filter.
subquery_clause: Query clause on the knn vector field; default: "must"
lucene_filter: the Lucene algorithm decides whether to perform an exact
k-NN search with pre-filtering or an approximate search with modified
post-filtering.
Optional Args for Script Scoring Search:
search_type: "script_scoring"; default: "approximate_search"
space_type: "l2", "l1", "linf", "cosinesimil", "innerproduct",
"hammingbit"; default: "l2"
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {"match_all": {}}
Optional Args for Painless Scripting Search:
search_type: "painless_scripting"; default: "approximate_search"
space_type: "l2Squared", "l1Norm", "cosineSimilarity"; default: "l2Squared"
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {"match_all": {}}
"""
docs_with_scores = self.similarity_search_with_score(query, k, **kwargs)
return [doc[0] for doc in docs_with_scores]
[docs] def similarity_search_with_score(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Tuple[Document, float]]:
"""Return docs and their scores most similar to the query.
By default, supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents, along with their scores, most similar to the query.
Optional Args:
same as `similarity_search`
"""
text_field = _get_kwargs_value(kwargs, "text_field", "text")
metadata_field = _get_kwargs_value(kwargs, "metadata_field", "metadata")
hits = self._raw_similarity_search_with_score(query=query, k=k, **kwargs)
documents_with_scores = [
(
Document(
page_content=hit["_source"][text_field],
metadata=hit["_source"]
if metadata_field == "*" or metadata_field not in hit["_source"]
else hit["_source"][metadata_field],
),
hit["_score"],
)
for hit in hits
]
return documents_with_scores
def _raw_similarity_search_with_score(
self, query: str, k: int = 4, **kwargs: Any
) -> List[dict]:
"""Return the raw OpenSearch documents (dicts), including vectors and
scores, most similar to the query.
By default, supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of dict with its scores most similar to the query.
Optional Args:
same as `similarity_search`
"""
embedding = self.embedding_function.embed_query(query)
search_type = _get_kwargs_value(kwargs, "search_type", "approximate_search")
vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
if search_type == "approximate_search":
boolean_filter = _get_kwargs_value(kwargs, "boolean_filter", {})
subquery_clause = _get_kwargs_value(kwargs, "subquery_clause", "must")
lucene_filter = _get_kwargs_value(kwargs, "lucene_filter", {})
if boolean_filter != {} and lucene_filter != {}:
raise ValueError(
"Both `boolean_filter` and `lucene_filter` are provided which "
"is invalid"
)
if boolean_filter != {}:
search_query = _approximate_search_query_with_boolean_filter(
embedding,
boolean_filter,
k=k,
vector_field=vector_field,
subquery_clause=subquery_clause,
)
elif lucene_filter != {}:
search_query = _approximate_search_query_with_lucene_filter(
embedding, lucene_filter, k=k, vector_field=vector_field
)
else:
search_query = _default_approximate_search_query(
embedding, k=k, vector_field=vector_field
)
elif search_type == SCRIPT_SCORING_SEARCH:
space_type = _get_kwargs_value(kwargs, "space_type", "l2")
pre_filter = _get_kwargs_value(kwargs, "pre_filter", MATCH_ALL_QUERY)
search_query = _default_script_query(
embedding, space_type, pre_filter, vector_field
)
elif search_type == PAINLESS_SCRIPTING_SEARCH:
space_type = _get_kwargs_value(kwargs, "space_type", "l2Squared")
pre_filter = _get_kwargs_value(kwargs, "pre_filter", MATCH_ALL_QUERY)
search_query = _default_painless_scripting_query(
embedding, space_type, pre_filter, vector_field
)
else:
raise ValueError("Invalid `search_type` provided as an argument")
response = self.client.search(index=self.index_name, body=search_query)
return [hit for hit in response["hits"]["hits"][:k]]
[docs] def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> list[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
text_field = _get_kwargs_value(kwargs, "text_field", "text")
metadata_field = _get_kwargs_value(kwargs, "metadata_field", "metadata")
# Get embedding of the user query
embedding = self.embedding_function.embed_query(query)
# Do ANN/KNN search to get top fetch_k results where fetch_k >= k
results = self._raw_similarity_search_with_score(query, fetch_k, **kwargs)
embeddings = [result["_source"][vector_field] for result in results]
# Rerank top k results using MMR, (mmr_selected is a list of indices)
mmr_selected = maximal_marginal_relevance(
np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult
)
return [
Document(
page_content=results[i]["_source"][text_field],
metadata=results[i]["_source"][metadata_field],
)
for i in mmr_selected
]
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
bulk_size: int = 500,
**kwargs: Any,
) -> OpenSearchVectorSearch:
"""Construct OpenSearchVectorSearch wrapper from raw documents.
Example:
.. code-block:: python
from langchain import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
opensearch_vector_search = OpenSearchVectorSearch.from_texts(
texts,
embeddings,
opensearch_url="http://localhost:9200"
)
By default, OpenSearch supports Approximate Search powered by the nmslib, faiss
and lucene engines, which are recommended for large datasets. It also supports
brute-force search through Script Scoring and Painless Scripting.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
Optional Keyword Args for Approximate Search:
engine: "nmslib", "faiss", "lucene"; default: "nmslib"
space_type: "l2", "l1", "cosinesimil", "linf", "innerproduct"; default: "l2"
ef_search: Size of the dynamic list used during k-NN searches. Higher values
lead to more accurate but slower searches; default: 512
ef_construction: Size of the dynamic list used during k-NN graph creation.
Higher values lead to more accurate graph but slower indexing speed;
default: 512
m: Number of bidirectional links created for each new element. Large impact
on memory consumption. Between 2 and 100; default: 16
Keyword Args for Script Scoring or Painless Scripting:
is_appx_search: False
"""
opensearch_url = get_from_dict_or_env(
kwargs, "opensearch_url", "OPENSEARCH_URL"
)
# List of arguments that needs to be removed from kwargs
# before passing kwargs to get opensearch client
keys_list = [
"opensearch_url",
"index_name",
"is_appx_search",
"vector_field",
"text_field",
"engine",
"space_type",
"ef_search",
"ef_construction",
"m",
]
embeddings = embedding.embed_documents(texts)
_validate_embeddings_and_bulk_size(len(embeddings), bulk_size)
dim = len(embeddings[0])
# Get the index name from either from kwargs or ENV Variable
# before falling back to random generation
index_name = get_from_dict_or_env(
kwargs, "index_name", "OPENSEARCH_INDEX_NAME", default=uuid.uuid4().hex
)
is_appx_search = _get_kwargs_value(kwargs, "is_appx_search", True)
vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
text_field = _get_kwargs_value(kwargs, "text_field", "text")
if is_appx_search:
engine = _get_kwargs_value(kwargs, "engine", "nmslib")
space_type = _get_kwargs_value(kwargs, "space_type", "l2")
ef_search = _get_kwargs_value(kwargs, "ef_search", 512)
ef_construction = _get_kwargs_value(kwargs, "ef_construction", 512)
m = _get_kwargs_value(kwargs, "m", 16)
mapping = _default_text_mapping(
dim, engine, space_type, ef_search, ef_construction, m, vector_field
)
else:
mapping = _default_scripting_text_mapping(dim)
[kwargs.pop(key, None) for key in keys_list]
client = _get_opensearch_client(opensearch_url, **kwargs)
_bulk_ingest_embeddings(
client,
index_name,
embeddings,
texts,
metadatas=metadatas,
vector_field=vector_field,
text_field=text_field,
mapping=mapping,
)
return cls(opensearch_url, index_name, embedding, **kwargs) | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html |
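A minimal usage sketch (illustrative): the cluster URL and index name are placeholders, opensearch-py must be installed, and an OPENAI_API_KEY is assumed for the embeddings.
from langchain import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
docsearch = OpenSearchVectorSearch.from_texts(
    ["alpha", "beta", "gamma"],
    OpenAIEmbeddings(),
    opensearch_url="http://localhost:9200",
    index_name="demo-index",
    engine="faiss",
)
# Approximate k-NN search (the default), then a brute-force script-scoring search.
docs = docsearch.similarity_search("alpha", k=2)
docs = docsearch.similarity_search("alpha", k=2, search_type="script_scoring", space_type="cosinesimil")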
2add156e-e6f3-4741-96e0-0478fbeeff90 | Source code for langchain.vectorstores.faiss
"""Wrapper around FAISS vector database."""
from __future__ import annotations
import math
import os
import pickle
import uuid
from pathlib import Path
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple
import numpy as np
from langchain.docstore.base import AddableMixin, Docstore
from langchain.docstore.document import Document
from langchain.docstore.in_memory import InMemoryDocstore
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
def dependable_faiss_import(no_avx2: Optional[bool] = None) -> Any:
"""
Import faiss if available, otherwise raise error.
If FAISS_NO_AVX2 environment variable is set, it will be considered
to load FAISS with no AVX2 optimization.
Args:
no_avx2: Load FAISS strictly with no AVX2 optimization
so that the vectorstore is portable and compatible with other devices.
"""
if no_avx2 is None and "FAISS_NO_AVX2" in os.environ:
no_avx2 = bool(os.getenv("FAISS_NO_AVX2"))
try:
if no_avx2:
from faiss import swigfaiss as faiss
else:
import faiss
except ImportError:
raise ValueError(
"Could not import faiss python package. "
"Please install it with `pip install faiss` "
"or `pip install faiss-cpu` (depending on Python version)."
)
return faiss
def _default_relevance_score_fn(score: float) -> float:
"""Return a similarity score on a scale [0, 1]."""
# The 'correct' relevance function
# may differ depending on a few things, including:
# - the distance / similarity metric used by the VectorStore
# - the scale of your embeddings (OpenAI's are unit normed. Many others are not!)
# - embedding dimensionality
# - etc.
# This function converts the euclidean norm of normalized embeddings
# (0 is most similar, sqrt(2) most dissimilar)
# to a similarity function (0 to 1)
return 1.0 - score / math.sqrt(2)
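# Note (illustrative): with unit-normed embeddings the L2 distance lies in [0, sqrt(2)],
# so the function above maps a distance of 0.0 to a relevance score of 1.0 and a distance
# of sqrt(2) to 0.0.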
[docs]class FAISS(VectorStore):
"""Wrapper around FAISS vector database.
To use, you should have the ``faiss`` python package installed.
Example:
.. code-block:: python
from langchain import FAISS
faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)
"""
def __init__(
self,
embedding_function: Callable,
index: Any,
docstore: Docstore,
index_to_docstore_id: Dict[int, str],
relevance_score_fn: Optional[
Callable[[float], float]
] = _default_relevance_score_fn,
normalize_L2: bool = False,
):
"""Initialize with necessary components."""
self.embedding_function = embedding_function
self.index = index
self.docstore = docstore
self.index_to_docstore_id = index_to_docstore_id
self.relevance_score_fn = relevance_score_fn
self._normalize_L2 = normalize_L2
def __add(
self,
texts: Iterable[str],
embeddings: Iterable[List[float]],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
if not isinstance(self.docstore, AddableMixin):
raise ValueError(
"If trying to add texts, the underlying docstore should support "
f"adding items, which {self.docstore} does not"
)
documents = []
for i, text in enumerate(texts):
metadata = metadatas[i] if metadatas else {}
documents.append(Document(page_content=text, metadata=metadata))
if ids is None:
ids = [str(uuid.uuid4()) for _ in texts]
# Add to the index, the index_to_id mapping, and the docstore.
starting_len = len(self.index_to_docstore_id)
faiss = dependable_faiss_import()
vector = np.array(embeddings, dtype=np.float32)
if self._normalize_L2:
faiss.normalize_L2(vector)
self.index.add(vector)
# Get list of index, id, and docs.
full_info = [(starting_len + i, ids[i], doc) for i, doc in enumerate(documents)]
# Add information to docstore and index.
self.docstore.add({_id: doc for _, _id, doc in full_info})
index_to_id = {index: _id for index, _id, _ in full_info}
self.index_to_docstore_id.update(index_to_id)
return [_id for _, _id, _ in full_info]
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
ids: Optional list of unique IDs.
Returns:
List of ids from adding the texts into the vectorstore.
"""
if not isinstance(self.docstore, AddableMixin):
raise ValueError(
"If trying to add texts, the underlying docstore should support "
f"adding items, which {self.docstore} does not"
)
# Embed and create the documents.
embeddings = [self.embedding_function(text) for text in texts]
return self.__add(texts, embeddings, metadatas=metadatas, ids=ids, **kwargs)
[docs] def add_embeddings(
self,
text_embeddings: Iterable[Tuple[str, List[float]]],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
text_embeddings: Iterable pairs of string and embedding to
add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
ids: Optional list of unique IDs.
Returns:
List of ids from adding the texts into the vectorstore.
"""
if not isinstance(self.docstore, AddableMixin):
raise ValueError(
"If trying to add texts, the underlying docstore should support "
f"adding items, which {self.docstore} does not"
)
# Embed and create the documents.
texts, embeddings = zip(*text_embeddings)
return self.__add(texts, embeddings, metadatas=metadatas, ids=ids, **kwargs)
[docs] def similarity_search_with_score_by_vector(
self,
embedding: List[float],
k: int = 4,
filter: Optional[Dict[str, Any]] = None,
fetch_k: int = 20,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
embedding: Embedding vector to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, Any]]): Filter by metadata. Defaults to None.
fetch_k: (Optional[int]) Number of Documents to fetch before filtering.
Defaults to 20.
**kwargs: kwargs to be passed to similarity search. Can include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns:
List of documents most similar to the query text and L2 distance
in float for each. Lower score represents more similarity.
"""
faiss = dependable_faiss_import()
vector = np.array([embedding], dtype=np.float32)
if self._normalize_L2:
faiss.normalize_L2(vector)
scores, indices = self.index.search(vector, k if filter is None else fetch_k)
docs = []
for j, i in enumerate(indices[0]):
if i == -1:
# This happens when not enough docs are returned.
continue
_id = self.index_to_docstore_id[i]
doc = self.docstore.search(_id)
if not isinstance(doc, Document):
raise ValueError(f"Could not find document for id {_id}, got {doc}")
if filter is not None:
filter = {
key: [value] if not isinstance(value, list) else value
for key, value in filter.items()
}
if all(doc.metadata.get(key) in value for key, value in filter.items()):
docs.append((doc, scores[0][j]))
else:
docs.append((doc, scores[0][j]))
score_threshold = kwargs.get("score_threshold")
if score_threshold is not None:
docs = [
(doc, similarity)
for doc, similarity in docs
if similarity >= score_threshold
]
return docs[:k]
[docs] def similarity_search_with_score(
self,
query: str,
k: int = 4,
filter: Optional[Dict[str, Any]] = None,
fetch_k: int = 20,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
fetch_k: (Optional[int]) Number of Documents to fetch before filtering.
Defaults to 20.
Returns:
List of documents most similar to the query text with
L2 distance in float. Lower score represents more similarity.
"""
embedding = self.embedding_function(query)
docs = self.similarity_search_with_score_by_vector(
embedding,
k,
filter=filter,
fetch_k=fetch_k,
**kwargs,
)
return docs
[docs] def similarity_search_by_vector(
self,
embedding: List[float],
k: int = 4,
filter: Optional[Dict[str, Any]] = None,
fetch_k: int = 20,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to embedding vector.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
fetch_k: (Optional[int]) Number of Documents to fetch before filtering.
Defaults to 20.
Returns:
List of Documents most similar to the embedding.
"""
docs_and_scores = self.similarity_search_with_score_by_vector(
embedding,
k,
filter=filter,
fetch_k=fetch_k,
**kwargs,
)
return [doc for doc, _ in docs_and_scores]
[docs] def similarity_search(
self,
query: str,
k: int = 4,
filter: Optional[Dict[str, Any]] = None,
fetch_k: int = 20,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter: (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
fetch_k: (Optional[int]) Number of Documents to fetch before filtering.
Defaults to 20.
Returns:
List of Documents most similar to the query.
"""
docs_and_scores = self.similarity_search_with_score(
query, k, filter=filter, fetch_k=fetch_k, **kwargs
)
return [doc for doc, _ in docs_and_scores]
[docs] def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
filter: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch before filtering to
pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
_, indices = self.index.search(
np.array([embedding], dtype=np.float32),
fetch_k if filter is None else fetch_k * 2,
)
if filter is not None:
filtered_indices = []
for i in indices[0]:
if i == -1:
# This happens when not enough docs are returned.
continue
_id = self.index_to_docstore_id[i]
doc = self.docstore.search(_id)
if not isinstance(doc, Document):
raise ValueError(f"Could not find document for id {_id}, got {doc}")
if all(doc.metadata.get(key) == value for key, value in filter.items()):
filtered_indices.append(i)
indices = np.array([filtered_indices])
# -1 happens when not enough docs are returned.
embeddings = [self.index.reconstruct(int(i)) for i in indices[0] if i != -1]
mmr_selected = maximal_marginal_relevance(
np.array([embedding], dtype=np.float32),
embeddings,
k=k,
lambda_mult=lambda_mult,
)
selected_indices = [indices[0][i] for i in mmr_selected]
docs = []
for i in selected_indices:
if i == -1:
# This happens when not enough docs are returned.
continue
_id = self.index_to_docstore_id[i]
doc = self.docstore.search(_id)
if not isinstance(doc, Document):
raise ValueError(f"Could not find document for id {_id}, got {doc}")
docs.append(doc)
return docs
[docs] def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
filter: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch before filtering (if needed) to
pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
embedding = self.embedding_function(query)
docs = self.max_marginal_relevance_search_by_vector(
embedding,
k,
fetch_k,
lambda_mult=lambda_mult,
filter=filter,
**kwargs,
)
return docs
[docs] def merge_from(self, target: FAISS) -> None:
"""Merge another FAISS object with the current one.
Add the target FAISS to the current one.
Args:
target: FAISS object you wish to merge into the current one
Returns:
None.
"""
if not isinstance(self.docstore, AddableMixin):
raise ValueError("Cannot merge with this type of docstore")
# Numerical index for target docs are incremental on existing ones
starting_len = len(self.index_to_docstore_id)
# Merge two IndexFlatL2
self.index.merge_from(target.index)
# Get id and docs from target FAISS object
full_info = []
for i, target_id in target.index_to_docstore_id.items():
doc = target.docstore.search(target_id)
if not isinstance(doc, Document):
raise ValueError("Document should be returned")
full_info.append((starting_len + i, target_id, doc))
# Add information to docstore and index_to_docstore_id.
self.docstore.add({_id: doc for _, _id, doc in full_info})
index_to_id = {index: _id for index, _id, _ in full_info}
self.index_to_docstore_id.update(index_to_id)
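# Note (illustrative): given two stores built with the same embeddings, e.g.
# db1 = FAISS.from_texts(["doc one"], emb) and db2 = FAISS.from_texts(["doc two"], emb),
# db1.merge_from(db2) appends db2's vectors and documents to db1 and leaves db2 unchanged.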
@classmethod
def __from(
cls,
texts: List[str],
embeddings: List[List[float]],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
normalize_L2: bool = False,
**kwargs: Any,
) -> FAISS:
faiss = dependable_faiss_import()
index = faiss.IndexFlatL2(len(embeddings[0]))
vector = np.array(embeddings, dtype=np.float32)
if normalize_L2:
faiss.normalize_L2(vector)
index.add(vector)
documents = []
if ids is None:
ids = [str(uuid.uuid4()) for _ in texts]
for i, text in enumerate(texts):
metadata = metadatas[i] if metadatas else {}
documents.append(Document(page_content=text, metadata=metadata))
index_to_id = dict(enumerate(ids))
docstore = InMemoryDocstore(dict(zip(index_to_id.values(), documents)))
return cls(
embedding.embed_query,
index,
docstore,
index_to_id,
normalize_L2=normalize_L2,
**kwargs,
)
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> FAISS:
"""Construct FAISS wrapper from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Creates an in-memory docstore.
3. Initializes the FAISS database.
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
faiss = FAISS.from_texts(texts, embeddings)
"""
embeddings = embedding.embed_documents(texts)
return cls.__from(
texts,
embeddings,
embedding,
metadatas=metadatas,
ids=ids,
**kwargs,
)
[docs] @classmethod
def from_embeddings(
cls,
text_embeddings: List[Tuple[str, List[float]]],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> FAISS:
"""Construct FAISS wrapper from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Creates an in-memory docstore.
3. Initializes the FAISS database.
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
faiss = FAISS.from_embeddings(text_embedding_pairs, embeddings)
"""
texts = [t[0] for t in text_embeddings]
embeddings = [t[1] for t in text_embeddings]
return cls.__from(
texts,
embeddings,
embedding,
metadatas=metadatas,
ids=ids,
**kwargs,
)
[docs] def save_local(self, folder_path: str, index_name: str = "index") -> None:
"""Save FAISS index, docstore, and index_to_docstore_id to disk.
Args:
folder_path: folder path to save index, docstore,
and index_to_docstore_id to.
index_name: for saving with a specific index file name
"""
path = Path(folder_path)
path.mkdir(exist_ok=True, parents=True)
# save index separately since it is not picklable
faiss = dependable_faiss_import()
faiss.write_index(
self.index, str(path / "{index_name}.faiss".format(index_name=index_name))
)
# save docstore and index_to_docstore_id
with open(path / "{index_name}.pkl".format(index_name=index_name), "wb") as f:
pickle.dump((self.docstore, self.index_to_docstore_id), f)
[docs] @classmethod
def load_local(
cls, folder_path: str, embeddings: Embeddings, index_name: str = "index"
) -> FAISS:
"""Load FAISS index, docstore, and index_to_docstore_id from disk.
Args:
folder_path: folder path to load index, docstore,
and index_to_docstore_id from.
embeddings: Embeddings to use when generating queries
index_name: for saving with a specific index file name
"""
path = Path(folder_path)
# load index separately since it is not picklable
faiss = dependable_faiss_import()
index = faiss.read_index(
str(path / "{index_name}.faiss".format(index_name=index_name))
)
# load docstore and index_to_docstore_id
with open(path / "{index_name}.pkl".format(index_name=index_name), "rb") as f:
docstore, index_to_docstore_id = pickle.load(f)
return cls(embeddings.embed_query, index, docstore, index_to_docstore_id)
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
filter: Optional[Dict[str, Any]] = None,
fetch_k: int = 20,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and their similarity scores on a scale from 0 to 1."""
if self.relevance_score_fn is None:
raise ValueError(
"relevance_score_fn must be provided to the"
" FAISS constructor to normalize scores"
)
docs_and_scores = self.similarity_search_with_score(
query,
k=k,
filter=filter,
fetch_k=fetch_k,
**kwargs,
)
return [(doc, self.relevance_score_fn(score)) for doc, score in docs_and_scores] | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html |
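A minimal end-to-end sketch (illustrative): it assumes faiss-cpu is installed and an OPENAI_API_KEY is set; the folder name is a placeholder.
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
emb = OpenAIEmbeddings()
db = FAISS.from_texts(
    ["cats purr", "dogs bark"],
    emb,
    metadatas=[{"animal": "cat"}, {"animal": "dog"}],
)
hits = db.similarity_search_with_score("which animal purrs?", k=1, filter={"animal": "cat"})
db.save_local("faiss_index")
db = FAISS.load_local("faiss_index", emb)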
bd885a6c-4316-45c1-8acf-488b52baea1f | Source code for langchain.vectorstores.matching_engine
"""Vertex Matching Engine implementation of the vector store."""
from __future__ import annotations
import json
import logging
import time
import uuid
from typing import TYPE_CHECKING, Any, Iterable, List, Optional, Type
from langchain.docstore.document import Document
from langchain.embeddings import TensorflowHubEmbeddings
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
if TYPE_CHECKING:
from google.cloud import storage
from google.cloud.aiplatform import MatchingEngineIndex, MatchingEngineIndexEndpoint
from google.oauth2.service_account import Credentials
logger = logging.getLogger()
[docs]class MatchingEngine(VectorStore):
"""Vertex Matching Engine implementation of the vector store.
While the embeddings are stored in the Matching Engine, the embedded
documents will be stored in GCS.
An existing Index and corresponding Endpoint are preconditions for
using this module.
See usage in docs/modules/indexes/vectorstores/examples/matchingengine.ipynb
Note that this implementation is mostly meant for reading if you are
planning a real-time implementation: while reading is a real-time
operation, updating the index takes close to one hour."""
def __init__(
self,
project_id: str,
index: MatchingEngineIndex,
endpoint: MatchingEngineIndexEndpoint,
embedding: Embeddings,
gcs_client: storage.Client,
gcs_bucket_name: str,
credentials: Optional[Credentials] = None,
):
"""Vertex Matching Engine implementation of the vector store.
While the embeddings are stored in the Matching Engine, the embedded
documents will be stored in GCS.
An existing Index and corresponding Endpoint are preconditions for
using this module.
See usage in
docs/modules/indexes/vectorstores/examples/matchingengine.ipynb.
Note that this implementation is mostly meant for reading if you are
planning a real-time implementation: while reading is a real-time
operation, updating the index takes close to one hour.
Attributes:
project_id: The GCS project id.
index: The created index class. See
~:func:`MatchingEngine.from_components`.
endpoint: The created endpoint class. See
~:func:`MatchingEngine.from_components`.
embedding: A :class:`Embeddings` that will be used for
embedding the text sent. If none is sent, then the
multilingual Tensorflow Universal Sentence Encoder will be used.
gcs_client: The GCS client.
gcs_bucket_name: The GCS bucket name.
credentials (Optional): Created GCP credentials.
"""
super().__init__()
self._validate_google_libraries_installation()
self.project_id = project_id
self.index = index
self.endpoint = endpoint
self.embedding = embedding
self.gcs_client = gcs_client
self.credentials = credentials
self.gcs_bucket_name = gcs_bucket_name
def _validate_google_libraries_installation(self) -> None:
"""Validates that Google libraries that are needed are installed."""
try:
from google.cloud import aiplatform, storage # noqa: F401
from google.oauth2 import service_account # noqa: F401
except ImportError:
raise ImportError(
"You must run `pip install --upgrade "
"google-cloud-aiplatform google-cloud-storage`"
"to use the MatchingEngine Vectorstore."
)
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
kwargs: vectorstore specific parameters.
Returns:
List of ids from adding the texts into the vectorstore.
"""
logger.debug("Embedding documents.")
embeddings = self.embedding.embed_documents(list(texts))
jsons = []
ids = []
# Could be improved with async.
for embedding, text in zip(embeddings, texts):
id = str(uuid.uuid4())
ids.append(id)
jsons.append({"id": id, "embedding": embedding})
self._upload_to_gcs(text, f"documents/{id}")
logger.debug(f"Uploaded {len(ids)} documents to GCS.")
# Creating json lines from the embedded documents.
result_str = "\n".join([json.dumps(x) for x in jsons])
filename_prefix = f"indexes/{uuid.uuid4()}"
filename = f"{filename_prefix}/{time.time()}.json"
self._upload_to_gcs(result_str, filename)
logger.debug(
f"Uploaded updated json with embeddings to "
f"{self.gcs_bucket_name}/{filename}."
)
self.index = self.index.update_embeddings(
contents_delta_uri=f"gs://{self.gcs_bucket_name}/{filename_prefix}/"
)
logger.debug("Updated index with new configuration.")
return ids
def _upload_to_gcs(self, data: str, gcs_location: str) -> None:
"""Uploads data to gcs_location.
Args:
data: The data that will be stored.
gcs_location: The location where the data will be stored.
"""
bucket = self.gcs_client.get_bucket(self.gcs_bucket_name)
blob = bucket.blob(gcs_location)
blob.upload_from_string(data)
[docs] def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query.
Args:
query: The string that will be used to search for similar documents.
k: The amount of neighbors that will be retrieved.
Returns:
A list of k matching documents.
"""
logger.debug(f"Embedding query {query}.")
embedding_query = self.embedding.embed_documents([query])
response = self.endpoint.match(
deployed_index_id=self._get_index_id(),
queries=embedding_query,
num_neighbors=k,
)
if len(response) == 0:
return []
logger.debug(f"Found {len(response)} matches for the query {query}.")
results = []
# I'm only getting the first one because queries receives an array
# and the similarity_search method only receives one query. This
# means that the match method will always return an array with only
# one element.
for doc in response[0]:
page_content = self._download_from_gcs(f"documents/{doc.id}")
results.append(Document(page_content=page_content))
logger.debug("Downloaded documents for query.")
return results
def _get_index_id(self) -> str:
"""Gets the correct index id for the endpoint.
Returns:
The index id if found; raises ValueError otherwise.
"""
for index in self.endpoint.deployed_indexes:
if index.index == self.index.resource_name:
return index.id
raise ValueError(
f"No index with id {self.index.resource_name} "
f"deployed on endpoint "
f"{self.endpoint.display_name}."
)
def _download_from_gcs(self, gcs_location: str) -> str:
"""Downloads from GCS in text format.
Args:
gcs_location: The location where the file is located.
Returns:
The string contents of the file.
"""
bucket = self.gcs_client.get_bucket(self.gcs_bucket_name)
blob = bucket.blob(gcs_location)
return blob.download_as_string()
[docs] @classmethod
def from_texts(
cls: Type["MatchingEngine"],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> "MatchingEngine":
"""Use from components instead."""
raise NotImplementedError(
"This method is not implemented. Instead, you should initialize the class"
" with `MatchingEngine.from_components(...)` and then call "
"`add_texts`"
)
[docs] @classmethod
def from_components(
cls: Type["MatchingEngine"],
project_id: str,
region: str,
gcs_bucket_name: str,
index_id: str,
endpoint_id: str,
credentials_path: Optional[str] = None,
embedding: Optional[Embeddings] = None,
) -> "MatchingEngine":
"""Takes the object creation out of the constructor.
Args:
project_id: The GCP project id.
region: The default location making the API calls. It must have
the same location as the GCS bucket and must be regional.
gcs_bucket_name: The location where the vectors will be stored in
order for the index to be created.
index_id: The id of the created index.
endpoint_id: The id of the created endpoint.
credentials_path: (Optional) The path of the Google credentials on
the local file system.
embedding: The :class:`Embeddings` that will be used for
embedding the texts.
Returns:
A configured MatchingEngine with the texts added to the index.
"""
gcs_bucket_name = cls._validate_gcs_bucket(gcs_bucket_name)
credentials = cls._create_credentials_from_file(credentials_path)
index = cls._create_index_by_id(index_id, project_id, region, credentials)
endpoint = cls._create_endpoint_by_id(
endpoint_id, project_id, region, credentials
)
gcs_client = cls._get_gcs_client(credentials, project_id)
cls._init_aiplatform(project_id, region, gcs_bucket_name, credentials)
return cls(
project_id=project_id,
index=index,
endpoint=endpoint,
embedding=embedding or cls._get_default_embeddings(),
gcs_client=gcs_client,
credentials=credentials,
gcs_bucket_name=gcs_bucket_name,
)
@classmethod
def _validate_gcs_bucket(cls, gcs_bucket_name: str) -> str:
"""Validates the gcs_bucket_name as a bucket name.
Args:
gcs_bucket_name: The received bucket uri.
Returns:
A valid gcs_bucket_name or throws ValueError if full path is
provided.
"""
gcs_bucket_name = gcs_bucket_name.replace("gs://", "")
if "/" in gcs_bucket_name:
raise ValueError(
f"The argument gcs_bucket_name should only be "
f"the bucket name. Received {gcs_bucket_name}"
)
return gcs_bucket_name
@classmethod
def _create_credentials_from_file(
cls, json_credentials_path: Optional[str]
) -> Optional[Credentials]:
"""Creates credentials for GCP.
Args:
json_credentials_path: The path on the file system where the
credentials are stored.
Returns:
The Credentials object, or None if no path was given, in which
case the default credentials will be used.
"""
from google.oauth2 import service_account
credentials = None
if json_credentials_path is not None:
credentials = service_account.Credentials.from_service_account_file(
json_credentials_path
)
return credentials
@classmethod
def _create_index_by_id(
cls, index_id: str, project_id: str, region: str, credentials: "Credentials"
) -> MatchingEngineIndex:
"""Creates a MatchingEngineIndex object by id.
Args:
index_id: The created index id.
project_id: The project to retrieve index from.
region: Location to retrieve index from.
credentials: GCS credentials.
Returns:
A configured MatchingEngineIndex.
"""
from google.cloud import aiplatform
logger.debug(f"Creating matching engine index with id {index_id}.")
return aiplatform.MatchingEngineIndex(
index_name=index_id,
project=project_id,
location=region,
credentials=credentials,
)
@classmethod
def _create_endpoint_by_id(
cls, endpoint_id: str, project_id: str, region: str, credentials: "Credentials"
) -> MatchingEngineIndexEndpoint:
"""Creates a MatchingEngineIndexEndpoint object by id.
Args:
endpoint_id: The created endpoint id.
project_id: The project to retrieve index from.
region: Location to retrieve index from.
credentials: GCS credentials.
Returns:
A configured MatchingEngineIndexEndpoint.
"""
from google.cloud import aiplatform
logger.debug(f"Creating endpoint with id {endpoint_id}.")
return aiplatform.MatchingEngineIndexEndpoint(
index_endpoint_name=endpoint_id,
project=project_id,
location=region,
credentials=credentials,
)
@classmethod
def _get_gcs_client(
cls, credentials: "Credentials", project_id: str
) -> "storage.Client":
"""Lazily creates a GCS client.
Returns:
A configured GCS client.
"""
from google.cloud import storage
return storage.Client(credentials=credentials, project=project_id)
@classmethod
def _init_aiplatform(
cls,
project_id: str,
region: str,
gcs_bucket_name: str,
credentials: "Credentials",
) -> None:
"""Configures the aiplatform library.
Args:
project_id: The GCP project id.
region: The default location making the API calls. It must have
the same location as the GCS bucket and must be regional.
gcs_bucket_name: GCS staging location.
credentials: The GCS Credentials object.
"""
from google.cloud import aiplatform
logger.debug(
f"Initializing AI Platform for project {project_id} on "
f"{region} and for {gcs_bucket_name}."
)
aiplatform.init(
project=project_id,
location=region,
staging_bucket=gcs_bucket_name,
credentials=credentials,
)
@classmethod
def _get_default_embeddings(cls) -> TensorflowHubEmbeddings:
"""This function returns the default embedding.
Returns:
Default TensorflowHubEmbeddings to use.
"""
return TensorflowHubEmbeddings()
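As the docstring notes, MatchingEngine is normally built via from_components rather than the constructor. A minimal sketch, assuming a pre-created Matching Engine index and endpoint; the project id, region, bucket name, and ids below are placeholders:
from langchain.vectorstores import MatchingEngine
vector_store = MatchingEngine.from_components(
    project_id="my-gcp-project",             # placeholder project id
    region="us-central1",                    # must match the bucket's region
    gcs_bucket_name="my-embeddings-bucket",  # placeholder bucket name
    index_id="1234567890",                   # placeholder index id
    endpoint_id="0987654321",                # placeholder endpoint id
)
vector_store.add_texts(["a sample document"])
docs = vector_store.similarity_search("sample", k=4)
Keep in mind that each add_texts call triggers an index update, which, per the docstring above, can take close to an hour.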
Source code for langchain.vectorstores.tair
"""Wrapper around Tair Vector."""
from __future__ import annotations
import json
import logging
import uuid
from typing import Any, Iterable, List, Optional, Type
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore
logger = logging.getLogger(__name__)
def _uuid_key() -> str:
return uuid.uuid4().hex
[docs]class Tair(VectorStore):
"""Wrapper around Tair Vector store."""
def __init__(
self,
embedding_function: Embeddings,
url: str,
index_name: str,
content_key: str = "content",
metadata_key: str = "metadata",
search_params: Optional[dict] = None,
**kwargs: Any,
):
self.embedding_function = embedding_function
self.index_name = index_name
try:
from tair import Tair as TairClient
except ImportError:
raise ImportError(
"Could not import tair python package. "
"Please install it with `pip install tair`."
)
try:
# connect to tair from url
client = TairClient.from_url(url, **kwargs)
except ValueError as e:
raise ValueError(f"Tair failed to connect: {e}")
self.client = client
self.content_key = content_key
self.metadata_key = metadata_key
self.search_params = search_params
[docs] def create_index_if_not_exist(
self,
dim: int,
distance_type: str,
index_type: str,
data_type: str,
**kwargs: Any,
) -> bool:
index = self.client.tvs_get_index(self.index_name)
if index is not None:
logger.info("Index already exists")
return False
self.client.tvs_create_index(
self.index_name,
dim,
distance_type,
index_type,
data_type,
**kwargs,
)
return True
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> List[str]:
"""Add texts data to an existing index."""
ids = []
keys = kwargs.get("keys", None)
# Write data to tair
pipeline = self.client.pipeline(transaction=False)
embeddings = self.embedding_function.embed_documents(list(texts))
for i, text in enumerate(texts):
# Use provided key otherwise use default key
key = keys[i] if keys else _uuid_key()
metadata = metadatas[i] if metadatas else {}
pipeline.tvs_hset(
self.index_name,
key,
embeddings[i],
False,
**{
self.content_key: text,
self.metadata_key: json.dumps(metadata),
},
)
ids.append(key)
pipeline.execute()
return ids
[docs] def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""
Returns the most similar indexed documents to the query text.
Args:
query (str): The query text for which to find similar documents.
k (int): The number of documents to return. Default is 4.
Returns:
List[Document]: A list of documents that are most similar to the query text.
"""
# Creates embedding vector from user query
embedding = self.embedding_function.embed_query(query)
keys_and_scores = self.client.tvs_knnsearch(
self.index_name, k, embedding, False, None, **kwargs
)
pipeline = self.client.pipeline(transaction=False)
for key, _ in keys_and_scores:
pipeline.tvs_hmget(
self.index_name, key, self.metadata_key, self.content_key
)
docs = pipeline.execute()
return [
Document(
page_content=d[1],
metadata=json.loads(d[0]),
)
for d in docs
]
[docs] @classmethod
def from_texts(
cls: Type[Tair],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
index_name: str = "langchain",
content_key: str = "content",
metadata_key: str = "metadata",
**kwargs: Any,
) -> Tair:
try:
from tair import tairvector
except ImportError:
raise ValueError(
"Could not import tair python package. "
"Please install it with `pip install tair`."
)
url = get_from_dict_or_env(kwargs, "tair_url", "TAIR_URL")
if "tair_url" in kwargs:
kwargs.pop("tair_url")
distance_type = tairvector.DistanceMetric.InnerProduct
if "distance_type" in kwargs:
distance_type = kwargs.pop("distance_typ")
index_type = tairvector.IndexType.HNSW
if "index_type" in kwargs:
index_type = kwargs.pop("index_type")
data_type = tairvector.DataType.Float32
if "data_type" in kwargs:
data_type = kwargs.pop("data_type")
index_params = {}
if "index_params" in kwargs:
index_params = kwargs.pop("index_params")
search_params = {}
if "search_params" in kwargs:
search_params = kwargs.pop("search_params")
keys = None
if "keys" in kwargs:
keys = kwargs.pop("keys")
try:
tair_vector_store = cls(
embedding,
url,
index_name,
content_key=content_key,
metadata_key=metadata_key,
search_params=search_params,
**kwargs,
)
except ValueError as e:
raise ValueError(f"tair failed to connect: {e}")
# Create embeddings for documents
embeddings = embedding.embed_documents(texts)
tair_vector_store.create_index_if_not_exist(
len(embeddings[0]),
distance_type,
index_type,
data_type,
**index_params,
)
tair_vector_store.add_texts(texts, metadatas, keys=keys)
return tair_vector_store
[docs] @classmethod
def from_documents(
cls,
documents: List[Document],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
index_name: str = "langchain",
content_key: str = "content",
metadata_key: str = "metadata",
**kwargs: Any,
) -> Tair:
texts = [d.page_content for d in documents]
metadatas = [d.metadata for d in documents]
return cls.from_texts(
texts, embedding, metadatas, index_name, content_key, metadata_key, **kwargs
)
[docs] @staticmethod
def drop_index(
index_name: str = "langchain",
**kwargs: Any,
) -> bool:
"""
Drop an existing index.
Args:
index_name (str): Name of the index to drop.
Returns:
bool: True if the index is dropped successfully.
"""
try:
from tair import Tair as TairClient
except ImportError:
raise ValueError(
"Could not import tair python package. "
"Please install it with `pip install tair`."
)
url = get_from_dict_or_env(kwargs, "tair_url", "TAIR_URL")
try:
if "tair_url" in kwargs:
kwargs.pop("tair_url")
client = TairClient.from_url(url=url, **kwargs)
except ValueError as e:
raise ValueError(f"Tair connection error: {e}")
# delete index
ret = client.tvs_del_index(index_name)
if ret == 0:
# index not exist
logger.info("Index does not exist")
return False
return True
[docs] @classmethod
def from_existing_index(
cls,
embedding: Embeddings,
index_name: str = "langchain",
content_key: str = "content",
metadata_key: str = "metadata",
**kwargs: Any,
) -> Tair:
"""Connect to an existing Tair index."""
url = get_from_dict_or_env(kwargs, "tair_url", "TAIR_URL")
search_params = {}
if "search_params" in kwargs:
search_params = kwargs.pop("search_params")
return cls(
embedding,
url,
index_name,
content_key=content_key,
metadata_key=metadata_key,
search_params=search_params,
**kwargs,
)
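A minimal sketch of the from_texts path above, assuming a reachable Tair instance; the connection URL is a placeholder and may also be supplied through the TAIR_URL environment variable:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Tair
vector_store = Tair.from_texts(
    ["doc one", "doc two"],
    OpenAIEmbeddings(),
    tair_url="redis://user:password@localhost:6379",  # placeholder URL
    index_name="langchain",
)
docs = vector_store.similarity_search("doc", k=2)
from_texts creates the index (if missing) from the embedding dimension before inserting, so no separate create_index_if_not_exist call is needed.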
Source code for langchain.vectorstores.atlas
"""Wrapper around Atlas by Nomic."""
from __future__ import annotations
import logging
import uuid
from typing import Any, Iterable, List, Optional, Type
import numpy as np
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
logger = logging.getLogger(__name__)
[docs]class AtlasDB(VectorStore):
"""Wrapper around Atlas: Nomic's neural database and rhizomatic instrument.
To use, you should have the ``nomic`` python package installed.
Example:
.. code-block:: python
from langchain.vectorstores import AtlasDB
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = AtlasDB("my_project", embeddings.embed_query)
"""
_ATLAS_DEFAULT_ID_FIELD = "atlas_id"
def __init__(
self,
name: str,
embedding_function: Optional[Embeddings] = None,
api_key: Optional[str] = None,
description: str = "A description for your project",
is_public: bool = True,
reset_project_if_exists: bool = False,
) -> None:
"""
Initialize the Atlas Client
Args:
name (str): The name of your project. If the project already exists,
it will be loaded.
embedding_function (Optional[Callable]): An optional function used for
embedding your data. If None, data will be embedded with
Nomic's embed model.
api_key (str): Your nomic API key
description (str): A description for your project.
is_public (bool): Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool): Whether to reset this project if it
already exists. Default False.
Generally useful during development and testing.
"""
try:
import nomic
from nomic import AtlasProject
except ImportError:
raise ValueError(
"Could not import nomic python package. "
"Please install it with `pip install nomic`."
)
if api_key is None:
raise ValueError("No API key provided. Sign up at atlas.nomic.ai!")
nomic.login(api_key)
self._embedding_function = embedding_function
modality = "text"
if self._embedding_function is not None:
modality = "embedding"
# Check if the project exists, create it if not
self.project = AtlasProject(
name=name,
description=description,
modality=modality,
is_public=is_public,
reset_project_if_exists=reset_project_if_exists,
unique_id_field=AtlasDB._ATLAS_DEFAULT_ID_FIELD,
)
self.project._latest_project_state()
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
refresh: bool = True,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts (Iterable[str]): Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional): Optional list of metadatas.
ids (Optional[List[str]]): An optional list of ids.
refresh(bool): Whether or not to refresh indices with the updated data.
Default True.
Returns:
List[str]: List of IDs of the added texts.
"""
if (
metadatas is not None
and len(metadatas) > 0
and "text" in metadatas[0].keys()
):
raise ValueError("Cannot accept key text in metadata!")
texts = list(texts)
if ids is None:
ids = [str(uuid.uuid1()) for _ in texts]
# Embedding upload case
if self._embedding_function is not None:
_embeddings = self._embedding_function.embed_documents(texts)
embeddings = np.stack(_embeddings)
if metadatas is None:
data = [
{AtlasDB._ATLAS_DEFAULT_ID_FIELD: ids[i], "text": texts[i]}
for i, _ in enumerate(texts)
]
else:
for i in range(len(metadatas)):
metadatas[i][AtlasDB._ATLAS_DEFAULT_ID_FIELD] = ids[i]
metadatas[i]["text"] = texts[i]
data = metadatas
self.project._validate_map_data_inputs(
[], id_field=AtlasDB._ATLAS_DEFAULT_ID_FIELD, data=data
)
with self.project.wait_for_project_lock():
self.project.add_embeddings(embeddings=embeddings, data=data)
# Text upload case
else:
if metadatas is None:
data = [
{"text": text, AtlasDB._ATLAS_DEFAULT_ID_FIELD: ids[i]}
for i, text in enumerate(texts)
]
else:
for i, text in enumerate(texts):
metadatas[i]["text"] = texts
metadatas[i][AtlasDB._ATLAS_DEFAULT_ID_FIELD] = ids[i]
data = metadatas
self.project._validate_map_data_inputs(
[], id_field=AtlasDB._ATLAS_DEFAULT_ID_FIELD, data=data
)
with self.project.wait_for_project_lock():
self.project.add_text(data)
if refresh:
if len(self.project.indices) > 0:
with self.project.wait_for_project_lock():
self.project.rebuild_maps()
return ids
[docs] def create_index(self, **kwargs: Any) -> Any:
"""Creates an index in your project.
See
https://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index
for full detail.
"""
with self.project.wait_for_project_lock():
return self.project.create_index(**kwargs)
[docs] def similarity_search(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Document]:
"""Run similarity search with AtlasDB
Args:
query (str): Query text to search for.
k (int): Number of results to return. Defaults to 4.
Returns:
List[Document]: List of documents most similar to the query text.
"""
if self._embedding_function is None:
raise NotImplementedError(
"AtlasDB requires an embedding_function for text similarity search!"
)
_embedding = self._embedding_function.embed_documents([query])[0]
embedding = np.array(_embedding).reshape(1, -1)
with self.project.wait_for_project_lock():
neighbors, _ = self.project.projections[0].vector_search(
queries=embedding, k=k
)
datas = self.project.get_data(ids=neighbors[0])
docs = [
Document(page_content=datas[i]["text"], metadata=datas[i])
for i, neighbor in enumerate(neighbors)
]
return docs
[docs] @classmethod
def from_texts(
cls: Type[AtlasDB],
texts: List[str],
embedding: Optional[Embeddings] = None,
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
name: Optional[str] = None,
api_key: Optional[str] = None,
description: str = "A description for your project",
is_public: bool = True,
reset_project_if_exists: bool = False,
index_kwargs: Optional[dict] = None,
**kwargs: Any,
) -> AtlasDB:
"""Create an AtlasDB vectorstore from a raw documents.
Args:
texts (List[str]): The list of texts to ingest.
name (str): Name of the project to create.
api_key (str): Your nomic API key,
embedding (Optional[Embeddings]): Embedding function. Defaults to None.
metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.
ids (Optional[List[str]]): Optional list of document IDs. If None,
ids will be auto created
description (str): A description for your project.
is_public (bool): Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool): Whether to reset this project if it
already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]): Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
Returns:
AtlasDB: Nomic's neural database and finest rhizomatic instrument
"""
if name is None or api_key is None:
raise ValueError("`name` and `api_key` cannot be None.")
# Inject relevant kwargs
all_index_kwargs = {"name": name + "_index", "indexed_field": "text"}
if index_kwargs is not None:
for k, v in index_kwargs.items():
all_index_kwargs[k] = v
# Build project
atlasDB = cls(
name,
embedding_function=embedding,
api_key=api_key,
description="A description for your project",
is_public=is_public,
reset_project_if_exists=reset_project_if_exists,
)
with atlasDB.project.wait_for_project_lock():
atlasDB.add_texts(texts=texts, metadatas=metadatas, ids=ids)
atlasDB.create_index(**all_index_kwargs)
return atlasDB
[docs] @classmethod
def from_documents(
cls: Type[AtlasDB],
documents: List[Document],
embedding: Optional[Embeddings] = None,
ids: Optional[List[str]] = None,
name: Optional[str] = None,
api_key: Optional[str] = None,
persist_directory: Optional[str] = None,
description: str = "A description for your project",
is_public: bool = True,
reset_project_if_exists: bool = False,
index_kwargs: Optional[dict] = None,
**kwargs: Any,
) -> AtlasDB:
"""Create an AtlasDB vectorstore from a list of documents.
Args:
name (str): Name of the collection to create.
api_key (str): Your nomic API key,
documents (List[Document]): List of documents to add to the vectorstore.
embedding (Optional[Embeddings]): Embedding function. Defaults to None.
ids (Optional[List[str]]): Optional list of document IDs. If None,
ids will be auto created
description (str): A description for your project.
is_public (bool): Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool): Whether to reset this project if
it already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]): Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
Returns:
AtlasDB: Nomic's neural database and finest rhizomatic instrument
"""
if name is None or api_key is None:
raise ValueError("`name` and `api_key` cannot be None.")
texts = [doc.page_content for doc in documents]
metadatas = [doc.metadata for doc in documents]
return cls.from_texts(
name=name,
api_key=api_key,
texts=texts,
embedding=embedding,
metadatas=metadatas,
ids=ids,
description=description,
is_public=is_public,
reset_project_if_exists=reset_project_if_exists,
index_kwargs=index_kwargs,
)
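A minimal sketch of building an AtlasDB store from raw texts, assuming a valid Nomic account; the project name and API key below are placeholders:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import AtlasDB
db = AtlasDB.from_texts(
    texts=["first document", "second document"],
    embedding=OpenAIEmbeddings(),
    name="my_project",             # placeholder project name
    api_key="YOUR_NOMIC_API_KEY",  # placeholder key
)
docs = db.similarity_search("first", k=1)
from_texts also creates an index (see index_kwargs), so no separate create_index call is needed before searching.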
Source code for langchain.vectorstores.singlestoredb
"""Wrapper around SingleStore DB."""
from __future__ import annotations
import enum
import json
from typing import (
Any,
ClassVar,
Collection,
Iterable,
List,
Optional,
Tuple,
Type,
)
from sqlalchemy.pool import QueuePool
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore, VectorStoreRetriever
class DistanceStrategy(str, enum.Enum):
"""Enumerator of the Distance strategies for SingleStoreDB."""
EUCLIDEAN_DISTANCE = "EUCLIDEAN_DISTANCE"
DOT_PRODUCT = "DOT_PRODUCT"
DEFAULT_DISTANCE_STRATEGY = DistanceStrategy.DOT_PRODUCT
ORDERING_DIRECTIVE: dict = {
DistanceStrategy.EUCLIDEAN_DISTANCE: "",
DistanceStrategy.DOT_PRODUCT: "DESC",
}
[docs]class SingleStoreDB(VectorStore):
"""
This class serves as a Pythonic interface to the SingleStore DB database.
The prerequisite for using this class is the installation of the ``singlestoredb``
Python package.
The SingleStoreDB vectorstore can be created by providing an embedding function and
the relevant parameters for the database connection, connection pool, and
optionally, the names of the table and the fields to use.
"""
def _get_connection(self: SingleStoreDB) -> Any:
try:
import singlestoredb as s2
except ImportError:
raise ImportError(
"Could not import singlestoredb python package. "
"Please install it with `pip install singlestoredb`."
)
return s2.connect(**self.connection_kwargs)
def __init__(
self,
embedding: Embeddings,
*,
distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,
table_name: str = "embeddings",
content_field: str = "content",
metadata_field: str = "metadata",
vector_field: str = "vector",
pool_size: int = 5,
max_overflow: int = 10,
timeout: float = 30,
**kwargs: Any,
):
"""Initialize with necessary components.
Args:
embedding (Embeddings): A text embedding model.
distance_strategy (DistanceStrategy, optional):
Determines the strategy employed for calculating
the distance between vectors in the embedding space.
Defaults to DOT_PRODUCT.
Available options are:
- DOT_PRODUCT: Computes the scalar product of two vectors.
This is the default behavior
- EUCLIDEAN_DISTANCE: Computes the Euclidean distance between
two vectors. This metric considers the geometric distance in
the vector space, and might be more suitable for embeddings
that rely on spatial relationships.
table_name (str, optional): Specifies the name of the table in use.
Defaults to "embeddings".
content_field (str, optional): Specifies the field to store the content.
Defaults to "content".
metadata_field (str, optional): Specifies the field to store metadata.
Defaults to "metadata".
vector_field (str, optional): Specifies the field to store the vector.
Defaults to "vector".
Following arguments pertain to the connection pool:
pool_size (int, optional): Determines the number of active connections in
the pool. Defaults to 5.
max_overflow (int, optional): Determines the maximum number of connections
allowed beyond the pool_size. Defaults to 10.
timeout (float, optional): Specifies the maximum wait time in seconds for
establishing a connection. Defaults to 30.
Following arguments pertain to the database connection:
host (str, optional): Specifies the hostname, IP address, or URL for the
database connection. The default scheme is "mysql".
user (str, optional): Database username.
password (str, optional): Database password.
port (int, optional): Database port. Defaults to 3306 for non-HTTP
connections, 80 for HTTP connections, and 443 for HTTPS connections.
database (str, optional): Database name.
Additional optional arguments provide further customization over the
database connection:
pure_python (bool, optional): Toggles the connector mode. If True,
operates in pure Python mode.
local_infile (bool, optional): Allows local file uploads.
charset (str, optional): Specifies the character set for string values.
ssl_key (str, optional): Specifies the path of the file containing the SSL
key.
ssl_cert (str, optional): Specifies the path of the file containing the SSL
certificate.
ssl_ca (str, optional): Specifies the path of the file containing the SSL
certificate authority.
ssl_cipher (str, optional): Sets the SSL cipher list.
ssl_disabled (bool, optional): Disables SSL usage.
ssl_verify_cert (bool, optional): Verifies the server's certificate.
Automatically enabled if ``ssl_ca`` is specified.
ssl_verify_identity (bool, optional): Verifies the server's identity.
conv (dict[int, Callable], optional): A dictionary of data conversion
functions.
credential_type (str, optional): Specifies the type of authentication to
use: auth.PASSWORD, auth.JWT, or auth.BROWSER_SSO.
autocommit (bool, optional): Enables autocommits.
results_type (str, optional): Determines the structure of the query results:
tuples, namedtuples, dicts.
results_format (str, optional): Deprecated. This option has been renamed to
results_type.
Examples:
Basic Usage:
.. code-block:: python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB
vectorstore = SingleStoreDB(
OpenAIEmbeddings(),
host="https://user:password@127.0.0.1:3306/database"
)
Advanced Usage:
.. code-block:: python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB
vectorstore = SingleStoreDB(
OpenAIEmbeddings(),
distance_strategy=DistanceStrategy.EUCLIDEAN_DISTANCE,
host="127.0.0.1",
port=3306,
user="user",
password="password",
database="db",
table_name="my_custom_table",
pool_size=10,
timeout=60,
)
Using environment variables:
.. code-block:: python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB
os.environ['SINGLESTOREDB_URL'] = 'me:p455w0rd@s2-host.com/my_db'
vectorstore = SingleStoreDB(OpenAIEmbeddings())
"""
self.embedding = embedding
self.distance_strategy = distance_strategy
self.table_name = table_name
self.content_field = content_field
self.metadata_field = metadata_field
self.vector_field = vector_field
"""Pass the rest of the kwargs to the connection."""
self.connection_kwargs = kwargs
"""Add program name and version to connection attributes."""
if "conn_attrs" not in self.connection_kwargs:
self.connection_kwargs["conn_attrs"] = dict()
if "program_name" not in self.connection_kwargs["conn_attrs"]:
self.connection_kwargs["conn_attrs"][
"program_name"
] = "langchain python sdk"
self.connection_kwargs["conn_attrs"][
"program_version"
] = "0.0.205" # the version of SingleStoreDB VectorStore implementation
"""Create connection pool."""
self.connection_pool = QueuePool(
self._get_connection,
max_overflow=max_overflow,
pool_size=pool_size,
timeout=timeout,
)
self._create_table()
def _create_table(self: SingleStoreDB) -> None:
"""Create table if it doesn't exist."""
conn = self.connection_pool.connect()
try:
cur = conn.cursor()
try:
cur.execute(
"""CREATE TABLE IF NOT EXISTS {}
({} TEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,
{} BLOB, {} JSON);""".format(
self.table_name,
self.content_field,
self.vector_field,
self.metadata_field,
),
)
finally:
cur.close()
finally:
conn.close()
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
embeddings: Optional[List[List[float]]] = None,
**kwargs: Any,
) -> List[str]:
"""Add more texts to the vectorstore.
Args:
texts (Iterable[str]): Iterable of strings/text to add to the vectorstore.
metadatas (Optional[List[dict]], optional): Optional list of metadatas.
Defaults to None.
embeddings (Optional[List[List[float]]], optional): Optional pre-generated
embeddings. Defaults to None.
Returns:
List[str]: empty list
"""
conn = self.connection_pool.connect()
try:
cur = conn.cursor()
try:
# Write data to singlestore db
for i, text in enumerate(texts):
# Use provided values by default or fallback
metadata = metadatas[i] if metadatas else {}
embedding = (
embeddings[i]
if embeddings
else self.embedding.embed_documents([text])[0]
)
cur.execute(
"INSERT INTO {} VALUES (%s, JSON_ARRAY_PACK(%s), %s)".format(
self.table_name
),
(
text,
"[{}]".format(",".join(map(str, embedding))),
json.dumps(metadata),
),
)
finally:
cur.close()
finally:
conn.close()
return []
[docs] def similarity_search(
self, query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any
) -> List[Document]:
"""Returns the most similar indexed documents to the query text.
Uses cosine similarity.
Args:
query (str): The query text for which to find similar documents.
k (int): The number of documents to return. Default is 4.
filter (dict): A dictionary of metadata fields and values to filter by.
Returns:
List[Document]: A list of documents that are most similar to the query text.
Examples:
.. code-block:: python
from langchain.vectorstores import SingleStoreDB
from langchain.embeddings import OpenAIEmbeddings
s2 = SingleStoreDB.from_documents(
docs,
OpenAIEmbeddings(),
host="username:password@localhost:3306/database"
)
s2.similarity_search("query text", 1,
{"metadata_field": "metadata_value"})
"""
docs_and_scores = self.similarity_search_with_score(
query=query, k=k, filter=filter
)
return [doc for doc, _ in docs_and_scores]
[docs] def similarity_search_with_score(
self, query: str, k: int = 4, filter: Optional[dict] = None
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query. Uses cosine similarity.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter: A dictionary of metadata fields and values to filter by.
Defaults to None.
Returns:
List of Documents most similar to the query and score for each
"""
# Creates embedding vector from user query
embedding = self.embedding.embed_query(query)
conn = self.connection_pool.connect()
result = []
where_clause: str = ""
where_clause_values: List[Any] = []
if filter:
where_clause = "WHERE "
arguments = []
def build_where_clause(
where_clause_values: List[Any],
sub_filter: dict,
prefix_args: List[str] = [],
) -> None:
for key in sub_filter.keys():
if isinstance(sub_filter[key], dict):
build_where_clause(
where_clause_values, sub_filter[key], prefix_args + [key]
)
else:
arguments.append(
"JSON_EXTRACT_JSON({}, {}) = %s".format(
self.metadata_field,
", ".join(["%s"] * (len(prefix_args) + 1)),
)
)
where_clause_values += prefix_args + [key]
where_clause_values.append(json.dumps(sub_filter[key]))
build_where_clause(where_clause_values, filter)
where_clause += " AND ".join(arguments)
try:
cur = conn.cursor()
try:
cur.execute(
"""SELECT {}, {}, {}({}, JSON_ARRAY_PACK(%s)) as __score
FROM {} {} ORDER BY __score {} LIMIT %s""".format(
self.content_field,
self.metadata_field,
self.distance_strategy,
self.vector_field,
self.table_name,
where_clause,
ORDERING_DIRECTIVE[self.distance_strategy],
),
("[{}]".format(",".join(map(str, embedding))),)
+ tuple(where_clause_values)
+ (k,),
)
for row in cur.fetchall():
doc = Document(page_content=row[0], metadata=row[1])
result.append((doc, float(row[2])))
finally:
cur.close()
finally:
conn.close()
return result
[docs] @classmethod
def from_texts(
cls: Type[SingleStoreDB],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,
table_name: str = "embeddings",
content_field: str = "content",
metadata_field: str = "metadata",
vector_field: str = "vector",
pool_size: int = 5,
max_overflow: int = 10,
timeout: float = 30,
**kwargs: Any,
) -> SingleStoreDB:
"""Create a SingleStoreDB vectorstore from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Creates a new table for the embeddings in SingleStoreDB.
3. Adds the documents to the newly created table.
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain.vectorstores import SingleStoreDB
from langchain.embeddings import OpenAIEmbeddings
s2 = SingleStoreDB.from_texts(
texts,
OpenAIEmbeddings(),
host="username:password@localhost:3306/database"
)
"""
instance = cls(
embedding,
distance_strategy=distance_strategy,
table_name=table_name,
content_field=content_field,
metadata_field=metadata_field,
vector_field=vector_field,
pool_size=pool_size,
max_overflow=max_overflow,
timeout=timeout,
**kwargs,
)
instance.add_texts(texts, metadatas, embedding.embed_documents(texts), **kwargs)
return instance
[docs] def as_retriever(self, **kwargs: Any) -> SingleStoreDBRetriever:
return SingleStoreDBRetriever(vectorstore=self, **kwargs)
class SingleStoreDBRetriever(VectorStoreRetriever):
"""Retriever for SingleStoreDB vector stores."""
vectorstore: SingleStoreDB
k: int = 4
allowed_search_types: ClassVar[Collection[str]] = ("similarity",)
def get_relevant_documents(self, query: str) -> List[Document]:
if self.search_type == "similarity":
docs = self.vectorstore.similarity_search(query, k=self.k)
else:
raise ValueError(f"search_type of {self.search_type} not allowed.")
return docs
async def aget_relevant_documents(self, query: str) -> List[Document]:
raise NotImplementedError(
"SingleStoreDBVectorStoreRetriever does not support async"
)
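The metadata filter support above is worth a concrete illustration. A minimal sketch, assuming a reachable SingleStoreDB instance; the connection URL is a placeholder and is read from the SINGLESTOREDB_URL environment variable:
import os
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB
os.environ["SINGLESTOREDB_URL"] = "user:password@localhost:3306/db"  # placeholder URL
vector_store = SingleStoreDB.from_texts(
    ["alpha", "beta"],
    OpenAIEmbeddings(),
    metadatas=[{"topic": "letters"}, {"topic": "letters"}],
)
# Restrict the search with a metadata filter and inspect the raw scores.
docs_and_scores = vector_store.similarity_search_with_score(
    "alpha", k=1, filter={"topic": "letters"}
)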
Source code for langchain.vectorstores.weaviate
"""Wrapper around weaviate vector database."""
from __future__ import annotations
import datetime
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type
from uuid import uuid4
import numpy as np
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
def _default_schema(index_name: str) -> Dict:
return {
"class": index_name,
"properties": [
{
"name": "text",
"dataType": ["text"],
}
],
}
def _create_weaviate_client(**kwargs: Any) -> Any:
client = kwargs.get("client")
if client is not None:
return client
weaviate_url = get_from_dict_or_env(kwargs, "weaviate_url", "WEAVIATE_URL")
try:
# the weaviate api key param should not be mandatory
weaviate_api_key = get_from_dict_or_env(
kwargs, "weaviate_api_key", "WEAVIATE_API_KEY", None
)
except ValueError:
weaviate_api_key = None
try:
import weaviate
except ImportError:
raise ValueError(
"Could not import weaviate python package. "
"Please install it with `pip install weaviate-client`"
)
auth = (
weaviate.auth.AuthApiKey(api_key=weaviate_api_key)
if weaviate_api_key is not None
else None
)
client = weaviate.Client(weaviate_url, auth_client_secret=auth)
return client
def _default_score_normalizer(val: float) -> float:
return 1 - 1 / (1 + np.exp(val))
def _json_serializable(value: Any) -> Any:
if isinstance(value, datetime.datetime):
return value.isoformat()
return value
[docs]class Weaviate(VectorStore):
"""Wrapper around Weaviate vector database.
To use, you should have the ``weaviate-client`` python package installed.
Example:
.. code-block:: python
import weaviate
from langchain.vectorstores import Weaviate
client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
weaviate = Weaviate(client, index_name, text_key)
"""
def __init__(
self,
client: Any,
index_name: str,
text_key: str,
embedding: Optional[Embeddings] = None,
attributes: Optional[List[str]] = None,
relevance_score_fn: Optional[
Callable[[float], float]
] = _default_score_normalizer,
by_text: bool = True,
):
"""Initialize with Weaviate client."""
try:
import weaviate
except ImportError:
raise ValueError(
"Could not import weaviate python package. "
"Please install it with `pip install weaviate-client`."
)
if not isinstance(client, weaviate.Client):
raise ValueError(
f"client should be an instance of weaviate.Client, got {type(client)}"
)
self._client = client
self._index_name = index_name
self._embedding = embedding
self._text_key = text_key
self._query_attrs = [self._text_key]
self._relevance_score_fn = relevance_score_fn
self._by_text = by_text
if attributes is not None:
self._query_attrs.extend(attributes)
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> List[str]:
"""Upload texts with metadata (properties) to Weaviate."""
from weaviate.util import get_valid_uuid
ids = []
with self._client.batch as batch:
for i, text in enumerate(texts):
data_properties = {self._text_key: text}
if metadatas is not None:
for key, val in metadatas[i].items():
data_properties[key] = _json_serializable(val)
# Allow for ids (consistent w/ other methods)
# # Or uuids (backwards compatible w/ existing arg)
# If the UUID of one of the objects already exists
# then the existing object will be replaced by the new object.
_id = get_valid_uuid(uuid4())
if "uuids" in kwargs:
_id = kwargs["uuids"][i]
elif "ids" in kwargs:
_id = kwargs["ids"][i]
if self._embedding is not None:
vector = self._embedding.embed_documents([text])[0]
else:
vector = None
batch.add_data_object(
data_object=data_properties,
class_name=self._index_name,
uuid=_id,
vector=vector,
)
ids.append(_id)
return ids
[docs] def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query.
"""
if self._by_text:
return self.similarity_search_by_text(query, k, **kwargs)
else:
if self._embedding is None:
raise ValueError(
"_embedding cannot be None for similarity_search when "
"_by_text=False"
)
embedding = self._embedding.embed_query(query)
return self.similarity_search_by_vector(embedding, k, **kwargs)
[docs] def similarity_search_by_text(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query.
"""
content: Dict[str, Any] = {"concepts": [query]}
if kwargs.get("search_distance"):
content["certainty"] = kwargs.get("search_distance")
query_obj = self._client.query.get(self._index_name, self._query_attrs)
if kwargs.get("where_filter"):
query_obj = query_obj.with_where(kwargs.get("where_filter"))
if kwargs.get("additional"):
query_obj = query_obj.with_additional(kwargs.get("additional"))
result = query_obj.with_near_text(content).with_limit(k).do()
if "errors" in result:
raise ValueError(f"Error during query: {result['errors']}")
docs = []
for res in result["data"]["Get"][self._index_name]:
text = res.pop(self._text_key)
docs.append(Document(page_content=text, metadata=res))
return docs
[docs] def similarity_search_by_vector(
self, embedding: List[float], k: int = 4, **kwargs: Any
) -> List[Document]:
"""Look up similar documents by embedding vector in Weaviate."""
vector = {"vector": embedding}
query_obj = self._client.query.get(self._index_name, self._query_attrs)
if kwargs.get("where_filter"):
query_obj = query_obj.with_where(kwargs.get("where_filter"))
if kwargs.get("additional"):
query_obj = query_obj.with_additional(kwargs.get("additional"))
result = query_obj.with_near_vector(vector).with_limit(k).do()
if "errors" in result:
raise ValueError(f"Error during query: {result['errors']}")
docs = []
for res in result["data"]["Get"][self._index_name]:
text = res.pop(self._text_key)
docs.append(Document(page_content=text, metadata=res))
return docs
[docs] def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
if self._embedding is not None:
embedding = self._embedding.embed_query(query)
else:
raise ValueError(
"max_marginal_relevance_search requires a suitable Embeddings object"
)
return self.max_marginal_relevance_search_by_vector(
embedding, k=k, fetch_k=fetch_k, lambda_mult=lambda_mult, **kwargs
)
[docs] def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
vector = {"vector": embedding}
query_obj = self._client.query.get(self._index_name, self._query_attrs)
if kwargs.get("where_filter"):
query_obj = query_obj.with_where(kwargs.get("where_filter"))
results = (
query_obj.with_additional("vector")
.with_near_vector(vector)
.with_limit(fetch_k)
.do()
)
payload = results["data"]["Get"][self._index_name]
embeddings = [result["_additional"]["vector"] for result in payload]
mmr_selected = maximal_marginal_relevance(
np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult
)
docs = []
for idx in mmr_selected:
text = payload[idx].pop(self._text_key)
payload[idx].pop("_additional")
meta = payload[idx]
docs.append(Document(page_content=text, metadata=meta))
return docs
[docs] def similarity_search_with_score(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Tuple[Document, float]]:
"""
Return list of documents most similar to the query
text and cosine distance in float for each.
Lower score represents more similarity.
"""
if self._embedding is None:
raise ValueError(
"_embedding cannot be None for similarity_search_with_score"
)
content: Dict[str, Any] = {"concepts": [query]}
if kwargs.get("search_distance"):
content["certainty"] = kwargs.get("search_distance")
query_obj = self._client.query.get(self._index_name, self._query_attrs)
if not self._by_text:
embedding = self._embedding.embed_query(query)
vector = {"vector": embedding}
result = (
query_obj.with_near_vector(vector)
.with_limit(k)
.with_additional("vector")
.do()
)
else:
result = (
query_obj.with_near_text(content)
.with_limit(k)
.with_additional("vector")
.do()
)
if "errors" in result:
raise ValueError(f"Error during query: {result['errors']}")
docs_and_scores = []
for res in result["data"]["Get"][self._index_name]:
text = res.pop(self._text_key)
score = np.dot(
res["_additional"]["vector"], self._embedding.embed_query(query)
)
docs_and_scores.append((Document(page_content=text, metadata=res), score))
return docs_and_scores
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
"""
if self._relevance_score_fn is None:
raise ValueError(
"relevance_score_fn must be provided to"
" Weaviate constructor to normalize scores"
)
docs_and_scores = self.similarity_search_with_score(query, k=k, **kwargs)
return [
(doc, self._relevance_score_fn(score)) for doc, score in docs_and_scores
]
[docs] @classmethod
def from_texts(
cls: Type[Weaviate],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> Weaviate:
"""Construct Weaviate wrapper from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Creates a new index for the embeddings in the Weaviate instance.
3. Adds the documents to the newly created Weaviate index.
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain.vectorstores.weaviate import Weaviate
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
weaviate = Weaviate.from_texts(
texts,
embeddings,
weaviate_url="http://localhost:8080"
)
"""
client = _create_weaviate_client(**kwargs)
from weaviate.util import get_valid_uuid
index_name = kwargs.get("index_name", f"LangChain_{uuid4().hex}")
embeddings = embedding.embed_documents(texts) if embedding else None
text_key = "text"
schema = _default_schema(index_name)
attributes = list(metadatas[0].keys()) if metadatas else None
# check whether the index already exists
if not client.schema.contains(schema):
client.schema.create_class(schema)
with client.batch as batch:
for i, text in enumerate(texts):
data_properties = {
text_key: text,
}
if metadatas is not None:
for key in metadatas[i].keys():
data_properties[key] = metadatas[i][key]
# If the UUID of one of the objects already exists
# then the existing object will be replaced by the new object.
if "uuids" in kwargs:
_id = kwargs["uuids"][i]
else:
_id = get_valid_uuid(uuid4())
# if an embedding strategy is not provided, we let
# weaviate create the embedding. Note that this will only
# work if weaviate has been installed with a vectorizer module
# like text2vec-contextionary for example
params = {
"uuid": _id,
"data_object": data_properties,
"class_name": index_name,
}
if embeddings is not None:
params["vector"] = embeddings[i]
batch.add_data_object(**params)
batch.flush()
relevance_score_fn = kwargs.get("relevance_score_fn")
by_text: bool = kwargs.get("by_text", False)
return cls(
client,
index_name,
text_key,
embedding=embedding,
attributes=attributes,
relevance_score_fn=relevance_score_fn,
by_text=by_text,
)
[docs] def delete(self, ids: List[str]) -> None:
"""Delete by vector IDs.
Args:
ids: List of ids to delete.
"""
# TODO: Check if this can be done in bulk
for id in ids:
self._client.data_object.delete(uuid=id)
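The MMR search methods above require an Embeddings object on the wrapper, which the docstring examples do not show. A minimal sketch, assuming a local Weaviate instance; the URL and class name are placeholders:
import weaviate
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Weaviate
client = weaviate.Client(url="http://localhost:8080")  # placeholder URL
db = Weaviate(client, "LangChain", "text", embedding=OpenAIEmbeddings())
db.add_texts(["doc a", "doc b", "doc c"])
# Maximal marginal relevance balances similarity to the query with diversity.
docs = db.max_marginal_relevance_search("doc", k=2, fetch_k=3)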
Source code for langchain.vectorstores.myscale
"""Wrapper around MyScale vector database."""
from __future__ import annotations
import json
import logging
from hashlib import sha1
from threading import Thread
from typing import Any, Dict, Iterable, List, Optional, Tuple
from pydantic import BaseSettings
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
logger = logging.getLogger()
def has_mul_sub_str(s: str, *args: Any) -> bool:
"""
Check if a string contains multiple substrings.
Args:
s: string to check.
*args: substrings to check.
Returns:
True if all substrings are in the string, False otherwise.
"""
for a in args:
if a not in s:
return False
return True
[docs]class MyScaleSettings(BaseSettings):
"""MyScale Client Configuration
Attribute:
myscale_host (str) : A URL to connect to the MyScale backend.
Defaults to 'localhost'.
myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.
username (str) : Username to login. Defaults to None.
password (str) : Password to login. Defaults to None.
index_type (str): index type string.
index_param (dict): index build parameter.
database (str) : Database name to find the table. Defaults to 'default'.
table (str) : Table name to operate on.
Defaults to 'vector_table'.
metric (str) : Metric to compute distance,
supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.
column_map (Dict) : Column type map to project column name onto langchain
semantics. Must have keys: `text`, `id`, `vector`,
and must be the same size as the number of columns. For example:
.. code-block:: python
{
'id': 'text_id',
'vector': 'text_embedding',
'text': 'text_plain',
'metadata': 'metadata_dictionary_in_json',
}
Defaults to identity map.
"""
host: str = "localhost"
port: int = 8443
username: Optional[str] = None
password: Optional[str] = None
index_type: str = "IVFFLAT"
index_param: Optional[Dict[str, str]] = None
column_map: Dict[str, str] = {
"id": "id",
"text": "text",
"vector": "vector",
"metadata": "metadata",
}
database: str = "default"
table: str = "langchain"
metric: str = "cosine"
def __getitem__(self, item: str) -> Any:
return getattr(self, item)
class Config:
env_file = ".env"
env_prefix = "myscale_"
env_file_encoding = "utf-8"
[docs]class MyScale(VectorStore):
"""Wrapper around MyScale vector database
You need a `clickhouse-connect` python package, and a valid account
to connect to MyScale.
MyScale can not only search with simple vector indexes,
it also supports complex query with multiple conditions,
constraints and even sub-queries.
For more information, please visit
[myscale official site](https://docs.myscale.com/en/overview/)
"""
def __init__(
self,
embedding: Embeddings,
config: Optional[MyScaleSettings] = None,
**kwargs: Any,
) -> None:
"""MyScale Wrapper to LangChain
embedding_function (Embeddings):
config (MyScaleSettings): Configuration to MyScale Client
Other keyword arguments will pass into
[clickhouse-connect](https://docs.myscale.com/)
"""
try:
from clickhouse_connect import get_client
except ImportError:
raise ValueError(
"Could not import clickhouse connect python package. "
"Please install it with `pip install clickhouse-connect`."
)
try:
from tqdm import tqdm
self.pgbar = tqdm
except ImportError:
# Just in case if tqdm is not installed
self.pgbar = lambda x: x
super().__init__()
if config is not None:
self.config = config
else:
self.config = MyScaleSettings()
assert self.config
assert self.config.host and self.config.port
assert (
self.config.column_map
and self.config.database
and self.config.table
and self.config.metric
)
for k in ["id", "vector", "text", "metadata"]:
assert k in self.config.column_map
assert self.config.metric in ["ip", "cosine", "l2"]
# initialize the schema
dim = len(embedding.embed_query("try this out"))
index_params = (
", " + ",".join([f"'{k}={v}'" for k, v in self.config.index_param.items()])
if self.config.index_param
else ""
)
schema_ = f"""
CREATE TABLE IF NOT EXISTS {self.config.database}.{self.config.table}(
{self.config.column_map['id']} String,
{self.config.column_map['text']} String,
{self.config.column_map['vector']} Array(Float32),
{self.config.column_map['metadata']} JSON,
CONSTRAINT cons_vec_len CHECK length(\
{self.config.column_map['vector']}) = {dim},
VECTOR INDEX vidx {self.config.column_map['vector']} \
TYPE {self.config.index_type}(\
'metric_type={self.config.metric}'{index_params})
) ENGINE = MergeTree ORDER BY {self.config.column_map['id']}
"""
self.dim = dim
self.BS = "\\"
self.must_escape = ("\\", "'")
self.embedding_function = embedding.embed_query
self.dist_order = "ASC" if self.config.metric in ["cosine", "l2"] else "DESC"
# Create a connection to myscale
self.client = get_client(
host=self.config.host,
port=self.config.port,
username=self.config.username,
password=self.config.password,
**kwargs,
)
self.client.command("SET allow_experimental_object_type=1")
self.client.command(schema_)
[docs] def escape_str(self, value: str) -> str:
return "".join(f"{self.BS}{c}" if c in self.must_escape else c for c in value)
def _build_istr(self, transac: Iterable, column_names: Iterable[str]) -> str:
ks = ",".join(column_names)
_data = []
for n in transac:
n = ",".join([f"'{self.escape_str(str(_n))}'" for _n in n])
_data.append(f"({n})")
i_str = f"""
INSERT INTO TABLE
{self.config.database}.{self.config.table}({ks})
VALUES
{','.join(_data)}
"""
return i_str
def _insert(self, transac: Iterable, column_names: Iterable[str]) -> None:
_i_str = self._build_istr(transac, column_names)
self.client.command(_i_str)
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
batch_size: int = 32,
ids: Optional[Iterable[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
ids: Optional list of ids to associate with the texts.
batch_size: Batch size of insertion
metadatas: Optional list of metadata dicts to be inserted
Returns:
List of ids from adding the texts into the vectorstore.
"""
# Embed and create the documents
ids = ids or [sha1(t.encode("utf-8")).hexdigest() for t in texts]
colmap_ = self.config.column_map
transac = []
column_names = {
colmap_["id"]: ids,
colmap_["text"]: texts,
colmap_["vector"]: map(self.embedding_function, texts),
}
metadatas = metadatas or [{} for _ in texts]
column_names[colmap_["metadata"]] = map(json.dumps, metadatas)
assert len(set(colmap_) - set(column_names)) >= 0
keys, values = zip(*column_names.items())
try:
t = None
for v in self.pgbar(
zip(*values), desc="Inserting data...", total=len(metadatas)
):
assert len(v[keys.index(self.config.column_map["vector"])]) == self.dim
transac.append(v)
if len(transac) == batch_size:
if t:
t.join()
t = Thread(target=self._insert, args=[transac, keys])
t.start()
transac = []
if len(transac) > 0:
if t:
t.join()
self._insert(transac, keys)
return [i for i in ids]
except Exception as e:
logger.error(f"\033[91m\033[1m{type(e)}\033[0m \033[95m{str(e)}\033[0m")
return []
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[Dict[Any, Any]]] = None,
config: Optional[MyScaleSettings] = None,
text_ids: Optional[Iterable[str]] = None,
batch_size: int = 32,
**kwargs: Any,
) -> MyScale:
"""Create Myscale wrapper with existing texts
Args:
embedding_function (Embeddings): Function to extract text embedding
texts (Iterable[str]): List or tuple of strings to be added
config (MyScaleSettings, Optional): Myscale configuration
text_ids (Optional[Iterable], optional): IDs for the texts.
Defaults to None.
batch_size (int, optional): Batchsize when transmitting data to MyScale.
Defaults to 32.
metadata (List[dict], optional): metadata to texts. Defaults to None.
Other keyword arguments will pass into
[clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)
Returns:
MyScale Index
"""
ctx = cls(embedding, config, **kwargs)
ctx.add_texts(texts, ids=text_ids, batch_size=batch_size, metadatas=metadatas)
return ctx
def __repr__(self) -> str:
"""Text representation for myscale, prints backends, username and schemas.
Easy to use with `str(Myscale())`
Returns:
repr: string to show connection info and data schema
"""
_repr = f"\033[92m\033[1m{self.config.database}.{self.config.table} @ "
_repr += f"{self.config.host}:{self.config.port}\033[0m\n\n"
_repr += f"\033[1musername: {self.config.username}\033[0m\n\nTable Schema:\n"
_repr += "-" * 51 + "\n"
for r in self.client.query(
f"DESC {self.config.database}.{self.config.table}"
).named_results():
_repr += (
f"|\033[94m{r['name']:24s}\033[0m|\033[96m{r['type']:24s}\033[0m|\n"
)
_repr += "-" * 51 + "\n"
return _repr
def _build_qstr(
self, q_emb: List[float], topk: int, where_str: Optional[str] = None
) -> str:
q_emb_str = ",".join(map(str, q_emb))
if where_str:
where_str = f"PREWHERE {where_str}"
else:
where_str = ""
q_str = f"""
SELECT {self.config.column_map['text']},
{self.config.column_map['metadata']}, dist
FROM {self.config.database}.{self.config.table}
{where_str}
ORDER BY distance({self.config.column_map['vector']}, [{q_emb_str}])
AS dist {self.dist_order}
LIMIT {topk}
"""
return q_str
[docs] def similarity_search(
self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any
) -> List[Document]:
"""Perform a similarity search with MyScale
Args:
query (str): query string
k (int, optional): Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional): where condition string.
Defaults to None.
NOTE: Do not let end-users fill this directly; always be aware
of SQL injection. When dealing with metadata, remember to
use `{self.metadata_column}.attribute` instead of `attribute`
alone. The default name for it is `metadata`.
Returns:
List[Document]: List of Documents
"""
return self.similarity_search_by_vector(
self.embedding_function(query), k, where_str, **kwargs
)
[docs] def similarity_search_by_vector(
self,
embedding: List[float],
k: int = 4,
where_str: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Perform a similarity search with MyScale by vectors
Args:
embedding (List[float]): query embedding vector
k (int, optional): Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional): where condition string.
Defaults to None.
NOTE: Do not let end-users fill this directly; always be aware
of SQL injection. When dealing with metadata, remember to
use `{self.metadata_column}.attribute` instead of `attribute`
alone. The default name for it is `metadata`.
Returns:
List[Document]: List of Documents
"""
q_str = self._build_qstr(embedding, k, where_str)
try:
return [
Document(
page_content=r[self.config.column_map["text"]],
metadata=r[self.config.column_map["metadata"]],
)
for r in self.client.query(q_str).named_results()
]
except Exception as e:
logger.error(f"\033[91m\033[1m{type(e)}\033[0m \033[95m{str(e)}\033[0m")
return []
[docs] def similarity_search_with_relevance_scores(
self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any
) -> List[Tuple[Document, float]]:
"""Perform a similarity search with MyScale
Args:
query (str): query string
k (int, optional): Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional): where condition string.
Defaults to None.
NOTE: Do not let end-users fill this directly; always be aware
of SQL injection. When dealing with metadata, remember to
use `{self.metadata_column}.attribute` instead of `attribute`
alone. The default name for it is `metadata`.
Returns:
List[Tuple[Document, float]]: List of documents most similar to the query
text, each paired with its cosine distance as a float.
Lower score represents more similarity.
"""
q_str = self._build_qstr(self.embedding_function(query), k, where_str)
try:
return [
(
Document(
page_content=r[self.config.column_map["text"]],
metadata=r[self.config.column_map["metadata"]],
),
r["dist"],
)
for r in self.client.query(q_str).named_results()
]
except Exception as e:
logger.error(f"\033[91m\033[1m{type(e)}\033[0m \033[95m{str(e)}\033[0m")
return []
[docs] def drop(self) -> None:
"""
Helper function: Drop data
"""
self.client.command(
f"DROP TABLE IF EXISTS {self.config.database}.{self.config.table}"
)
@property
def metadata_column(self) -> str:
return self.config.column_map["metadata"] | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html |
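A minimal usage sketch for the wrapper above, assuming a MyScale instance reachable
at the configured host/port and an ``OPENAI_API_KEY`` in the environment; the texts,
metadata and filter value are illustrative only.
.. code-block:: python
    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import MyScale
    from langchain.vectorstores.myscale import MyScaleSettings
    config = MyScaleSettings(host="localhost", port=8443)  # adjust to your deployment
    docsearch = MyScale.from_texts(
        texts=["MyScale keeps vectors next to SQL-filterable metadata"],
        embedding=OpenAIEmbeddings(),
        metadatas=[{"source": "docs"}],
        config=config,
    )
    # `where_str` is raw SQL; never build it from untrusted end-user input.
    docs = docsearch.similarity_search(
        "where do the vectors live?",
        k=1,
        where_str=f"{docsearch.metadata_column}.source = 'docs'",
    )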
e792d280-4f29-4858-a8c7-aaa8d27403f3 | Source code for langchain.vectorstores.deeplake
"""Wrapper around Activeloop Deep Lake."""
from __future__ import annotations
import logging
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Union
import numpy as np
try:
import deeplake
from deeplake.core.fast_forwarding import version_compare
from deeplake.core.vectorstore import DeepLakeVectorStore
_DEEPLAKE_INSTALLED = True
except ImportError:
_DEEPLAKE_INSTALLED = False
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
logger = logging.getLogger(__name__)
[docs]class DeepLake(VectorStore):
"""Wrapper around Deep Lake, a data lake for deep learning applications.
We integrated deeplake's similarity search and filtering for fast prototyping.
Now it supports Tensor Query Language (TQL) for production use cases
over billions of rows.
Why Deep Lake?
- Not only stores embeddings, but also the original data with version control.
- Serverless, doesn't require another service and can be used with major
cloud providers (S3, GCS, etc.)
- More than just a multi-modal vector store. You can use the dataset
to fine-tune your own LLM models.
To use, you should have the ``deeplake`` python package installed.
Example:
.. code-block:: python
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = DeepLake("langchain_store", embeddings.embed_query)
"""
_LANGCHAIN_DEFAULT_DEEPLAKE_PATH = "./deeplake/"
def __init__(
self,
dataset_path: str = _LANGCHAIN_DEFAULT_DEEPLAKE_PATH,
token: Optional[str] = None,
embedding_function: Optional[Embeddings] = None,
read_only: bool = False,
ingestion_batch_size: int = 1000,
num_workers: int = 0,
verbose: bool = True,
exec_option: str = "python",
**kwargs: Any,
) -> None:
"""Creates an empty DeepLakeVectorStore or loads an existing one.
The DeepLakeVectorStore is located at the specified ``dataset_path``.
Examples:
>>> # Create a vector store with default tensors
>>> deeplake_vectorstore = DeepLake(
... path = <path_for_storing_Data>,
... )
>>>
>>> # Create a vector store in the Deep Lake Managed Tensor Database
>>> data = DeepLake(
... path = "hub://org_id/dataset_name",
... exec_option = "tensor_db",
... )
Args:
dataset_path (str): Path to existing dataset or where to create
a new one. Defaults to _LANGCHAIN_DEFAULT_DEEPLAKE_PATH.
token (str, optional): Activeloop token, for fetching credentials
to the dataset at path if it is a Deep Lake dataset.
Tokens are normally autogenerated. Optional.
embedding_function (str, optional): Function to convert
either documents or query. Optional.
read_only (bool): Open dataset in read-only mode. Default is False.
ingestion_batch_size (int): During data ingestion, data is divided
into batches. Batch size is the size of each batch.
Default is 1000.
num_workers (int): Number of workers to use during data ingestion.
Default is 0.
verbose (bool): Print dataset summary after each operation.
Default is True.
exec_option (str): DeepLakeVectorStore supports 3 ways to perform
searching - "python", "compute_engine", "tensor_db".
Default is "python".
- ``python`` - Pure-python implementation that runs on the client.
WARNING: using this with big datasets can lead to memory
issues. Data can be stored anywhere.
- ``compute_engine`` - C++ implementation of the Deep Lake Compute
Engine that runs on the client. Can be used for any data stored in
or connected to Deep Lake. Not for in-memory or local datasets.
- ``tensor_db`` - Hosted Managed Tensor Database that is
responsible for storage and query execution. Only for data stored in
the Deep Lake Managed Database. Use runtime = {"db_engine": True} during
dataset creation.
**kwargs: Other optional keyword arguments.
Raises:
ValueError: If some condition is not met.
"""
self.ingestion_batch_size = ingestion_batch_size
self.num_workers = num_workers
self.verbose = verbose
if _DEEPLAKE_INSTALLED is False:
raise ValueError(
"Could not import deeplake python package. "
"Please install it with `pip install deeplake`."
)
if version_compare(deeplake.__version__, "3.6.2") == -1:
raise ValueError(
"deeplake version should be >= 3.6.2, but you've installed"
f" {deeplake.__version__}. Consider upgrading with"
" `pip install --upgrade deeplake`."
)
self.dataset_path = dataset_path
self.vectorstore = DeepLakeVectorStore(
path=self.dataset_path,
embedding_function=embedding_function,
read_only=read_only,
token=token,
exec_option=exec_option,
verbose=verbose,
**kwargs,
)
self._embedding_function = embedding_function
self._id_tensor_name = "ids" if "ids" in self.vectorstore.tensors() else "id"
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Examples:
>>> ids = deeplake_vectorstore.add_texts(
... texts = <list_of_texts>,
... metadatas = <list_of_metadata_jsons>,
... ids = <list_of_ids>,
... )
Args:
texts (Iterable[str]): Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional): Optional list of metadatas.
ids (Optional[List[str]], optional): Optional list of IDs.
**kwargs: other optional keyword arguments.
Returns:
List[str]: List of IDs of the added texts.
"""
# Preserve a caller-supplied embedding_function before resetting kwargs,
# otherwise it would be silently ignored below.
embedding_function = kwargs.get("embedding_function")
kwargs = {}
if ids:
if self._id_tensor_name == "ids": # for backwards compatibility
kwargs["ids"] = ids
else:
kwargs["id"] = ids
if metadatas is None:
metadatas = [{}] * len(list(texts))
return self.vectorstore.add(
text=texts,
metadata=metadatas,
embedding_data=texts,
embedding_tensor="embedding",
embedding_function=embedding_function
or self._embedding_function.embed_documents,  # type: ignore
return_ids=True,
**kwargs,
)
def _search_tql(
self,
tql_query: Optional[str],
exec_option: Optional[str] = None,
return_score: bool = False,
) -> Union[List[Document], List[Tuple[Document, float]]]:
"""Function for performing tql_search.
Args:
tql_query (str): TQL Query string for direct evaluation.
Available only for `compute_engine` and `tensor_db`.
exec_option (str, optional): Supports 3 ways to search.
Could be "python", "compute_engine" or "tensor_db". Default is "python".
- ``python`` - Pure-python implementation for the client.
WARNING: not recommended for big datasets due to potential memory
issues.
- ``compute_engine`` - C++ implementation of Deep Lake Compute
Engine for the client. Not for in-memory or local datasets.
- ``tensor_db`` - Hosted Managed Tensor Database for storage
and query execution. Only for data in Deep Lake Managed Database.
Use runtime = {"db_engine": True} during dataset creation.
return_score (bool): Return score with document. Default is False.
Returns:
List[Document] - A list of documents
Raises:
ValueError: If return_score is True but some condition is not met.
"""
result = self.vectorstore.search(
query=tql_query,
exec_option=exec_option,
)
metadatas = result["metadata"]
texts = result["text"]
docs = [
Document(
page_content=text,
metadata=metadata,
)
for text, metadata in zip(texts, metadatas)
]
if return_score:
raise ValueError("scores can't be returned with tql search")
return docs
def _search(
self,
query: Optional[str] = None,
embedding: Optional[Union[List[float], np.ndarray]] = None,
embedding_function: Optional[Callable] = None,
k: int = 4,
distance_metric: str = "L2",
use_maximal_marginal_relevance: bool = False,
fetch_k: Optional[int] = 20,
filter: Optional[Union[Dict, Callable]] = None,
return_score: bool = False,
exec_option: Optional[str] = None,
**kwargs: Any,
) -> Union[List[Document], List[Tuple[Document, float]]]:
"""
Return docs similar to query.
Args:
query (str, optional): Text to look up similar docs.
embedding (Union[List[float], np.ndarray], optional): Query's embedding.
embedding_function (Callable, optional): Function to convert `query`
into embedding.
k (int): Number of Documents to return.
distance_metric (str): `L2` for Euclidean, `L1` for Nuclear, `max`
for L-infinity distance, `cos` for cosine similarity, 'dot' for dot
product.
filter (Union[Dict, Callable], optional): Additional filter prior
to the embedding search.
- ``Dict`` - Key-value search on tensors of htype json, on an
AND basis (a sample must satisfy all key-value filters to be True)
Dict = {"tensor_name_1": {"key": value},
"tensor_name_2": {"key": value}}
- ``Function`` - Any function compatible with `deeplake.filter`.
use_maximal_marginal_relevance (bool): Use maximal marginal relevance.
fetch_k (int): Number of Documents for MMR algorithm.
return_score (bool): Return the score.
exec_option (str, optional): Supports 3 ways to perform searching.
Could be "python", "compute_engine" or "tensor_db".
- ``python`` - Pure-python implementation for the client.
WARNING: not recommended for big datasets.
- ``compute_engine`` - C++ implementation of Deep Lake Compute
Engine for the client. Not for in-memory or local datasets.
- ``tensor_db`` - Hosted Managed Tensor Database for storage
and query execution. Only for data in Deep Lake Managed Database.
Use runtime = {"db_engine": True} during dataset creation.
**kwargs: Additional keyword arguments.
Returns:
List of Documents by the specified distance metric,
if return_score True, return a tuple of (Document, score)
Raises:
ValueError: if both `embedding` and `embedding_function` are not specified.
"""
if kwargs.get("tql_query"):
return self._search_tql(
tql_query=kwargs["tql_query"],
exec_option=exec_option,
return_score=return_score,
)
if embedding_function:
if isinstance(embedding_function, Embeddings):
_embedding_function = embedding_function.embed_query
else:
_embedding_function = embedding_function
elif self._embedding_function:
_embedding_function = self._embedding_function.embed_query
else:
_embedding_function = None
if embedding is None:
if _embedding_function is None:
raise ValueError(
"Either `embedding` or `embedding_function` needs to be"
" specified."
)
embedding = _embedding_function(query) if query else None
if isinstance(embedding, list):
embedding = np.array(embedding, dtype=np.float32)
if len(embedding.shape) > 1:
embedding = embedding[0]
result = self.vectorstore.search(
embedding=embedding,
k=fetch_k if use_maximal_marginal_relevance else k,
distance_metric=distance_metric,
filter=filter,
exec_option=exec_option,
return_tensors=["embedding", "metadata", "text"],
)
scores = result["score"]
embeddings = result["embedding"]
metadatas = result["metadata"]
texts = result["text"]
if use_maximal_marginal_relevance:
lambda_mult = kwargs.get("lambda_mult", 0.5)
indices = maximal_marginal_relevance( # type: ignore
embedding, # type: ignore
embeddings,
k=min(k, len(texts)),
lambda_mult=lambda_mult,
)
scores = [scores[i] for i in indices]
texts = [texts[i] for i in indices]
metadatas = [metadatas[i] for i in indices]
docs = [
Document(
page_content=text,
metadata=metadata,
)
for text, metadata in zip(texts, metadatas)
]
if return_score:
return [(doc, score) for doc, score in zip(docs, scores)]
return docs
[docs] def similarity_search(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Document]:
"""
Return docs most similar to query.
Examples:
>>> # Search using an embedding
>>> data = vector_store.similarity_search(
... query=<your_query>,
... k=<num_items>,
... exec_option=<preferred_exec_option>,
... )
>>> # Run a TQL query directly by passing it as a keyword argument:
>>> data = vector_store.similarity_search(
...     query=None,
...     tql_query="SELECT * WHERE id == <id>",
...     exec_option="compute_engine",
... )
Args:
k (int): Number of Documents to return. Defaults to 4.
query (str): Text to look up similar documents.
**kwargs: Additional keyword arguments include:
embedding (Callable): Embedding function to use. Defaults to None.
distance_metric (str): 'L2' for Euclidean, 'L1' for Nuclear, 'max'
for L-infinity, 'cos' for cosine, 'dot' for dot product.
Defaults to 'L2'.
filter (Union[Dict, Callable], optional): Additional filter
before embedding search.
- Dict: Key-value search on tensors of htype json,
(sample must satisfy all key-value filters)
Dict = {"tensor_1": {"key": value}, "tensor_2": {"key": value}}
- Function: Compatible with `deeplake.filter`.
Defaults to None.
exec_option (str): Supports 3 ways to perform searching.
'python', 'compute_engine', or 'tensor_db'. Defaults to 'python'.
- 'python': Pure-python implementation for the client.
WARNING: not recommended for big datasets.
- 'compute_engine': C++ implementation of the Compute Engine for
the client. Not for in-memory or local datasets.
- 'tensor_db': Managed Tensor Database for storage and query.
Only for data in Deep Lake Managed Database.
Use `runtime = {"db_engine": True}` during dataset creation.
Returns:
List[Document]: List of Documents most similar to the query vector.
"""
return self._search(
query=query,
k=k,
use_maximal_marginal_relevance=False,
return_score=False,
**kwargs,
)
[docs] def similarity_search_by_vector(
self,
embedding: Union[List[float], np.ndarray],
k: int = 4,
**kwargs: Any,
) -> List[Document]:
"""
Return docs most similar to embedding vector.
Examples:
>>> # Search using an embedding
>>> data = vector_store.similarity_search_by_vector(
... embedding=<your_embedding>,
... k=<num_items_to_return>,
... exec_option=<preferred_exec_option>,
... )
Args:
embedding (Union[List[float], np.ndarray]):
Embedding to find similar docs.
k (int): Number of Documents to return. Defaults to 4.
**kwargs: Additional keyword arguments including:
filter (Union[Dict, Callable], optional):
Additional filter before embedding search.
- ``Dict`` - Key-value search on tensors of htype json. True
if all key-value filters are satisfied.
Dict = {"tensor_name_1": {"key": value},
"tensor_name_2": {"key": value}}
- ``Function`` - Any function compatible with
`deeplake.filter`.
Defaults to None.
exec_option (str): Options for search execution include
"python", "compute_engine", or "tensor_db". Defaults to
"python".
- "python" - Pure-python implementation running on the client.
Can be used for data stored anywhere. WARNING: using this
option with big datasets is discouraged due to potential
memory issues.
- "compute_engine" - Performant C++ implementation of the Deep
Lake Compute Engine. Runs on the client and can be used for
any data stored in or connected to Deep Lake. It cannot be
used with in-memory or local datasets.
- "tensor_db" - Performant, fully-hosted Managed Tensor Database.
Responsible for storage and query execution. Only available
for data stored in the Deep Lake Managed Database.
To store datasets in this database, specify
`runtime = {"db_engine": True}` during dataset creation.
distance_metric (str): `L2` for Euclidean, `L1` for Nuclear,
`max` for L-infinity distance, `cos` for cosine similarity,
'dot' for dot product. Defaults to `L2`.
Returns:
List[Document]: List of Documents most similar to the query vector.
"""
return self._search(
embedding=embedding,
k=k,
use_maximal_marginal_relevance=False,
return_score=False,
**kwargs,
)
[docs] def similarity_search_with_score(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""
Run similarity search with Deep Lake with distance returned.
Examples:
>>> data = vector_store.similarity_search_with_score(
... query=<your_query>,
... embedding=<your_embedding_function>
... k=<number_of_items_to_return>,
... exec_option=<preferred_exec_option>,
... )
Args:
query (str): Query text to search for.
k (int): Number of results to return. Defaults to 4.
**kwargs: Additional keyword arguments. Some of these arguments are:
distance_metric: `L2` for Euclidean, `L1` for Nuclear, `max` L-infinity
distance, `cos` for cosine similarity, 'dot' for dot product.
Defaults to `L2`.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
embedding_function (Callable): Embedding function to use. Defaults
to None.
exec_option (str): DeepLakeVectorStore supports 3 ways to perform
searching. It could be either "python", "compute_engine" or
"tensor_db". Defaults to "python".
- "python" - Pure-python implementation running on the client.
Can be used for data stored anywhere. WARNING: using this
option with big datasets is discouraged due to potential
memory issues.
- "compute_engine" - Performant C++ implementation of the Deep
Lake Compute Engine. Runs on the client and can be used for
any data stored in or connected to Deep Lake. It cannot be used
with in-memory or local datasets.
- "tensor_db" - Performant, fully-hosted Managed Tensor Database.
Responsible for storage and query execution. Only available for
data stored in the Deep Lake Managed Database. To store datasets
in this database, specify `runtime = {"db_engine": True}`
during dataset creation.
Returns:
List[Tuple[Document, float]]: List of documents most similar to the query
text with distance in float."""
return self._search(
query=query,
k=k,
return_score=True,
**kwargs,
)
[docs] def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
exec_option: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""
Return docs selected using the maximal marginal relevance. Maximal marginal
relevance optimizes for similarity to query AND diversity among selected docs.
Examples:
>>> data = vector_store.max_marginal_relevance_search_by_vector(
... embedding=<your_embedding>,
... fetch_k=<elements_to_fetch_before_mmr_search>,
... k=<number_of_items_to_return>,
... exec_option=<preferred_exec_option>,
... )
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch for MMR algorithm.
lambda_mult: Number between 0 and 1 determining the degree of diversity.
0 corresponds to max diversity and 1 to min diversity. Defaults to 0.5.
exec_option (str): DeepLakeVectorStore supports 3 ways for searching.
Could be "python", "compute_engine" or "tensor_db". Defaults to
"python".
- "python" - Pure-python implementation running on the client.
Can be used for data stored anywhere. WARNING: using this
option with big datasets is discouraged due to potential
memory issues.
- "compute_engine" - Performant C++ implementation of the Deep
Lake Compute Engine. Runs on the client and can be used for
any data stored in or connected to Deep Lake. It cannot be used
with in-memory or local datasets.
- "tensor_db" - Performant, fully-hosted Managed Tensor Database.
Responsible for storage and query execution. Only available for
data stored in the Deep Lake Managed Database. To store datasets
in this database, specify `runtime = {"db_engine": True}`
during dataset creation.
**kwargs: Additional keyword arguments.
Returns:
List[Documents] - A list of documents.
"""
return self._search(
embedding=embedding,
k=k,
fetch_k=fetch_k,
use_maximal_marginal_relevance=True,
lambda_mult=lambda_mult,
exec_option=exec_option,
**kwargs,
)
[docs] def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
exec_option: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Examples:
>>> # Search using an embedding
>>> data = vector_store.max_marginal_relevance_search(
... query = <query_to_search>,
... embedding_function = <embedding_function_for_query>,
... k = <number_of_items_to_return>,
... exec_option = <preferred_exec_option>,
... )
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents for MMR algorithm.
lambda_mult: Value between 0 and 1. 0 corresponds
to maximum diversity and 1 to minimum.
Defaults to 0.5.
exec_option (str): Supports 3 ways to perform searching.
- "python" - Pure-python implementation running on the client.
Can be used for data stored anywhere. WARNING: using this
option with big datasets is discouraged due to potential
memory issues.
- "compute_engine" - Performant C++ implementation of the Deep
Lake Compute Engine. Runs on the client and can be used for
any data stored in or connected to Deep Lake. It cannot be
used with in-memory or local datasets.
- "tensor_db" - Performant, fully-hosted Managed Tensor Database.
Responsible for storage and query execution. Only available
for data stored in the Deep Lake Managed Database. To store
datasets in this database, specify
`runtime = {"db_engine": True}` during dataset creation.
**kwargs: Additional keyword arguments
Returns:
List of Documents selected by maximal marginal relevance.
Raises:
ValueError: when MMR search is enabled but no embedding function is
specified.
"""
embedding_function = kwargs.get("embedding") or self._embedding_function
if embedding_function is None:
raise ValueError(
"For MMR search, you must specify an embedding function on"
" `creation` or during add call."
)
return self._search(
query=query,
k=k,
fetch_k=fetch_k,
use_maximal_marginal_relevance=True,
lambda_mult=lambda_mult,
exec_option=exec_option,
embedding_function=embedding_function, # type: ignore
**kwargs,
)
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Optional[Embeddings] = None,
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
dataset_path: str = _LANGCHAIN_DEFAULT_DEEPLAKE_PATH,
**kwargs: Any,
) -> DeepLake:
"""Create a Deep Lake dataset from a raw documents.
If a dataset_path is specified, the dataset will be persisted in that location,
otherwise by default at `./deeplake`
Examples:
>>> # Search using an embedding
>>> vector_store = DeepLake.from_texts(
... texts = <the_texts_that_you_want_to_embed>,
... embedding_function = <embedding_function_for_query>,
... k = <number_of_items_to_return>,
... exec_option = <preferred_exec_option>,
... )
Args:
dataset_path (str): - The full path to the dataset. Can be:
- Deep Lake cloud path of the form ``hub://username/dataset_name``.
To write to Deep Lake cloud datasets,
ensure that you are logged in to Deep Lake
(use 'activeloop login' from command line)
- AWS S3 path of the form ``s3://bucketname/path/to/dataset``.
Credentials are required in either the environment
- Google Cloud Storage path of the form
``gcs://bucketname/path/to/dataset`` Credentials are required
in either the environment
- Local file system path of the form ``./path/to/dataset`` or
``~/path/to/dataset`` or ``path/to/dataset``.
- In-memory path of the form ``mem://path/to/dataset`` which doesn't
save the dataset, but keeps it in memory instead.
Should be used only for testing as it does not persist.
texts (List[Document]): List of documents to add.
embedding (Optional[Embeddings]): Embedding function. Defaults to None.
Note, in other places, it is called embedding_function.
metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.
ids (Optional[List[str]]): List of document IDs. Defaults to None.
**kwargs: Additional keyword arguments.
Returns:
DeepLake: Deep Lake dataset.
Raises:
ValueError: If 'embedding' is provided in kwargs. This is deprecated,
please use `embedding_function` instead.
"""
if kwargs.get("embedding"):
raise ValueError(
"using embedding as embedidng_functions is deprecated. "
"Please use `embedding_function` instead."
)
deeplake_dataset = cls(
dataset_path=dataset_path, embedding_function=embedding, **kwargs
)
deeplake_dataset.add_texts(
texts=texts,
metadatas=metadatas,
ids=ids,
embedding_function=embedding.embed_documents, # type: ignore
)
return deeplake_dataset
[docs] def delete(
self,
ids: Optional[List[str]] = None,
filter: Optional[Dict[str, str]] = None,
delete_all: Optional[bool] = None,
) -> bool:
"""Delete the entities in the dataset.
Args:
ids (Optional[List[str]], optional): The document_ids to delete.
Defaults to None.
filter (Optional[Dict[str, str]], optional): The filter to delete by.
Defaults to None.
delete_all (Optional[bool], optional): Whether to drop the dataset.
Defaults to None.
Returns:
bool: Whether the delete operation was successful.
"""
self.vectorstore.delete(
ids=ids,
filter=filter,
delete_all=delete_all,
)
return True
[docs] @classmethod
def force_delete_by_path(cls, path: str) -> None:
"""Force delete dataset by path.
Args:
path (str): path of the dataset to delete.
Raises:
ValueError: if deeplake is not installed.
"""
try:
import deeplake
except ImportError:
raise ValueError(
"Could not import deeplake python package. "
"Please install it with `pip install deeplake`."
)
deeplake.delete(path, large_ok=True, force=True)
[docs] def delete_dataset(self) -> None:
"""Delete the collection."""
self.delete(delete_all=True) | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html |
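A brief usage sketch for the class above, assuming ``deeplake`` is installed and an
``OPENAI_API_KEY`` is available; the dataset path and texts are placeholders.
.. code-block:: python
    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import DeepLake
    db = DeepLake.from_texts(
        texts=["Deep Lake stores the raw text next to its embedding"],
        embedding=OpenAIEmbeddings(),
        dataset_path="./my_deeplake/",  # local path; hub:// or s3:// also work
    )
    docs = db.similarity_search("where is the raw text stored?", k=1)
    docs_and_scores = db.similarity_search_with_score("raw text", k=1)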
ac87b9c4-9202-4c49-a189-f5178d6556db | Source code for langchain.vectorstores.annoy
"""Wrapper around Annoy vector database."""
from __future__ import annotations
import os
import pickle
import uuid
from configparser import ConfigParser
from pathlib import Path
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple
import numpy as np
from langchain.docstore.base import Docstore
from langchain.docstore.document import Document
from langchain.docstore.in_memory import InMemoryDocstore
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
INDEX_METRICS = frozenset(["angular", "euclidean", "manhattan", "hamming", "dot"])
DEFAULT_METRIC = "angular"
def dependable_annoy_import() -> Any:
"""Import annoy if available, otherwise raise error."""
try:
import annoy
except ImportError:
raise ValueError(
"Could not import annoy python package. "
"Please install it with `pip install --user annoy` "
)
return annoy
[docs]class Annoy(VectorStore):
"""Wrapper around Annoy vector database.
To use, you should have the ``annoy`` python package installed.
Example:
.. code-block:: python
from langchain import Annoy
db = Annoy(embedding_function, index, docstore, index_to_docstore_id)
"""
def __init__(
self,
embedding_function: Callable,
index: Any,
metric: str,
docstore: Docstore,
index_to_docstore_id: Dict[int, str],
):
"""Initialize with necessary components."""
self.embedding_function = embedding_function
self.index = index
self.metric = metric
self.docstore = docstore
self.index_to_docstore_id = index_to_docstore_id
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> List[str]:
raise NotImplementedError(
"Annoy does not allow to add new data once the index is build."
)
[docs] def process_index_results(
self, idxs: List[int], dists: List[float]
) -> List[Tuple[Document, float]]:
"""Turns annoy results into a list of documents and scores.
Args:
idxs: List of indices of the documents in the index.
dists: List of distances of the documents in the index.
Returns:
List of Documents and scores.
"""
docs = []
for idx, dist in zip(idxs, dists):
_id = self.index_to_docstore_id[idx]
doc = self.docstore.search(_id)
if not isinstance(doc, Document):
raise ValueError(f"Could not find document for id {_id}, got {doc}")
docs.append((doc, dist))
return docs
[docs] def similarity_search_with_score_by_vector(
self, embedding: List[float], k: int = 4, search_k: int = -1
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
embedding: Embedding vector to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
search_k: inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns:
List of Documents most similar to the query and score for each
"""
idxs, dists = self.index.get_nns_by_vector(
embedding, k, search_k=search_k, include_distances=True
)
return self.process_index_results(idxs, dists)
[docs] def similarity_search_with_score_by_index(
self, docstore_index: int, k: int = 4, search_k: int = -1
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
docstore_index: Index of the document in the docstore.
k: Number of Documents to return. Defaults to 4.
search_k: inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns:
List of Documents most similar to the query and score for each
"""
idxs, dists = self.index.get_nns_by_item(
docstore_index, k, search_k=search_k, include_distances=True
)
return self.process_index_results(idxs, dists)
[docs] def similarity_search_with_score(
self, query: str, k: int = 4, search_k: int = -1
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
search_k: inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns:
List of Documents most similar to the query and score for each
"""
embedding = self.embedding_function(query)
docs = self.similarity_search_with_score_by_vector(embedding, k, search_k)
return docs
[docs] def similarity_search_by_vector(
self, embedding: List[float], k: int = 4, search_k: int = -1, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to embedding vector.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
search_k: inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns:
List of Documents most similar to the embedding.
"""
docs_and_scores = self.similarity_search_with_score_by_vector(
embedding, k, search_k
)
return [doc for doc, _ in docs_and_scores]
[docs] def similarity_search_by_index(
self, docstore_index: int, k: int = 4, search_k: int = -1, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to docstore_index.
Args:
docstore_index: Index of document in docstore
k: Number of Documents to return. Defaults to 4.
search_k: inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns:
List of Documents most similar to the embedding.
"""
docs_and_scores = self.similarity_search_with_score_by_index(
docstore_index, k, search_k
)
return [doc for doc, _ in docs_and_scores]
[docs] def similarity_search(
self, query: str, k: int = 4, search_k: int = -1, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
search_k: inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns:
List of Documents most similar to the query.
"""
docs_and_scores = self.similarity_search_with_score(query, k, search_k)
return [doc for doc, _ in docs_and_scores]
[docs] def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
embedding: Embedding to look up documents similar to.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
k: Number of Documents to return. Defaults to 4.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
idxs = self.index.get_nns_by_vector(
embedding, fetch_k, search_k=-1, include_distances=False
)
embeddings = [self.index.get_item_vector(i) for i in idxs]
mmr_selected = maximal_marginal_relevance(
np.array([embedding], dtype=np.float32),
embeddings,
k=k,
lambda_mult=lambda_mult,
)
# ignore the -1's if not enough docs are returned/indexed
selected_indices = [idxs[i] for i in mmr_selected if i != -1]
docs = []
for i in selected_indices:
_id = self.index_to_docstore_id[i]
doc = self.docstore.search(_id)
if not isinstance(doc, Document):
raise ValueError(f"Could not find document for id {_id}, got {doc}")
docs.append(doc)
return docs
[docs] def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
embedding = self.embedding_function(query)
docs = self.max_marginal_relevance_search_by_vector(
embedding, k, fetch_k, lambda_mult=lambda_mult
)
return docs
@classmethod
def __from(
cls,
texts: List[str],
embeddings: List[List[float]],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
metric: str = DEFAULT_METRIC,
trees: int = 100,
n_jobs: int = -1,
**kwargs: Any,
) -> Annoy:
if metric not in INDEX_METRICS:
raise ValueError(
(
f"Unsupported distance metric: {metric}. "
f"Expected one of {list(INDEX_METRICS)}"
)
)
annoy = dependable_annoy_import()
if not embeddings:
raise ValueError("embeddings must be provided to build AnnoyIndex")
f = len(embeddings[0])
index = annoy.AnnoyIndex(f, metric=metric)
for i, emb in enumerate(embeddings):
index.add_item(i, emb)
index.build(trees, n_jobs=n_jobs)
documents = []
for i, text in enumerate(texts):
metadata = metadatas[i] if metadatas else {}
documents.append(Document(page_content=text, metadata=metadata))
index_to_id = {i: str(uuid.uuid4()) for i in range(len(documents))}
docstore = InMemoryDocstore(
{index_to_id[i]: doc for i, doc in enumerate(documents)}
)
return cls(embedding.embed_query, index, metric, docstore, index_to_id)
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
metric: str = DEFAULT_METRIC,
trees: int = 100,
n_jobs: int = -1,
**kwargs: Any,
) -> Annoy:
"""Construct Annoy wrapper from raw documents.
Args:
texts: List of documents to index.
embedding: Embedding function to use.
metadatas: List of metadata dictionaries to associate with documents.
metric: Metric to use for indexing. Defaults to "angular".
trees: Number of trees to use for indexing. Defaults to 100.
n_jobs: Number of jobs to use for indexing. Defaults to -1.
This is a user friendly interface that:
1. Embeds documents.
2. Creates an in memory docstore
3. Initializes the Annoy database
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
index = Annoy.from_texts(texts, embeddings)
"""
embeddings = embedding.embed_documents(texts)
return cls.__from(
texts, embeddings, embedding, metadatas, metric, trees, n_jobs, **kwargs
)
[docs] @classmethod
def from_embeddings(
cls,
text_embeddings: List[Tuple[str, List[float]]],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
metric: str = DEFAULT_METRIC,
trees: int = 100,
n_jobs: int = -1,
**kwargs: Any,
) -> Annoy:
"""Construct Annoy wrapper from embeddings.
Args:
text_embeddings: List of tuples of (text, embedding)
embedding: Embedding function to use.
metadatas: List of metadata dictionaries to associate with documents.
metric: Metric to use for indexing. Defaults to "angular".
trees: Number of trees to use for indexing. Defaults to 100.
n_jobs: Number of jobs to use for indexing. Defaults to -1
This is a user friendly interface that:
1. Creates an in memory docstore with provided embeddings
2. Initializes the Annoy database
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
db = Annoy.from_embeddings(text_embedding_pairs, embeddings)
"""
texts = [t[0] for t in text_embeddings]
embeddings = [t[1] for t in text_embeddings]
return cls.__from(
texts, embeddings, embedding, metadatas, metric, trees, n_jobs, **kwargs
)
[docs] def save_local(self, folder_path: str, prefault: bool = False) -> None:
"""Save Annoy index, docstore, and index_to_docstore_id to disk.
Args:
folder_path: folder path to save index, docstore,
and index_to_docstore_id to.
prefault: Whether to pre-load the index into memory.
"""
path = Path(folder_path)
os.makedirs(path, exist_ok=True)
# save index, index config, docstore and index_to_docstore_id
config_object = ConfigParser()
config_object["ANNOY"] = {
"f": self.index.f,
"metric": self.metric,
}
self.index.save(str(path / "index.annoy"), prefault=prefault)
with open(path / "index.pkl", "wb") as file:
pickle.dump((self.docstore, self.index_to_docstore_id, config_object), file)
[docs] @classmethod
def load_local(
cls,
folder_path: str,
embeddings: Embeddings,
) -> Annoy:
"""Load Annoy index, docstore, and index_to_docstore_id to disk.
Args:
folder_path: folder path to load index, docstore,
and index_to_docstore_id from.
embeddings: Embeddings to use when generating queries.
"""
path = Path(folder_path)
# load index separately since it is not picklable
annoy = dependable_annoy_import()
# load docstore and index_to_docstore_id
with open(path / "index.pkl", "rb") as file:
docstore, index_to_docstore_id, config_object = pickle.load(file)
f = int(config_object["ANNOY"]["f"])
metric = config_object["ANNOY"]["metric"]
index = annoy.AnnoyIndex(f, metric=metric)
index.load(str(path / "index.annoy"))
return cls(
embeddings.embed_query, index, metric, docstore, index_to_docstore_id
) | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html |
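A short end-to-end sketch for the wrapper above, assuming ``annoy`` is installed and
an ``OPENAI_API_KEY`` is set; the texts and folder path are illustrative.
.. code-block:: python
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import Annoy
    embeddings = OpenAIEmbeddings()
    # Annoy indexes are immutable, so every text must be supplied up front.
    db = Annoy.from_texts(["hello annoy", "hello world"], embeddings, metric="angular")
    docs = db.similarity_search("hello", k=1)
    db.save_local("annoy_index")  # persists index.annoy and index.pkl
    restored = Annoy.load_local("annoy_index", embeddings=embeddings)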
482a9cdc-7488-4b33-b2cd-472c6c9b817b | Source code for langchain.vectorstores.typesense
"""Wrapper around Typesense vector search"""
from __future__ import annotations
import uuid
from typing import TYPE_CHECKING, Any, Iterable, List, Optional, Tuple, Union
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_env
from langchain.vectorstores.base import VectorStore
if TYPE_CHECKING:
from typesense.client import Client
from typesense.collection import Collection
[docs]class Typesense(VectorStore):
"""Wrapper around Typesense vector search.
To use, you should have the ``typesense`` python package installed.
Example:
.. code-block:: python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Typesense
import typesense
node = {
"host": "localhost", # For Typesense Cloud use xxx.a1.typesense.net
"port": "8108", # For Typesense Cloud use 443
"protocol": "http" # For Typesense Cloud use https
}
typesense_client = typesense.Client(
{
"nodes": [node],
"api_key": "<API_KEY>",
"connection_timeout_seconds": 2
}
)
typesense_collection_name = "langchain-memory"
embedding = OpenAIEmbeddings()
vectorstore = Typesense(
typesense_client=typesense_client,
embedding=embedding,
typesense_collection_name=typesense_collection_name,
text_key="text",
)
"""
def __init__(
self,
typesense_client: Client,
embedding: Embeddings,
*,
typesense_collection_name: Optional[str] = None,
text_key: str = "text",
):
"""Initialize with Typesense client."""
try:
from typesense import Client
except ImportError:
raise ValueError(
"Could not import typesense python package. "
"Please install it with `pip install typesense`."
)
if not isinstance(typesense_client, Client):
raise ValueError(
f"typesense_client should be an instance of typesense.Client, "
f"got {type(typesense_client)}"
)
self._typesense_client = typesense_client
self._embedding = embedding
self._typesense_collection_name = (
typesense_collection_name or f"langchain-{str(uuid.uuid4())}"
)
self._text_key = text_key
@property
def _collection(self) -> Collection:
return self._typesense_client.collections[self._typesense_collection_name]
def _prep_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]],
ids: Optional[List[str]],
) -> List[dict]:
"""Embed and create the documents"""
_ids = ids or (str(uuid.uuid4()) for _ in texts)
_metadatas: Iterable[dict] = metadatas or ({} for _ in texts)
embedded_texts = self._embedding.embed_documents(list(texts))
return [
{"id": _id, "vec": vec, f"{self._text_key}": text, "metadata": metadata}
for _id, vec, text, metadata in zip(_ids, embedded_texts, texts, _metadatas)
]
def _create_collection(self, num_dim: int) -> None:
fields = [
{"name": "vec", "type": "float[]", "num_dim": num_dim},
{"name": f"{self._text_key}", "type": "string"},
{"name": ".*", "type": "auto"},
]
self._typesense_client.collections.create(
{"name": self._typesense_collection_name, "fields": fields}
)
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embedding and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
ids: Optional list of ids to associate with the texts.
Returns:
List of ids from adding the texts into the vectorstore.
"""
from typesense.exceptions import ObjectNotFound
docs = self._prep_texts(texts, metadatas, ids)
try:
self._collection.documents.import_(docs, {"action": "upsert"})
except ObjectNotFound:
# Create the collection if it doesn't already exist
self._create_collection(len(docs[0]["vec"]))
self._collection.documents.import_(docs, {"action": "upsert"})
return [doc["id"] for doc in docs]
[docs] def similarity_search_with_score(
self,
query: str,
k: int = 10,
filter: Optional[str] = "",
) -> List[Tuple[Document, float]]:
"""Return typesense documents most similar to query, along with scores.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 10.
A minimum of 10 results will be returned.
filter: typesense filter_by expression to filter documents on
Returns:
List of Documents most similar to the query and score for each
"""
embedded_query = [str(x) for x in self._embedding.embed_query(query)]
query_obj = {
"q": "*",
"vector_query": f'vec:([{",".join(embedded_query)}], k:{k})',
"filter_by": filter,
"collection": self._typesense_collection_name,
}
docs = []
response = self._typesense_client.multi_search.perform(
{"searches": [query_obj]}, {}
)
for hit in response["results"][0]["hits"]:
document = hit["document"]
metadata = document["metadata"]
text = document[self._text_key]
score = hit["vector_distance"]
docs.append((Document(page_content=text, metadata=metadata), score))
return docs
[docs] def similarity_search(
self,
query: str,
k: int = 10,
filter: Optional[str] = "",
**kwargs: Any,
) -> List[Document]:
"""Return typesense documents most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 10.
A minimum of 10 results will be returned.
filter: typesense filter_by expression to filter documents on
Returns:
List of Documents most similar to the query.
"""
docs_and_score = self.similarity_search_with_score(query, k=k, filter=filter)
return [doc for doc, _ in docs_and_score]
[docs] @classmethod
def from_client_params(
cls,
embedding: Embeddings,
*,
host: str = "localhost",
port: Union[str, int] = "8108",
protocol: str = "http",
typesense_api_key: Optional[str] = None,
connection_timeout_seconds: int = 2,
**kwargs: Any,
) -> Typesense:
"""Initialize Typesense directly from client parameters.
Example:
.. code-block:: python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Typesense
# Pass in typesense_api_key as kwarg or set env var "TYPESENSE_API_KEY".
vectorstore = Typesense.from_client_params(
OpenAIEmbeddings(),
host="localhost",
port="8108",
protocol="http",
typesense_collection_name="langchain-memory",
)
"""
try:
from typesense import Client
except ImportError:
raise ValueError(
"Could not import typesense python package. "
"Please install it with `pip install typesense`."
)
node = {
"host": host,
"port": str(port),
"protocol": protocol,
}
typesense_api_key = typesense_api_key or get_from_env(
"typesense_api_key", "TYPESENSE_API_KEY"
)
client_config = {
"nodes": [node],
"api_key": typesense_api_key,
"connection_timeout_seconds": connection_timeout_seconds,
}
return cls(Client(client_config), embedding, **kwargs)
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
typesense_client: Optional[Client] = None,
typesense_client_params: Optional[dict] = None,
typesense_collection_name: Optional[str] = None,
text_key: str = "text",
**kwargs: Any,
) -> Typesense:
"""Construct Typesense wrapper from raw text."""
if typesense_client:
vectorstore = cls(typesense_client, embedding, **kwargs)
elif typesense_client_params:
vectorstore = cls.from_client_params(
embedding, **typesense_client_params, **kwargs
)
else:
raise ValueError(
"Must specify one of typesense_client or typesense_client_params."
)
vectorstore.add_texts(texts, metadatas=metadatas, ids=ids)
return vectorstore | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html |
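A usage sketch for the wrapper above, assuming a Typesense server on localhost and
both ``TYPESENSE_API_KEY`` and ``OPENAI_API_KEY`` in the environment; the collection
name and texts are illustrative.
.. code-block:: python
    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import Typesense
    docsearch = Typesense.from_texts(
        ["Typesense stores vectors alongside ordinary document fields"],
        OpenAIEmbeddings(),
        typesense_client_params={
            "host": "localhost",
            "port": "8108",
            "protocol": "http",
        },
        typesense_collection_name="langchain-demo",
    )
    docs = docsearch.similarity_search("where are the vectors stored?", k=10)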
f44e9ce0-6cbc-49e2-87b7-86b774a21511 | Source code for langchain.vectorstores.pinecone
"""Wrapper around Pinecone vector database."""
from __future__ import annotations
import logging
import uuid
from typing import Any, Callable, Iterable, List, Optional, Tuple
import numpy as np
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
logger = logging.getLogger(__name__)
[docs]class Pinecone(VectorStore):
"""Wrapper around Pinecone vector database.
To use, you should have the ``pinecone-client`` python package installed.
Example:
.. code-block:: python
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
index = pinecone.Index("langchain-demo")
embeddings = OpenAIEmbeddings()
vectorstore = Pinecone(index, embeddings.embed_query, "text")
"""
def __init__(
self,
index: Any,
embedding_function: Callable,
text_key: str,
namespace: Optional[str] = None,
):
"""Initialize with Pinecone client."""
try:
import pinecone
except ImportError:
raise ValueError(
"Could not import pinecone python package. "
"Please install it with `pip install pinecone-client`."
)
if not isinstance(index, pinecone.index.Index):
raise ValueError(
f"client should be an instance of pinecone.index.Index, "
f"got {type(index)}"
)
self._index = index
self._embedding_function = embedding_function
self._text_key = text_key
self._namespace = namespace
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
namespace: Optional[str] = None,
batch_size: int = 32,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
ids: Optional list of ids to associate with the texts.
namespace: Optional pinecone namespace to add the texts to.
Returns:
List of ids from adding the texts into the vectorstore.
"""
if namespace is None:
namespace = self._namespace
# Embed and create the documents
docs = []
ids = ids or [str(uuid.uuid4()) for _ in texts]
for i, text in enumerate(texts):
embedding = self._embedding_function(text)
metadata = metadatas[i] if metadatas else {}
metadata[self._text_key] = text
docs.append((ids[i], embedding, metadata))
# upsert to Pinecone
self._index.upsert(vectors=docs, namespace=namespace, batch_size=batch_size)
return ids
[docs] def similarity_search_with_score(
self,
query: str,
k: int = 4,
filter: Optional[dict] = None,
namespace: Optional[str] = None,
) -> List[Tuple[Document, float]]:
"""Return pinecone documents most similar to query, along with scores.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter: Dictionary of argument(s) to filter on metadata
namespace: Namespace to search in. Default will search in '' namespace.
Returns:
List of Documents most similar to the query and score for each
"""
if namespace is None:
namespace = self._namespace
query_obj = self._embedding_function(query)
docs = []
results = self._index.query(
[query_obj],
top_k=k,
include_metadata=True,
namespace=namespace,
filter=filter,
)
for res in results["matches"]:
metadata = res["metadata"]
if self._text_key in metadata:
text = metadata.pop(self._text_key)
score = res["score"]
docs.append((Document(page_content=text, metadata=metadata), score))
else:
logger.warning(
f"Found document with no `{self._text_key}` key. Skipping."
)
return docs
[docs] def similarity_search(
self,
query: str,
k: int = 4,
filter: Optional[dict] = None,
namespace: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Return pinecone documents most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter: Dictionary of argument(s) to filter on metadata
namespace: Namespace to search in. Default will search in '' namespace.
Returns:
List of Documents most similar to the query
"""
docs_and_scores = self.similarity_search_with_score(
query, k=k, filter=filter, namespace=namespace, **kwargs
)
return [doc for doc, _ in docs_and_scores]
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
return self.similarity_search_with_score(query, k)
[docs] def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
filter: Optional[dict] = None,
namespace: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
if namespace is None:
namespace = self._namespace
results = self._index.query(
[embedding],
top_k=fetch_k,
include_values=True,
include_metadata=True,
namespace=namespace,
filter=filter,
)
mmr_selected = maximal_marginal_relevance(
np.array([embedding], dtype=np.float32),
[item["values"] for item in results["matches"]],
k=k,
lambda_mult=lambda_mult,
)
selected = [results["matches"][i]["metadata"] for i in mmr_selected]
return [
Document(page_content=metadata.pop(self._text_key), metadata=metadata)
for metadata in selected
]
[docs] def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
filter: Optional[dict] = None,
namespace: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
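Example (an illustrative sketch; ``vectorstore`` is a populated Pinecone
instance such as the one in the class-level example):
.. code-block:: python
docs = vectorstore.max_marginal_relevance_search(
"What did the president say?",
k=4,
fetch_k=20,
lambda_mult=0.5,
)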
"""
embedding = self._embedding_function(query)
return self.max_marginal_relevance_search_by_vector(
embedding, k, fetch_k, lambda_mult, filter, namespace
)
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
batch_size: int = 32,
text_key: str = "text",
index_name: Optional[str] = None,
namespace: Optional[str] = None,
**kwargs: Any,
) -> Pinecone:
"""Construct Pinecone wrapper from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Adds the documents to a provided Pinecone index
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain import Pinecone
from langchain.embeddings import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
embeddings = OpenAIEmbeddings()
pinecone = Pinecone.from_texts(
texts,
embeddings,
index_name="langchain-demo"
)
"""
try:
import pinecone
except ImportError:
raise ValueError(
"Could not import pinecone python package. "
"Please install it with `pip install pinecone-client`."
)
indexes = pinecone.list_indexes() # checks if provided index exists
if index_name in indexes:
index = pinecone.Index(index_name)
elif len(indexes) == 0:
raise ValueError(
"No active indexes found in your Pinecone project, "
"are you sure you're using the right API key and environment?"
)
else:
raise ValueError(
f"Index '{index_name}' not found in your Pinecone project. "
f"Did you mean one of the following indexes: {', '.join(indexes)}"
)
for i in range(0, len(texts), batch_size):
# set end position of batch
i_end = min(i + batch_size, len(texts))
# get batch of texts and ids
lines_batch = texts[i:i_end]
# create ids if not provided
if ids:
ids_batch = ids[i:i_end]
else:
ids_batch = [str(uuid.uuid4()) for n in range(i, i_end)]
# create embeddings
embeds = embedding.embed_documents(lines_batch)
# prep metadata and upsert batch
if metadatas:
metadata = metadatas[i:i_end]
else:
metadata = [{} for _ in range(i, i_end)]
for j, line in enumerate(lines_batch):
metadata[j][text_key] = line
to_upsert = zip(ids_batch, embeds, metadata)
# upsert to Pinecone
index.upsert(vectors=list(to_upsert), namespace=namespace)
return cls(index, embedding.embed_query, text_key, namespace)
[docs] @classmethod
def from_existing_index(
cls,
index_name: str,
embedding: Embeddings,
text_key: str = "text",
namespace: Optional[str] = None,
) -> Pinecone:
"""Load pinecone vectorstore from index name."""
try:
import pinecone
except ImportError:
raise ValueError(
"Could not import pinecone python package. "
"Please install it with `pip install pinecone-client`."
)
return cls(
pinecone.Index(index_name), embedding.embed_query, text_key, namespace
)
[docs] def delete(self, ids: List[str]) -> None:
"""Delete by vector IDs.
Args:
ids: List of ids to delete.
"""
# This is the maximum number of IDs that can be deleted
chunk_size = 1000
for i in range(0, len(ids), chunk_size):
chunk = ids[i : i + chunk_size]
self._index.delete(ids=chunk) | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html |
e50fe63e-768d-4199-a74a-dfc314f1633e | Source code for langchain.vectorstores.tigris
from __future__ import annotations
import itertools
from typing import TYPE_CHECKING, Any, Iterable, List, Optional, Tuple
from langchain.embeddings.base import Embeddings
from langchain.schema import Document
from langchain.vectorstores import VectorStore
if TYPE_CHECKING:
from tigrisdb import TigrisClient
from tigrisdb import VectorStore as TigrisVectorStore
from tigrisdb.types.filters import Filter as TigrisFilter
from tigrisdb.types.vector import Document as TigrisDocument
[docs]class Tigris(VectorStore):
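"""Wrapper around the Tigris vector store.
To use, you should have the ``tigrisdb`` python package installed and your
Tigris project credentials configured (typically via environment variables).
Example (an illustrative sketch; ``texts`` is a list of strings and the
index name is a placeholder):
.. code-block:: python
from langchain.vectorstores import Tigris
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
store = Tigris.from_texts(
texts,
embeddings,
index_name="my_index",
)
docs = store.similarity_search("query", k=4)
"""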
def __init__(self, client: TigrisClient, embeddings: Embeddings, index_name: str):
"""Initialize Tigris vector store"""
try:
import tigrisdb # noqa: F401
except ImportError:
raise ValueError(
"Could not import tigrisdb python package. "
"Please install it with `pip install tigrisdb`"
)
self._embed_fn = embeddings
self._vector_store = TigrisVectorStore(client.get_search(), index_name)
@property
def search_index(self) -> TigrisVectorStore:
return self._vector_store
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
ids: Optional list of ids for documents.
Ids will be autogenerated if not provided.
kwargs: vectorstore specific parameters
Returns:
List of ids from adding the texts into the vectorstore.
"""
docs = self._prep_docs(texts, metadatas, ids)
result = self.search_index.add_documents(docs)
return [r.id for r in result]
[docs] def similarity_search(
self,
query: str,
k: int = 4,
filter: Optional[TigrisFilter] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to query."""
docs_with_scores = self.similarity_search_with_score(query, k, filter)
return [doc for doc, _ in docs_with_scores]
[docs] def similarity_search_with_score(
self,
query: str,
k: int = 4,
filter: Optional[TigrisFilter] = None,
) -> List[Tuple[Document, float]]:
"""Run similarity search with Chroma with distance.
Args:
query (str): Query text to search for.
k (int): Number of results to return. Defaults to 4.
filter (Optional[TigrisFilter]): Filter by metadata. Defaults to None.
Returns:
List[Tuple[Document, float]]: List of documents most similar to the query
text with distance in float.
"""
vector = self._embed_fn.embed_query(query)
result = self.search_index.similarity_search(
vector=vector, k=k, filter_by=filter
)
docs: List[Tuple[Document, float]] = []
for r in result:
docs.append(
(
Document(
page_content=r.doc["text"], metadata=r.doc.get("metadata")
),
r.score,
)
)
return docs
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
client: Optional[TigrisClient] = None,
index_name: Optional[str] = None,
**kwargs: Any,
) -> Tigris:
"""Return VectorStore initialized from texts and embeddings."""
if not index_name:
raise ValueError("`index_name` is required")
if not client:
client = TigrisClient()
store = cls(client, embedding, index_name)
store.add_texts(texts=texts, metadatas=metadatas, ids=ids)
return store
def _prep_docs(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]],
ids: Optional[List[str]],
) -> List[TigrisDocument]:
embeddings: List[List[float]] = self._embed_fn.embed_documents(list(texts))
docs: List[TigrisDocument] = []
for t, m, e, _id in itertools.zip_longest(
texts, metadatas or [], embeddings or [], ids or []
):
doc: TigrisDocument = {
"text": t,
"embeddings": e or [],
"metadata": m or {},
}
if _id:
doc["id"] = _id
docs.append(doc)
return docs | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tigris.html |
442359b8-1b63-4d9a-bfc7-8924d6a16625 | Source code for langchain.vectorstores.starrocks
"""Wrapper around open source StarRocks VectorSearch capability."""
from __future__ import annotations
import json
import logging
from hashlib import sha1
from threading import Thread
from typing import Any, Dict, Iterable, List, Optional, Tuple
from pydantic import BaseSettings
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
logger = logging.getLogger()
DEBUG = False
def has_mul_sub_str(s: str, *args: Any) -> bool:
"""
Check if a string has multiple substrings.
Args:
s: The string to check
*args: The substrings to check for in the string
Returns:
bool: True if all substrings are present in the string, False otherwise
"""
for a in args:
if a not in s:
return False
return True
def debug_output(s: Any) -> None:
"""
Print a debug message if DEBUG is True.
Args:
s: The message to print
"""
if DEBUG:
print(s)
def get_named_result(connection: Any, query: str) -> List[dict[str, Any]]:
"""
Get a named result from a query.
Args:
connection: The connection to the database
query: The query to execute
Returns:
List[dict[str, Any]]: The result of the query
"""
cursor = connection.cursor()
cursor.execute(query)
columns = cursor.description
result = []
for value in cursor.fetchall():
r = {}
for idx, datum in enumerate(value):
k = columns[idx][0]
r[k] = datum
result.append(r)
debug_output(result)
cursor.close()
return result
class StarRocksSettings(BaseSettings):
"""StarRocks Client Configuration
Attribute:
StarRocks_host (str) : A URL to connect to the StarRocks backend.
Defaults to 'localhost'.
StarRocks_port (int) : Port to connect to StarRocks over the MySQL protocol.
Defaults to 9030.
username (str) : Username to login. Defaults to 'root'.
password (str) : Password to login. Defaults to '' (empty).
database (str) : Database name to find the table. Defaults to 'default'.
table (str) : Table name to operate on.
Defaults to 'vector_table'.
column_map (Dict) : Column map to project column names onto langchain
semantics. Must have the keys `id`, `document`,
`embedding` and `metadata`, matching the table columns
one-to-one. For example:
.. code-block:: python
{
'id': 'text_id',
'embedding': 'text_embedding',
'document': 'text_plain',
'metadata': 'metadata_dictionary_in_json',
}
Defaults to identity map.
"""
host: str = "localhost"
port: int = 9030
username: str = "root"
password: str = ""
column_map: Dict[str, str] = {
"id": "id",
"document": "document",
"embedding": "embedding",
"metadata": "metadata",
}
database: str = "default"
table: str = "langchain"
def __getitem__(self, item: str) -> Any:
return getattr(self, item)
class Config:
env_file = ".env"
env_prefix = "starrocks_"
env_file_encoding = "utf-8"
[docs]class StarRocks(VectorStore):
"""Wrapper around StarRocks vector database
You need the `pymysql` python package and a valid account
to connect to StarRocks.
Right now StarRocks has only implemented the `cosine_similarity` function to
compute the distance between two vectors, and there is no vector index yet,
so every stored vector is scanned to compute the distance for a query.
For more information, please visit
[StarRocks official site](https://www.starrocks.io/)
[StarRocks github](https://github.com/StarRocks/starrocks)
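Example (an illustrative sketch; ``texts`` is a list of strings, and the
host, credentials, database and table below are placeholders for your own
StarRocks deployment):
.. code-block:: python
from langchain.vectorstores import StarRocks
from langchain.vectorstores.starrocks import StarRocksSettings
from langchain.embeddings import OpenAIEmbeddings
settings = StarRocksSettings(
host="127.0.0.1",
port=9030,
username="root",
password="",
database="example_db",
table="langchain",
)
embeddings = OpenAIEmbeddings()
vectorstore = StarRocks.from_texts(texts, embeddings, config=settings)
docs = vectorstore.similarity_search("query", k=4)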
"""
def __init__(
self,
embedding: Embeddings,
config: Optional[StarRocksSettings] = None,
**kwargs: Any,
) -> None:
"""StarRocks Wrapper to LangChain
embedding_function (Embeddings):
config (StarRocksSettings): Configuration to StarRocks Client
"""
try:
import pymysql # type: ignore[import]
except ImportError:
raise ImportError(
"Could not import pymysql python package. "
"Please install it with `pip install pymysql`."
)
try:
from tqdm import tqdm
self.pgbar = tqdm
except ImportError:
# Fall back to a no-op progress bar if tqdm is not installed
self.pgbar = lambda x, **kwargs: x
super().__init__()
if config is not None:
self.config = config
else:
self.config = StarRocksSettings()
assert self.config
assert self.config.host and self.config.port
assert self.config.column_map and self.config.database and self.config.table
for k in ["id", "embedding", "document", "metadata"]:
assert k in self.config.column_map
# initialize the schema
dim = len(embedding.embed_query("test"))
self.schema = f"""\
CREATE TABLE IF NOT EXISTS {self.config.database}.{self.config.table}(
{self.config.column_map['id']} string,
{self.config.column_map['document']} string,
{self.config.column_map['embedding']} array<float>,
{self.config.column_map['metadata']} string
) ENGINE = OLAP PRIMARY KEY(id) DISTRIBUTED BY HASH(id) \
PROPERTIES ("replication_num" = "1")\
"""
self.dim = dim
self.BS = "\\"
self.must_escape = ("\\", "'")
self.embedding_function = embedding
self.dist_order = "DESC"
debug_output(self.config)
# Create a connection to StarRocks
self.connection = pymysql.connect(
host=self.config.host,
port=self.config.port,
user=self.config.username,
password=self.config.password,
database=self.config.database,
**kwargs,
)
debug_output(self.schema)
get_named_result(self.connection, self.schema)
[docs] def escape_str(self, value: str) -> str:
return "".join(f"{self.BS}{c}" if c in self.must_escape else c for c in value)
def _build_insert_sql(self, transac: Iterable, column_names: Iterable[str]) -> str:
ks = ",".join(column_names)
embed_tuple_index = tuple(column_names).index(
self.config.column_map["embedding"]
)
_data = []
for n in transac:
n = ",".join(
[
f"'{self.escape_str(str(_n))}'"
if idx != embed_tuple_index
else f"array<float>{str(_n)}"
for (idx, _n) in enumerate(n)
]
)
_data.append(f"({n})")
i_str = f"""
INSERT INTO
{self.config.database}.{self.config.table}({ks})
VALUES
{','.join(_data)}
"""
return i_str
def _insert(self, transac: Iterable, column_names: Iterable[str]) -> None:
_insert_query = self._build_insert_sql(transac, column_names)
debug_output(_insert_query)
get_named_result(self.connection, _insert_query)
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
batch_size: int = 32,
ids: Optional[Iterable[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Insert more texts through the embeddings and add to the VectorStore.
Args:
texts: Iterable of strings to add to the VectorStore.
ids: Optional list of ids to associate with the texts.
batch_size: Batch size for the insertion.
metadatas: Optional list of metadatas associated with the texts.
Returns:
List of ids from adding the texts into the VectorStore.
"""
# Embed and create the documents
ids = ids or [sha1(t.encode("utf-8")).hexdigest() for t in texts]
colmap_ = self.config.column_map
transac = []
column_names = {
colmap_["id"]: ids,
colmap_["document"]: texts,
colmap_["embedding"]: self.embedding_function.embed_documents(list(texts)),
}
metadatas = metadatas or [{} for _ in texts]
column_names[colmap_["metadata"]] = map(json.dumps, metadatas)
assert len(set(colmap_) - set(column_names)) >= 0
keys, values = zip(*column_names.items())
try:
t = None
for v in self.pgbar(
zip(*values), desc="Inserting data...", total=len(metadatas)
):
assert (
len(v[keys.index(self.config.column_map["embedding"])]) == self.dim
)
transac.append(v)
if len(transac) == batch_size:
if t:
t.join()
t = Thread(target=self._insert, args=[transac, keys])
t.start()
transac = []
if len(transac) > 0:
if t:
t.join()
self._insert(transac, keys)
return [i for i in ids]
except Exception as e:
logger.error(f"\033[91m\033[1m{type(e)}\033[0m \033[95m{str(e)}\033[0m")
return []
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[Dict[Any, Any]]] = None,
config: Optional[StarRocksSettings] = None,
text_ids: Optional[Iterable[str]] = None,
batch_size: int = 32,
**kwargs: Any,
) -> StarRocks:
"""Create StarRocks wrapper with existing texts
Args:
embedding (Embeddings): Embedding function used to embed the texts
texts (Iterable[str]): List or tuple of strings to be added
config (StarRocksSettings, Optional): StarRocks configuration
text_ids (Optional[Iterable], optional): IDs for the texts.
Defaults to None.
batch_size (int, optional): Batch size when transmitting data to StarRocks.
Defaults to 32.
metadatas (List[dict], optional): Metadata dicts attached to each of the
texts. Defaults to None.
Returns:
StarRocks Index
"""
ctx = cls(embedding, config, **kwargs)
ctx.add_texts(texts, ids=text_ids, batch_size=batch_size, metadatas=metadatas)
return ctx
def __repr__(self) -> str:
"""Text representation for StarRocks Vector Store, prints backends, username
and schemas. Easy to use with `str(StarRocks())`
Returns:
repr: string to show connection info and data schema
"""
_repr = f"\033[92m\033[1m{self.config.database}.{self.config.table} @ "
_repr += f"{self.config.host}:{self.config.port}\033[0m\n\n"
_repr += f"\033[1musername: {self.config.username}\033[0m\n\nTable Schema:\n"
width = 25
fields = 3
_repr += "-" * (width * fields + 1) + "\n"
columns = ["name", "type", "key"]
_repr += f"|\033[94m{columns[0]:24s}\033[0m|\033[96m{columns[1]:24s}"
_repr += f"\033[0m|\033[96m{columns[2]:24s}\033[0m|\n"
_repr += "-" * (width * fields + 1) + "\n"
q_str = f"DESC {self.config.database}.{self.config.table}"
debug_output(q_str)
rs = get_named_result(self.connection, q_str)
for r in rs:
_repr += f"|\033[94m{r['Field']:24s}\033[0m|\033[96m{r['Type']:24s}"
_repr += f"\033[0m|\033[96m{r['Key']:24s}\033[0m|\n"
_repr += "-" * (width * fields + 1) + "\n"
return _repr
def _build_query_sql(
self, q_emb: List[float], topk: int, where_str: Optional[str] = None
) -> str:
q_emb_str = ",".join(map(str, q_emb))
if where_str:
where_str = f"WHERE {where_str}"
else:
where_str = ""
q_str = f"""
SELECT {self.config.column_map['document']},
{self.config.column_map['metadata']},
cosine_similarity_norm(array<float>[{q_emb_str}],
{self.config.column_map['embedding']}) as dist
FROM {self.config.database}.{self.config.table}
{where_str}
ORDER BY dist {self.dist_order}
LIMIT {topk}
"""
debug_output(q_str)
return q_str
[docs] def similarity_search(
self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any
) -> List[Document]:
"""Perform a similarity search with StarRocks
Args:
query (str): query string
k (int, optional): Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional): where condition string.
Defaults to None.
NOTE: Please do not let an end user fill this in, and always be aware
of SQL injection. When dealing with metadatas, remember to
use `{self.metadata_column}.attribute` instead of `attribute`
alone. The default name for it is `metadata`.
Returns:
List[Document]: List of Documents
"""
return self.similarity_search_by_vector(
self.embedding_function.embed_query(query), k, where_str, **kwargs
)
[docs] def similarity_search_by_vector(
self,
embedding: List[float],
k: int = 4,
where_str: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Perform a similarity search with StarRocks by vectors
Args:
embedding (List[float]): query embedding vector
k (int, optional): Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional): where condition string.
Defaults to None.
NOTE: Please do not let an end user fill this in, and always be aware
of SQL injection. When dealing with metadatas, remember to
use `{self.metadata_column}.attribute` instead of `attribute`
alone. The default name for it is `metadata`.
Returns:
List[Document]: List of Documents
"""
q_str = self._build_query_sql(embedding, k, where_str)
try:
return [
Document(
page_content=r[self.config.column_map["document"]],
metadata=json.loads(r[self.config.column_map["metadata"]]),
)
for r in get_named_result(self.connection, q_str)
]
except Exception as e:
logger.error(f"\033[91m\033[1m{type(e)}\033[0m \033[95m{str(e)}\033[0m")
return []
[docs] def similarity_search_with_relevance_scores(
self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any
) -> List[Tuple[Document, float]]:
"""Perform a similarity search with StarRocks
Args:
query (str): query string
k (int, optional): Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional): where condition string.
Defaults to None.
NOTE: Please do not let an end user fill this in, and always be aware
of SQL injection. When dealing with metadatas, remember to
use `{self.metadata_column}.attribute` instead of `attribute`
alone. The default name for it is `metadata`.
Returns:
List[Tuple[Document, float]]: List of (Document, relevance score) tuples
"""
q_str = self._build_query_sql(
self.embedding_function.embed_query(query), k, where_str
)
try:
return [
(
Document(
page_content=r[self.config.column_map["document"]],
metadata=json.loads(r[self.config.column_map["metadata"]]),
),
r["dist"],
)
for r in get_named_result(self.connection, q_str)
]
except Exception as e:
logger.error(f"\033[91m\033[1m{type(e)}\033[0m \033[95m{str(e)}\033[0m")
return []
[docs] def drop(self) -> None:
"""
Helper function: Drop data
"""
get_named_result(
self.connection,
f"DROP TABLE IF EXISTS {self.config.database}.{self.config.table}",
)
@property
def metadata_column(self) -> str:
return self.config.column_map["metadata"] | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/starrocks.html |
42a885e8-e2c4-471f-ad50-0e72902b6ef1 | Source code for langchain.vectorstores.vectara
"""Wrapper around Vectara vector database."""
from __future__ import annotations
import json
import logging
import os
from hashlib import md5
from typing import Any, Iterable, List, Optional, Tuple, Type
import requests
from pydantic import Field
from langchain.embeddings.base import Embeddings
from langchain.schema import Document
from langchain.vectorstores.base import VectorStore, VectorStoreRetriever
[docs]class Vectara(VectorStore):
"""Implementation of Vector Store using Vectara (https://vectara.com).
Example:
.. code-block:: python
from langchain.vectorstores import Vectara
vectorstore = Vectara(
vectara_customer_id=vectara_customer_id,
vectara_corpus_id=vectara_corpus_id,
vectara_api_key=vectara_api_key
)
"""
def __init__(
self,
vectara_customer_id: Optional[str] = None,
vectara_corpus_id: Optional[str] = None,
vectara_api_key: Optional[str] = None,
):
"""Initialize with Vectara API."""
self._vectara_customer_id = vectara_customer_id or os.environ.get(
"VECTARA_CUSTOMER_ID"
)
self._vectara_corpus_id = vectara_corpus_id or os.environ.get(
"VECTARA_CORPUS_ID"
)
self._vectara_api_key = vectara_api_key or os.environ.get("VECTARA_API_KEY")
if (
self._vectara_customer_id is None
or self._vectara_corpus_id is None
or self._vectara_api_key is None
):
logging.warning(
"Cant find Vectara credentials, customer_id or corpus_id in "
"environment."
)
else:
logging.debug(f"Using corpus id {self._vectara_corpus_id}")
self._session = requests.Session() # to reuse connections
adapter = requests.adapters.HTTPAdapter(max_retries=3)
self._session.mount("http://", adapter)
def _get_post_headers(self) -> dict:
"""Returns headers that should be attached to each post request."""
return {
"x-api-key": self._vectara_api_key,
"customer-id": self._vectara_customer_id,
"Content-Type": "application/json",
}
def _delete_doc(self, doc_id: str) -> bool:
"""
Delete a document from the Vectara corpus.
Args:
doc_id (str): ID of the document to delete.
Returns:
bool: True if deletion was successful, False otherwise.
"""
body = {
"customer_id": self._vectara_customer_id,
"corpus_id": self._vectara_corpus_id,
"document_id": doc_id,
}
response = self._session.post(
"https://api.vectara.io/v1/delete-doc",
data=json.dumps(body),
verify=True,
headers=self._get_post_headers(),
)
if response.status_code != 200:
logging.error(
f"Delete request failed for doc_id = {doc_id} with status code "
f"{response.status_code}, reason {response.reason}, text "
f"{response.text}"
)
return False
return True
def _index_doc(self, doc: dict) -> bool:
request: dict[str, Any] = {}
request["customer_id"] = self._vectara_customer_id
request["corpus_id"] = self._vectara_corpus_id
request["document"] = doc
response = self._session.post(
headers=self._get_post_headers(),
url="https://api.vectara.io/v1/core/index",
data=json.dumps(request),
timeout=30,
verify=True,
)
status_code = response.status_code
result = response.json()
status_str = result["status"]["code"] if "status" in result else None
if status_code == 409 or (status_str and status_str == "ALREADY_EXISTS"):
return False
else:
return True
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
Returns:
List of ids from adding the texts into the vectorstore.
"""
doc_hash = md5()
for t in texts:
doc_hash.update(t.encode())
doc_id = doc_hash.hexdigest()
if metadatas is None:
metadatas = [{} for _ in texts]
doc = {
"document_id": doc_id,
"metadataJson": json.dumps({"source": "langchain"}),
"parts": [
{"text": text, "metadataJson": json.dumps(md)}
for text, md in zip(texts, metadatas)
],
}
succeeded = self._index_doc(doc)
if not succeeded:
self._delete_doc(doc_id)
self._index_doc(doc)
return [doc_id]
[docs] def similarity_search_with_score(
self,
query: str,
k: int = 5,
lambda_val: float = 0.025,
filter: Optional[str] = None,
n_sentence_context: int = 0,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return Vectara documents most similar to query, along with scores.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 5.
lambda_val: lexical match parameter for hybrid search.
filter: Filter expression (string) to apply on metadata. For example a
filter can be "doc.rating > 3.0 and part.lang = 'deu'"; see
https://docs.vectara.com/docs/search-apis/sql/filter-overview
for more details.
n_sentence_context: number of sentences before/after the matching segment
to add
Returns:
List of Documents most similar to the query and score for each.
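Example (an illustrative sketch; ``vectorstore`` is the instance from the
class-level example and the filter expression is a placeholder that must
match metadata defined in your corpus):
.. code-block:: python
docs_and_scores = vectorstore.similarity_search_with_score(
"what does Vectara do?",
k=5,
filter="part.lang = 'eng'",
n_sentence_context=2,
)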
"""
data = json.dumps(
{
"query": [
{
"query": query,
"start": 0,
"num_results": k,
"context_config": {
"sentences_before": n_sentence_context,
"sentences_after": n_sentence_context,
},
"corpus_key": [
{
"customer_id": self._vectara_customer_id,
"corpus_id": self._vectara_corpus_id,
"metadataFilter": filter,
"lexical_interpolation_config": {"lambda": lambda_val},
}
],
}
]
}
)
response = self._session.post(
headers=self._get_post_headers(),
url="https://api.vectara.io/v1/query",
data=data,
timeout=10,
)
if response.status_code != 200:
logging.error(
"Query failed %s",
f"(code {response.status_code}, reason {response.reason}, details "
f"{response.text})",
)
return []
result = response.json()
responses = result["responseSet"][0]["response"]
vectara_default_metadata = ["lang", "len", "offset"]
docs = [
(
Document(
page_content=x["text"],
metadata={
m["name"]: m["value"]
for m in x["metadata"]
if m["name"] not in vectara_default_metadata
},
),
x["score"],
)
for x in responses
]
return docs
[docs] def similarity_search(
self,
query: str,
k: int = 5,
lambda_val: float = 0.025,
filter: Optional[str] = None,
n_sentence_context: int = 0,
**kwargs: Any,
) -> List[Document]:
"""Return Vectara documents most similar to query, along with scores.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 5.
filter: Filter expression (string) to apply on metadata. For example a
filter can be "doc.rating > 3.0 and part.lang = 'deu'"; see
https://docs.vectara.com/docs/search-apis/sql/filter-overview for more
details.
n_sentence_context: number of sentences before/after the matching segment
to add
Returns:
List of Documents most similar to the query
"""
docs_and_scores = self.similarity_search_with_score(
query,
k=k,
lambda_val=lambda_val,
filter=filter,
n_sentence_context=n_sentence_context,
**kwargs,
)
return [doc for doc, _ in docs_and_scores]
[docs] @classmethod
def from_texts(
cls: Type[Vectara],
texts: List[str],
embedding: Optional[Embeddings] = None,
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> Vectara:
"""Construct Vectara wrapper from raw documents.
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain import Vectara
vectara = Vectara.from_texts(
texts,
vectara_customer_id=customer_id,
vectara_corpus_id=corpus_id,
vectara_api_key=api_key,
)
"""
# Note: Vectara generates its own embeddings, so we ignore the provided
# embeddings (required by interface)
vectara = cls(**kwargs)
vectara.add_texts(texts, metadatas)
return vectara
[docs] def as_retriever(self, **kwargs: Any) -> VectaraRetriever:
return VectaraRetriever(vectorstore=self, **kwargs)
class VectaraRetriever(VectorStoreRetriever):
vectorstore: Vectara
search_kwargs: dict = Field(
default_factory=lambda: {
"lambda_val": 0.025,
"k": 5,
"filter": "",
"n_sentence_context": "0",
}
)
"""Search params.
k: Number of Documents to return. Defaults to 5.
lambda_val: lexical match parameter for hybrid search.
filter: Filter expression (string) to apply on metadata. For example a
filter can be "doc.rating > 3.0 and part.lang = 'deu'"; see
https://docs.vectara.com/docs/search-apis/sql/filter-overview
for more details.
n_sentence_context: number of sentences before/after the matching segment to add
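Example (an illustrative sketch; ``vectara`` is a ``Vectara`` vectorstore
instance such as the one in the class-level example):
.. code-block:: python
retriever = vectara.as_retriever(
search_kwargs={"k": 5, "lambda_val": 0.025, "n_sentence_context": 2}
)
docs = retriever.get_relevant_documents("what does Vectara do?")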
"""
def add_texts(
self, texts: List[str], metadatas: Optional[List[dict]] = None
) -> None:
"""Add text to the Vectara vectorstore.
Args:
texts (List[str]): The texts to add.
metadatas (List[dict]): Optional metadata dicts, one per text.
"""
self.vectorstore.add_texts(texts, metadatas) | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html |
4bdb6b26-7ece-428a-be62-8e5e7ea525aa | Source code for langchain.vectorstores.hologres
"""VectorStore wrapper around a Hologres database."""
from __future__ import annotations
import json
import logging
import uuid
from typing import Any, Dict, Iterable, List, Optional, Tuple, Type
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore
ADA_TOKEN_COUNT = 1536
_LANGCHAIN_DEFAULT_TABLE_NAME = "langchain_pg_embedding"
class HologresWrapper:
def __init__(self, connection_string: str, ndims: int, table_name: str) -> None:
import psycopg2
self.table_name = table_name
self.conn = psycopg2.connect(connection_string)
self.cursor = self.conn.cursor()
self.conn.autocommit = False
self.ndims = ndims
def create_vector_extension(self) -> None:
self.cursor.execute("create extension if not exists proxima")
self.conn.commit()
def create_table(self, drop_if_exist: bool = True) -> None:
if drop_if_exist:
self.cursor.execute(f"drop table if exists {self.table_name}")
self.conn.commit()
self.cursor.execute(
f"""create table if not exists {self.table_name} (
id text,
embedding float4[] check(array_ndims(embedding) = 1 and \
array_length(embedding, 1) = {self.ndims}),
metadata json,
document text);"""
)
self.cursor.execute(
f"call set_table_property('{self.table_name}'"
+ """, 'proxima_vectors',
'{"embedding":{"algorithm":"Graph",
"distance_method":"SquaredEuclidean",
"build_params":{"min_flush_proxima_row_count" : 1,
"min_compaction_proxima_row_count" : 1,
"max_total_size_to_merge_mb" : 2000}}}');"""
)
self.conn.commit()
def get_by_id(self, id: str) -> List[Tuple]:
statement = (
f"select id, embedding, metadata, "
f"document from {self.table_name} where id = %s;"
)
self.cursor.execute(
statement,
(id,),
)
self.conn.commit()
return self.cursor.fetchall()
def insert(
self,
embedding: List[float],
metadata: dict,
document: str,
id: Optional[str] = None,
) -> None:
self.cursor.execute(
f'insert into "{self.table_name}" '
f"values (%s, array{json.dumps(embedding)}::float4[], %s, %s)",
(id if id is not None else "null", json.dumps(metadata), document),
)
self.conn.commit()
def query_nearest_neighbours(
self, embedding: List[float], k: int, filter: Optional[Dict[str, str]] = None
) -> List[Tuple[str, str, float]]:
params = []
filter_clause = ""
if filter is not None:
conjuncts = []
for key, val in filter.items():
conjuncts.append("metadata->>%s=%s")
params.append(key)
params.append(val)
filter_clause = "where " + " and ".join(conjuncts)
sql = (
f"select document, metadata::text, "
f"pm_approx_squared_euclidean_distance(array{json.dumps(embedding)}"
f"::float4[], embedding) as distance from"
f" {self.table_name} {filter_clause} order by distance asc limit {k};"
)
self.cursor.execute(sql, tuple(params))
self.conn.commit()
return self.cursor.fetchall()
[docs]class Hologres(VectorStore):
"""VectorStore implementation using Hologres.
- `connection_string` is a Hologres connection string.
- `embedding_function` any embedding function implementing
`langchain.embeddings.base.Embeddings` interface.
- `ndims` is the number of dimensions of the embedding output.
- `table_name` is the name of the table to store embeddings and data.
(default: langchain_pg_embedding)
- NOTE: The table will be created when initializing the store (if not exists)
So, make sure the user has the right permissions to create tables.
- `pre_delete_table` if True, will delete the table if it exists.
(default: False)
- Useful for testing.
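Example (an illustrative sketch; the connection parameters and ``texts``
are placeholders for your own Hologres instance and data):
.. code-block:: python
from langchain.vectorstores import Hologres
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
connection_string = Hologres.connection_string_from_db_params(
host="your-hologres-host",
port=80,
database="postgres",
user="your-user",
password="your-password",
)
vectorstore = Hologres.from_texts(
texts,
embeddings,
connection_string=connection_string,
)
docs = vectorstore.similarity_search("query", k=4)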
"""
def __init__(
self,
connection_string: str,
embedding_function: Embeddings,
ndims: int = ADA_TOKEN_COUNT,
table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,
pre_delete_table: bool = False,
logger: Optional[logging.Logger] = None,
) -> None:
self.connection_string = connection_string
self.ndims = ndims
self.table_name = table_name
self.embedding_function = embedding_function
self.pre_delete_table = pre_delete_table
self.logger = logger or logging.getLogger(__name__)
self.__post_init__()
def __post_init__(
self,
) -> None:
"""
Initialize the store.
"""
self.storage = HologresWrapper(
self.connection_string, self.ndims, self.table_name
)
self.create_vector_extension()
self.create_table()
[docs] def create_vector_extension(self) -> None:
try:
self.storage.create_vector_extension()
except Exception as e:
self.logger.exception(e)
raise e
[docs] def create_table(self) -> None:
self.storage.create_table(self.pre_delete_table)
@classmethod
def __from(
cls,
texts: List[str],
embeddings: List[List[float]],
embedding_function: Embeddings,
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
ndims: int = ADA_TOKEN_COUNT,
table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,
pre_delete_table: bool = False,
**kwargs: Any,
) -> Hologres:
if ids is None:
ids = [str(uuid.uuid1()) for _ in texts]
if not metadatas:
metadatas = [{} for _ in texts]
connection_string = cls.get_connection_string(kwargs)
store = cls(
connection_string=connection_string,
embedding_function=embedding_function,
ndims=ndims,
table_name=table_name,
pre_delete_table=pre_delete_table,
)
store.add_embeddings(
texts=texts, embeddings=embeddings, metadatas=metadatas, ids=ids, **kwargs
)
return store
[docs] def add_embeddings(
self,
texts: Iterable[str],
embeddings: List[List[float]],
metadatas: List[dict],
ids: List[str],
**kwargs: Any,
) -> None:
"""Add embeddings to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
embeddings: List of list of embedding vectors.
metadatas: List of metadatas associated with the texts.
kwargs: vectorstore specific parameters
"""
try:
for text, metadata, embedding, id in zip(texts, metadatas, embeddings, ids):
self.storage.insert(embedding, metadata, text, id)
except Exception as e:
self.logger.exception(e)
self.storage.conn.commit()
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
kwargs: vectorstore specific parameters
Returns:
List of ids from adding the texts into the vectorstore.
"""
if ids is None:
ids = [str(uuid.uuid1()) for _ in texts]
embeddings = self.embedding_function.embed_documents(list(texts))
if not metadatas:
metadatas = [{} for _ in texts]
self.add_embeddings(texts, embeddings, metadatas, ids, **kwargs)
return ids
[docs] def similarity_search(
self,
query: str,
k: int = 4,
filter: Optional[dict] = None,
**kwargs: Any,
) -> List[Document]:
"""Run similarity search with Hologres with distance.
Args:
query (str): Query text to search for.
k (int): Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List of Documents most similar to the query.
"""
embedding = self.embedding_function.embed_query(text=query)
return self.similarity_search_by_vector(
embedding=embedding,
k=k,
filter=filter,
)
[docs] def similarity_search_by_vector(
self,
embedding: List[float],
k: int = 4,
filter: Optional[dict] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to embedding vector.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List of Documents most similar to the query vector.
"""
docs_and_scores = self.similarity_search_with_score_by_vector(
embedding=embedding, k=k, filter=filter
)
return [doc for doc, _ in docs_and_scores]
[docs] def similarity_search_with_score(
self,
query: str,
k: int = 4,
filter: Optional[dict] = None,
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List of Documents most similar to the query and score for each
"""
embedding = self.embedding_function.embed_query(query)
docs = self.similarity_search_with_score_by_vector(
embedding=embedding, k=k, filter=filter
)
return docs
[docs] def similarity_search_with_score_by_vector(
self,
embedding: List[float],
k: int = 4,
filter: Optional[dict] = None,
) -> List[Tuple[Document, float]]:
results: List[Tuple[str, str, float]] = self.storage.query_nearest_neighbours(
embedding, k, filter
)
docs = [
(
Document(
page_content=result[0],
metadata=json.loads(result[1]),
),
result[2],
)
for result in results
]
return docs
[docs] @classmethod
def from_texts(
cls: Type[Hologres],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
ndims: int = ADA_TOKEN_COUNT,
table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,
ids: Optional[List[str]] = None,
pre_delete_table: bool = False,
**kwargs: Any,
) -> Hologres:
"""
Return VectorStore initialized from texts and embeddings.
A Hologres connection string is required. Either pass it as a parameter
or set the HOLOGRES_CONNECTION_STRING environment variable.
"""
embeddings = embedding.embed_documents(list(texts))
return cls.__from(
texts,
embeddings,
embedding,
metadatas=metadatas,
ids=ids,
ndims=ndims,
table_name=table_name,
pre_delete_table=pre_delete_table,
**kwargs,
)
[docs] @classmethod
def from_embeddings(
cls,
text_embeddings: List[Tuple[str, List[float]]],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
ndims: int = ADA_TOKEN_COUNT,
table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,
ids: Optional[List[str]] = None,
pre_delete_table: bool = False,
**kwargs: Any,
) -> Hologres:
"""Construct Hologres wrapper from raw documents and pre-
generated embeddings.
Return VectorStore initialized from documents and embeddings.
A Hologres connection string is required. Either pass it as a parameter
or set the HOLOGRES_CONNECTION_STRING environment variable.
Example:
.. code-block:: python
from langchain import Hologres
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
hologres = Hologres.from_embeddings(text_embedding_pairs, embeddings)
"""
texts = [t[0] for t in text_embeddings]
embeddings = [t[1] for t in text_embeddings]
return cls.__from(
texts,
embeddings,
embedding,
metadatas=metadatas,
ids=ids,
ndims=ndims,
table_name=table_name,
pre_delete_table=pre_delete_table,
**kwargs,
)
[docs] @classmethod
def from_existing_index(
cls: Type[Hologres],
embedding: Embeddings,
ndims: int = ADA_TOKEN_COUNT,
table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,
pre_delete_table: bool = False,
**kwargs: Any,
) -> Hologres:
"""
Get an instance of an existing Hologres store. This method will
return the instance of the store without inserting any new
embeddings.
"""
connection_string = cls.get_connection_string(kwargs)
store = cls(
connection_string=connection_string,
ndims=ndims,
table_name=table_name,
embedding_function=embedding,
pre_delete_table=pre_delete_table,
)
return store
[docs] @classmethod
def get_connection_string(cls, kwargs: Dict[str, Any]) -> str:
connection_string: str = get_from_dict_or_env(
data=kwargs,
key="connection_string",
env_key="HOLOGRES_CONNECTION_STRING",
)
if not connection_string:
raise ValueError(
"Postgres connection string is required"
"Either pass it as a parameter"
"or set the HOLOGRES_CONNECTION_STRING environment variable."
)
return connection_string
[docs] @classmethod
def from_documents(
cls: Type[Hologres],
documents: List[Document],
embedding: Embeddings,
ndims: int = ADA_TOKEN_COUNT,
table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,
ids: Optional[List[str]] = None,
pre_delete_collection: bool = False,
**kwargs: Any,
) -> Hologres:
"""
Return VectorStore initialized from documents and embeddings.
A Hologres connection string is required. Either pass it as a parameter
or set the HOLOGRES_CONNECTION_STRING environment variable.
"""
texts = [d.page_content for d in documents]
metadatas = [d.metadata for d in documents]
connection_string = cls.get_connection_string(kwargs)
kwargs["connection_string"] = connection_string
return cls.from_texts(
texts=texts,
pre_delete_collection=pre_delete_collection,
embedding=embedding,
metadatas=metadatas,
ids=ids,
ndims=ndims,
table_name=table_name,
**kwargs,
)
[docs] @classmethod
def connection_string_from_db_params(
cls,
host: str,
port: int,
database: str,
user: str,
password: str,
) -> str:
"""Return connection string from database parameters."""
return (
f"dbname={database} user={user} password={password} host={host} port={port}"
) | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/hologres.html |
954a436d-7f80-49e2-945b-995b28e13c49 | Source code for langchain.vectorstores.redis
"""Wrapper around Redis vector database."""
from __future__ import annotations
import json
import logging
import uuid
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
Iterable,
List,
Literal,
Mapping,
Optional,
Tuple,
Type,
)
import numpy as np
from pydantic import BaseModel, root_validator
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore, VectorStoreRetriever
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from redis.client import Redis as RedisType
from redis.commands.search.query import Query
# required modules
REDIS_REQUIRED_MODULES = [
{"name": "search", "ver": 20400},
{"name": "searchlight", "ver": 20400},
]
# distance metrics
REDIS_DISTANCE_METRICS = Literal["COSINE", "IP", "L2"]
def _check_redis_module_exist(client: RedisType, required_modules: List[dict]) -> None:
"""Check if the correct Redis modules are installed."""
installed_modules = client.module_list()
installed_modules = {
module[b"name"].decode("utf-8"): module for module in installed_modules
}
for module in required_modules:
if module["name"] in installed_modules and int(
installed_modules[module["name"]][b"ver"]
) >= int(module["ver"]):
return
# otherwise raise error
error_message = (
"Redis cannot be used as a vector database without RediSearch >=2.4"
"Please head to https://redis.io/docs/stack/search/quick_start/"
"to know more about installing the RediSearch module within Redis Stack."
)
logging.error(error_message)
raise ValueError(error_message)
def _check_index_exists(client: RedisType, index_name: str) -> bool:
"""Check if Redis index exists."""
try:
client.ft(index_name).info()
except: # noqa: E722
logger.info("Index does not exist")
return False
logger.info("Index already exists")
return True
def _redis_key(prefix: str) -> str:
"""Redis key schema for a given prefix."""
return f"{prefix}:{uuid.uuid4().hex}"
def _redis_prefix(index_name: str) -> str:
"""Redis key prefix for a given index."""
return f"doc:{index_name}"
def _default_relevance_score(val: float) -> float:
return 1 - val
[docs]class Redis(VectorStore):
"""Wrapper around Redis vector database.
To use, you should have the ``redis`` python package installed.
Example:
.. code-block:: python
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Redis(
redis_url="redis://username:password@localhost:6379"
index_name="my-index",
embedding_function=embeddings.embed_query,
)
"""
def __init__(
self,
redis_url: str,
index_name: str,
embedding_function: Callable,
content_key: str = "content",
metadata_key: str = "metadata",
vector_key: str = "content_vector",
relevance_score_fn: Optional[
Callable[[float], float]
] = _default_relevance_score,
**kwargs: Any,
):
"""Initialize with necessary components."""
try:
import redis
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis>=4.1.0`."
)
self.embedding_function = embedding_function
self.index_name = index_name
try:
# connect to redis from url
redis_client = redis.from_url(redis_url, **kwargs)
# check if redis has redisearch module installed
_check_redis_module_exist(redis_client, REDIS_REQUIRED_MODULES)
except ValueError as e:
raise ValueError(f"Redis failed to connect: {e}")
self.client = redis_client
self.content_key = content_key
self.metadata_key = metadata_key
self.vector_key = vector_key
self.relevance_score_fn = relevance_score_fn
def _create_index(
self, dim: int = 1536, distance_metric: REDIS_DISTANCE_METRICS = "COSINE"
) -> None:
try:
from redis.commands.search.field import TextField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
# Check if index exists
if not _check_index_exists(self.client, self.index_name):
# Define schema
schema = (
TextField(name=self.content_key),
TextField(name=self.metadata_key),
VectorField(
self.vector_key,
"FLAT",
{
"TYPE": "FLOAT32",
"DIM": dim,
"DISTANCE_METRIC": distance_metric,
},
),
)
prefix = _redis_prefix(self.index_name)
# Create Redis Index
self.client.ft(self.index_name).create_index(
fields=schema,
definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH),
)
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
embeddings: Optional[List[List[float]]] = None,
batch_size: int = 1000,
**kwargs: Any,
) -> List[str]:
"""Add more texts to the vectorstore.
Args:
texts (Iterable[str]): Iterable of strings/text to add to the vectorstore.
metadatas (Optional[List[dict]], optional): Optional list of metadatas.
Defaults to None.
embeddings (Optional[List[List[float]]], optional): Optional pre-generated
embeddings. Defaults to None.
keys (List[str]) or ids (List[str]): Identifiers of entries.
Defaults to None.
batch_size (int, optional): Batch size to use for writes. Defaults to 1000.
Returns:
List[str]: List of ids added to the vectorstore
"""
ids = []
prefix = _redis_prefix(self.index_name)
# Get keys or ids from kwargs
# Other vectorstores use ids
keys_or_ids = kwargs.get("keys", kwargs.get("ids"))
# Write data to redis
pipeline = self.client.pipeline(transaction=False)
for i, text in enumerate(texts):
# Use provided values by default or fallback
key = keys_or_ids[i] if keys_or_ids else _redis_key(prefix)
metadata = metadatas[i] if metadatas else {}
embedding = embeddings[i] if embeddings else self.embedding_function(text)
pipeline.hset(
key,
mapping={
self.content_key: text,
self.vector_key: np.array(embedding, dtype=np.float32).tobytes(),
self.metadata_key: json.dumps(metadata),
},
)
ids.append(key)
# Write batch
if i % batch_size == 0:
pipeline.execute()
# Cleanup final batch
pipeline.execute()
return ids
[docs] def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""
Returns the most similar indexed documents to the query text.
Args:
query (str): The query text for which to find similar documents.
k (int): The number of documents to return. Default is 4.
Returns:
List[Document]: A list of documents that are most similar to the query text.
"""
docs_and_scores = self.similarity_search_with_score(query, k=k)
return [doc for doc, _ in docs_and_scores]
[docs] def similarity_search_limit_score(
self, query: str, k: int = 4, score_threshold: float = 0.2, **kwargs: Any
) -> List[Document]:
"""
Returns the most similar indexed documents to the query text within the
score_threshold range.
Args:
query (str): The query text for which to find similar documents.
k (int): The number of documents to return. Default is 4.
score_threshold (float): The minimum matching score required for a document
to be considered a match. Defaults to 0.2.
Because the similarity calculation algorithm is based on cosine similarity,
the smaller the angle, the higher the similarity.
Returns:
List[Document]: A list of documents that are most similar to the query text,
including the match score for each document.
Note:
If there are no documents that satisfy the score_threshold value,
an empty list is returned.
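Example (an illustrative sketch; ``vectorstore`` is a populated Redis
instance such as the one in the class-level example):
.. code-block:: python
docs = vectorstore.similarity_search_limit_score(
"What did the president say?", k=4, score_threshold=0.2
)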
"""
docs_and_scores = self.similarity_search_with_score(query, k=k)
return [doc for doc, score in docs_and_scores if score < score_threshold]
def _prepare_query(self, k: int) -> Query:
try:
from redis.commands.search.query import Query
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
# Prepare the Query
hybrid_fields = "*"
base_query = (
f"{hybrid_fields}=>[KNN {k} @{self.vector_key} $vector AS vector_score]"
)
return_fields = [self.metadata_key, self.content_key, "vector_score"]
return (
Query(base_query)
.return_fields(*return_fields)
.sort_by("vector_score")
.paging(0, k)
.dialect(2)
)
[docs] def similarity_search_with_score(
self, query: str, k: int = 4
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query and score for each
"""
# Creates embedding vector from user query
embedding = self.embedding_function(query)
# Creates Redis query
redis_query = self._prepare_query(k)
params_dict: Mapping[str, str] = {
"vector": np.array(embedding) # type: ignore
.astype(dtype=np.float32)
.tobytes()
}
# Perform vector search
results = self.client.ft(self.index_name).search(redis_query, params_dict)
# Prepare document results
docs = [
(
Document(
page_content=result.content, metadata=json.loads(result.metadata)
),
float(result.vector_score),
)
for result in results.docs
]
return docs
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
"""
if self.relevance_score_fn is None:
raise ValueError(
"relevance_score_fn must be provided to"
" Redis constructor to normalize scores"
)
docs_and_scores = self.similarity_search_with_score(query, k=k)
return [(doc, self.relevance_score_fn(score)) for doc, score in docs_and_scores]
[docs] @classmethod
def from_texts_return_keys(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
index_name: Optional[str] = None,
content_key: str = "content",
metadata_key: str = "metadata",
vector_key: str = "content_vector",
distance_metric: REDIS_DISTANCE_METRICS = "COSINE",
**kwargs: Any,
) -> Tuple[Redis, List[str]]:
"""Create a Redis vectorstore from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Creates a new index for the embeddings in Redis.
3. Adds the documents to the newly created Redis index.
4. Returns the keys of the newly created documents.
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
redisearch, keys = Redis.from_texts_return_keys(
texts,
embeddings,
redis_url="redis://username:password@localhost:6379"
)
"""
redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
if "redis_url" in kwargs:
kwargs.pop("redis_url")
# Generate a name for the search index if not given
if not index_name:
index_name = uuid.uuid4().hex
# Create instance
instance = cls(
redis_url,
index_name,
embedding.embed_query,
content_key=content_key,
metadata_key=metadata_key,
vector_key=vector_key,
**kwargs,
)
# Create embeddings over documents
embeddings = embedding.embed_documents(texts)
# Create the search index
instance._create_index(dim=len(embeddings[0]), distance_metric=distance_metric)
# Add data to Redis
keys = instance.add_texts(texts, metadatas, embeddings)
return instance, keys
[docs] @classmethod
def from_texts(
cls: Type[Redis],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
index_name: Optional[str] = None,
content_key: str = "content",
metadata_key: str = "metadata",
vector_key: str = "content_vector",
**kwargs: Any,
) -> Redis:
"""Create a Redis vectorstore from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Creates a new index for the embeddings in Redis.
3. Adds the documents to the newly created Redis index.
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
redisearch = Redis.from_texts(
texts,
embeddings,
redis_url="redis://username:password@localhost:6379"
)
"""
instance, _ = cls.from_texts_return_keys(
texts,
embedding,
metadatas=metadatas,
index_name=index_name,
content_key=content_key,
metadata_key=metadata_key,
vector_key=vector_key,
**kwargs,
)
return instance
[docs] @staticmethod
def delete(
ids: List[str],
**kwargs: Any,
) -> bool:
"""
Delete a Redis entry.
Args:
ids: List of ids (keys) to delete.
Returns:
bool: Whether or not the deletions were successful.
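Example (an illustrative sketch; ``keys`` are identifiers previously
returned by ``add_texts`` or ``from_texts_return_keys``, and the URL is a
placeholder):
.. code-block:: python
Redis.delete(keys, redis_url="redis://localhost:6379")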
"""
redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
if ids is None:
raise ValueError("'ids' (keys)() were not provided.")
try:
import redis
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
try:
# We need to first remove redis_url from kwargs,
# otherwise passing it to Redis will result in an error.
if "redis_url" in kwargs:
kwargs.pop("redis_url")
client = redis.from_url(url=redis_url, **kwargs)
except ValueError as e:
raise ValueError(f"Your redis connected error: {e}")
# Check if index exists
try:
client.delete(*ids)
logger.info("Entries deleted")
return True
except: # noqa: E722
# at least one of the ids does not exist
return False
[docs] @staticmethod
def drop_index(
index_name: str,
delete_documents: bool,
**kwargs: Any,
) -> bool:
"""
Drop a Redis search index.
Args:
index_name (str): Name of the index to drop.
delete_documents (bool): Whether to drop the associated documents.
Returns:
bool: Whether or not the drop was successful.
"""
redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
try:
import redis
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
try:
# We need to first remove redis_url from kwargs,
# otherwise passing it to Redis will result in an error.
if "redis_url" in kwargs:
kwargs.pop("redis_url")
client = redis.from_url(url=redis_url, **kwargs)
except ValueError as e:
raise ValueError(f"Your redis connected error: {e}")
# Check if index exists
try:
client.ft(index_name).dropindex(delete_documents)
logger.info("Drop index")
return True
except: # noqa: E722
# Index does not exist
return False
[docs] @classmethod
def from_existing_index(
cls,
embedding: Embeddings,
index_name: str,
content_key: str = "content",
metadata_key: str = "metadata",
vector_key: str = "content_vector",
**kwargs: Any,
) -> Redis:
"""Connect to an existing Redis index."""
redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
try:
import redis
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
try:
# We need to first remove redis_url from kwargs,
# otherwise passing it to Redis will result in an error.
if "redis_url" in kwargs:
kwargs.pop("redis_url")
client = redis.from_url(url=redis_url, **kwargs)
# check if redis has redisearch module installed
_check_redis_module_exist(client, REDIS_REQUIRED_MODULES)
# ensure that the index already exists
assert _check_index_exists(
client, index_name
), f"Index {index_name} does not exist"
except Exception as e:
raise ValueError(f"Redis failed to connect: {e}")
return cls(
redis_url,
index_name,
embedding.embed_query,
content_key=content_key,
metadata_key=metadata_key,
vector_key=vector_key,
**kwargs,
)
[docs] def as_retriever(self, **kwargs: Any) -> RedisVectorStoreRetriever:
return RedisVectorStoreRetriever(vectorstore=self, **kwargs)
class RedisVectorStoreRetriever(VectorStoreRetriever, BaseModel):
vectorstore: Redis
search_type: str = "similarity"
k: int = 4
score_threshold: float = 0.4
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
@root_validator()
def validate_search_type(cls, values: Dict) -> Dict:
"""Validate search type."""
if "search_type" in values:
search_type = values["search_type"]
if search_type not in ("similarity", "similarity_limit"):
raise ValueError(f"search_type of {search_type} not allowed.")
return values
def get_relevant_documents(self, query: str) -> List[Document]:
if self.search_type == "similarity":
docs = self.vectorstore.similarity_search(query, k=self.k)
elif self.search_type == "similarity_limit":
docs = self.vectorstore.similarity_search_limit_score(
query, k=self.k, score_threshold=self.score_threshold
)
else:
raise ValueError(f"search_type of {self.search_type} not allowed.")
return docs
async def aget_relevant_documents(self, query: str) -> List[Document]:
raise NotImplementedError("RedisVectorStoreRetriever does not support async")
def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
"""Add documents to vectorstore."""
return self.vectorstore.add_documents(documents, **kwargs)
async def aadd_documents(
self, documents: List[Document], **kwargs: Any
) -> List[str]:
"""Add documents to vectorstore."""
return await self.vectorstore.aadd_documents(documents, **kwargs) | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html |
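For reference, a minimal usage sketch of the Redis vector store defined above. The Redis URL, index name, and sample texts are placeholders, and the snippet assumes a Redis instance with the RediSearch module plus an OpenAI API key in the environment.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Redis
embeddings = OpenAIEmbeddings()
# Build a new index from raw texts (placeholder connection details).
store = Redis.from_texts(
    ["foo", "bar", "baz"],
    embeddings,
    redis_url="redis://localhost:6379",
    index_name="example-index",
)
docs = store.similarity_search("foo", k=2)
# Reconnect to the same index later without re-adding the documents.
store = Redis.from_existing_index(
    embeddings,
    index_name="example-index",
    redis_url="redis://localhost:6379",
)
retriever = store.as_retriever()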
7fbfe242-ade3-4417-9a1d-07739c573194 | Source code for langchain.vectorstores.zilliz
from __future__ import annotations
import logging
from typing import Any, List, Optional
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.milvus import Milvus
logger = logging.getLogger(__name__)
[docs]class Zilliz(Milvus):
def _create_index(self) -> None:
"""Create a index on the collection"""
from pymilvus import Collection, MilvusException
if isinstance(self.col, Collection) and self._get_index() is None:
try:
# If no index params, use a default AutoIndex based one
if self.index_params is None:
self.index_params = {
"metric_type": "L2",
"index_type": "AUTOINDEX",
"params": {},
}
try:
self.col.create_index(
self._vector_field,
index_params=self.index_params,
using=self.alias,
)
# If default did not work, most likely Milvus self-hosted
except MilvusException:
# Use HNSW based index
self.index_params = {
"metric_type": "L2",
"index_type": "HNSW",
"params": {"M": 8, "efConstruction": 64},
}
self.col.create_index(
self._vector_field,
index_params=self.index_params,
using=self.alias,
)
logger.debug(
"Successfully created an index on collection: %s",
self.collection_name,
)
except MilvusException as e:
logger.error(
"Failed to create an index on collection: %s", self.collection_name
)
raise e
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
collection_name: str = "LangChainCollection",
connection_args: dict[str, Any] = {},
consistency_level: str = "Session",
index_params: Optional[dict] = None,
search_params: Optional[dict] = None,
drop_old: bool = False,
**kwargs: Any,
) -> Zilliz:
"""Create a Zilliz collection, indexes it with HNSW, and insert data.
Args:
texts (List[str]): Text data.
embedding (Embeddings): Embedding function.
metadatas (Optional[List[dict]]): Metadata for each text if it exists.
Defaults to None.
collection_name (str, optional): Collection name to use. Defaults to
"LangChainCollection".
connection_args (dict[str, Any], optional): Connection args to use. Defaults
to DEFAULT_MILVUS_CONNECTION.
consistency_level (str, optional): Which consistency level to use. Defaults
to "Session".
index_params (Optional[dict], optional): Which index_params to use.
Defaults to None.
search_params (Optional[dict], optional): Which search params to use.
Defaults to None.
drop_old (Optional[bool], optional): Whether to drop the collection with
that name if it exists. Defaults to False.
Returns:
Zilliz: Zilliz Vector Store
"""
vector_db = cls(
embedding_function=embedding,
collection_name=collection_name,
connection_args=connection_args,
consistency_level=consistency_level,
index_params=index_params,
search_params=search_params,
drop_old=drop_old,
**kwargs,
)
vector_db.add_texts(texts=texts, metadatas=metadatas)
return vector_db | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/zilliz.html |
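A minimal usage sketch for the Zilliz store above. The URI and token are placeholders for a Zilliz Cloud (or Milvus-compatible) endpoint, and an OpenAI API key is assumed to be set in the environment.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Zilliz
connection_args = {"uri": "https://your-zilliz-endpoint", "token": "your-token"}  # placeholders
vector_db = Zilliz.from_texts(
    ["alpha", "beta"],
    OpenAIEmbeddings(),
    connection_args=connection_args,
    collection_name="LangChainCollection",
)
docs = vector_db.similarity_search("alpha", k=1)  # search methods are inherited from Milvus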
1d702ba7-c4c3-4606-92ee-5e4697ce140d | Source code for langchain.vectorstores.supabase
from __future__ import annotations
import uuid
from itertools import repeat
from typing import (
TYPE_CHECKING,
Any,
Iterable,
List,
Optional,
Tuple,
Type,
Union,
)
import numpy as np
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
if TYPE_CHECKING:
import supabase
[docs]class SupabaseVectorStore(VectorStore):
"""VectorStore for a Supabase postgres database. Assumes you have the `pgvector`
extension installed and a `match_documents` (or similar) function. For more details:
https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
You can implement your own `match_documents` function in order to limit the search
space to a subset of documents based on your own authorization or business logic.
Note that the Supabase Python client does not yet support async operations.
If you'd like to use `max_marginal_relevance_search`, please review the instructions
below on modifying the `match_documents` function to return matched embeddings.
"""
_client: supabase.client.Client
# This is the embedding function. Don't confuse with the embedding vectors.
# We should perhaps rename the underlying Embedding base class to EmbeddingFunction
# or something
_embedding: Embeddings
table_name: str
query_name: str
def __init__(
self,
client: supabase.client.Client,
embedding: Embeddings,
table_name: str,
query_name: Union[str, None] = None,
) -> None:
"""Initialize with supabase client."""
try:
import supabase # noqa: F401
except ImportError:
raise ValueError(
"Could not import supabase python package. "
"Please install it with `pip install supabase`."
)
self._client = client
self._embedding: Embeddings = embedding
self.table_name = table_name or "documents"
self.query_name = query_name or "match_documents"
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict[Any, Any]]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
ids = ids or [str(uuid.uuid4()) for _ in texts]
docs = self._texts_to_documents(texts, metadatas)
vectors = self._embedding.embed_documents(list(texts))
return self.add_vectors(vectors, docs, ids)
[docs] @classmethod
def from_texts(
cls: Type["SupabaseVectorStore"],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
client: Optional[supabase.client.Client] = None,
table_name: Optional[str] = "documents",
query_name: Union[str, None] = "match_documents",
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> "SupabaseVectorStore":
"""Return VectorStore initialized from texts and embeddings."""
if not client:
raise ValueError("Supabase client is required.")
if not table_name:
raise ValueError("Supabase document table_name is required.")
embeddings = embedding.embed_documents(texts)
ids = ids or [str(uuid.uuid4()) for _ in texts]
docs = cls._texts_to_documents(texts, metadatas)
_ids = cls._add_vectors(client, table_name, embeddings, docs, ids)
return cls(
client=client,
embedding=embedding,
table_name=table_name,
query_name=query_name,
)
[docs] def add_vectors(
self,
vectors: List[List[float]],
documents: List[Document],
ids: List[str],
) -> List[str]:
return self._add_vectors(self._client, self.table_name, vectors, documents, ids)
[docs] def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
vectors = self._embedding.embed_documents([query])
return self.similarity_search_by_vector(vectors[0], k)
[docs] def similarity_search_by_vector(
self, embedding: List[float], k: int = 4, **kwargs: Any
) -> List[Document]:
result = self.similarity_search_by_vector_with_relevance_scores(embedding, k)
documents = [doc for doc, _ in result]
return documents
[docs] def similarity_search_with_relevance_scores(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Tuple[Document, float]]:
vectors = self._embedding.embed_documents([query])
return self.similarity_search_by_vector_with_relevance_scores(vectors[0], k)
[docs] def similarity_search_by_vector_with_relevance_scores(
self, query: List[float], k: int
) -> List[Tuple[Document, float]]:
match_documents_params = dict(query_embedding=query, match_count=k)
res = self._client.rpc(self.query_name, match_documents_params).execute()
match_result = [
(
Document(
metadata=search.get("metadata", {}), # type: ignore
page_content=search.get("content", ""),
),
search.get("similarity", 0.0),
)
for search in res.data
if search.get("content")
]
return match_result
[docs] def similarity_search_by_vector_returning_embeddings(
self, query: List[float], k: int
) -> List[Tuple[Document, float, np.ndarray[np.float32, Any]]]:
match_documents_params = dict(query_embedding=query, match_count=k)
res = self._client.rpc(self.query_name, match_documents_params).execute()
match_result = [
(
Document(
metadata=search.get("metadata", {}), # type: ignore
page_content=search.get("content", ""),
),
search.get("similarity", 0.0),
# Supabase returns a vector type as its string representation (!).
# This is a hack to convert the string to numpy array.
np.fromstring(
search.get("embedding", "").strip("[]"), np.float32, sep=","
),
)
for search in res.data
if search.get("content")
]
return match_result
@staticmethod
def _texts_to_documents(
texts: Iterable[str],
metadatas: Optional[Iterable[dict[Any, Any]]] = None,
) -> List[Document]:
"""Return list of Documents from list of texts and metadatas."""
if metadatas is None:
metadatas = repeat({})
docs = [
Document(page_content=text, metadata=metadata)
for text, metadata in zip(texts, metadatas)
]
return docs
@staticmethod
def _add_vectors(
client: supabase.client.Client,
table_name: str,
vectors: List[List[float]],
documents: List[Document],
ids: List[str],
) -> List[str]:
"""Add vectors to Supabase table."""
rows: List[dict[str, Any]] = [
{
"id": ids[idx],
"content": documents[idx].page_content,
"embedding": embedding,
"metadata": documents[idx].metadata, # type: ignore
}
for idx, embedding in enumerate(vectors)
]
# According to the SupabaseVectorStore JS implementation, the best chunk size
# is 500
chunk_size = 500
id_list: List[str] = []
for i in range(0, len(rows), chunk_size):
chunk = rows[i : i + chunk_size]
result = client.from_(table_name).upsert(chunk).execute() # type: ignore
if len(result.data) == 0:
raise Exception("Error inserting: No rows added")
# VectorStore.add_vectors returns ids as strings
ids = [str(i.get("id")) for i in result.data if i.get("id")]
id_list.extend(ids)
return id_list
[docs] def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
result = self.similarity_search_by_vector_returning_embeddings(
embedding, fetch_k
)
matched_documents = [doc_tuple[0] for doc_tuple in result]
matched_embeddings = [doc_tuple[2] for doc_tuple in result]
mmr_selected = maximal_marginal_relevance(
np.array([embedding], dtype=np.float32),
matched_embeddings,
k=k,
lambda_mult=lambda_mult,
)
filtered_documents = [matched_documents[i] for i in mmr_selected]
return filtered_documents
[docs] def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
`max_marginal_relevance_search` requires that `query_name` returns matched
embeddings alongside the matched documents. The following function
demonstrates how to do this:
```sql
CREATE FUNCTION match_documents_embeddings(query_embedding vector(1536),
match_count int)
RETURNS TABLE(
id bigint,
content text,
metadata jsonb,
embedding vector(1536),
similarity float)
LANGUAGE plpgsql
AS $$
# variable_conflict use_column
BEGIN
RETURN query
SELECT
id,
content,
metadata,
embedding,
1 -(docstore.embedding <=> query_embedding) AS similarity
FROM
docstore
ORDER BY
docstore.embedding <=> query_embedding
LIMIT match_count;
END;
$$;
```
"""
embedding = self._embedding.embed_documents([query])
docs = self.max_marginal_relevance_search_by_vector(
embedding[0], k, fetch_k, lambda_mult=lambda_mult
)
return docs
[docs] def delete(self, ids: List[str]) -> None:
"""Delete by vector IDs.
Args:
ids: List of ids to delete.
"""
rows: List[dict[str, Any]] = [
{
"id": id,
}
for id in ids
]
# TODO: Check if this can be done in bulk
for row in rows:
self._client.from_(self.table_name).delete().eq("id", row["id"]).execute() | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html |
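A minimal usage sketch, assuming a Supabase project with the `pgvector` extension, a `documents` table, and a `match_documents` function as described in the class docstring. The environment variable names and the `create_client` import are placeholders/assumptions, not part of the source above.
import os
from supabase import create_client
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore
supabase_client = create_client(
    os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"]  # placeholder env vars
)
store = SupabaseVectorStore.from_texts(
    ["hello world"],
    OpenAIEmbeddings(),
    client=supabase_client,
    table_name="documents",
    query_name="match_documents",
)
docs = store.similarity_search("hello", k=1)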
4751880f-f062-4611-965a-3ca4d41952a3 | Source code for langchain.vectorstores.docarray.in_memory
"""Wrapper around in-memory storage."""
from __future__ import annotations
from typing import Any, Dict, List, Literal, Optional
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.docarray.base import (
DocArrayIndex,
_check_docarray_import,
)
[docs]class DocArrayInMemorySearch(DocArrayIndex):
"""Wrapper around in-memory storage for exact search.
To use it, you should have the ``docarray`` package with version >=0.32.0 installed.
You can install it with `pip install "langchain[docarray]"`.
"""
[docs] @classmethod
def from_params(
cls,
embedding: Embeddings,
metric: Literal[
"cosine_sim", "euclidian_dist", "sgeuclidean_dist"
] = "cosine_sim",
**kwargs: Any,
) -> DocArrayInMemorySearch:
"""Initialize DocArrayInMemorySearch store.
Args:
embedding (Embeddings): Embedding function.
metric (str): metric for exact nearest-neighbor search.
Can be one of: "cosine_sim", "euclidean_dist" and "sqeuclidean_dist".
Defaults to "cosine_sim".
**kwargs: Other keyword arguments to be passed to the get_doc_cls method.
"""
_check_docarray_import()
from docarray.index import InMemoryExactNNIndex
doc_cls = cls._get_doc_cls(space=metric, **kwargs)
doc_index = InMemoryExactNNIndex[doc_cls]() # type: ignore
return cls(doc_index, embedding)
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[Dict[Any, Any]]] = None,
**kwargs: Any,
) -> DocArrayInMemorySearch:
"""Create an DocArrayInMemorySearch store and insert data.
Args:
texts (List[str]): Text data.
embedding (Embeddings): Embedding function.
metadatas (Optional[List[Dict[Any, Any]]]): Metadata for each text
if it exists. Defaults to None.
metric (str): metric for exact nearest-neighbor search.
Can be one of: "cosine_sim", "euclidean_dist" and "sqeuclidean_dist".
Defaults to "cosine_sim".
Returns:
DocArrayInMemorySearch Vector Store
"""
store = cls.from_params(embedding, **kwargs)
store.add_texts(texts=texts, metadatas=metadatas)
return store | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/in_memory.html |
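A minimal usage sketch for the in-memory store above; it assumes `docarray>=0.32.0` is installed and an OpenAI API key is available, and the texts/metadata are only illustrative.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import DocArrayInMemorySearch
store = DocArrayInMemorySearch.from_texts(
    ["foo", "bar"],
    OpenAIEmbeddings(),
    metadatas=[{"source": "a"}, {"source": "b"}],
)
docs = store.similarity_search("foo", k=1)  # search methods come from the DocArrayIndex base class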
5d4858bb-0ed9-4df8-8f00-e2fc6a6085ac | Source code for langchain.vectorstores.docarray.hnsw
"""Wrapper around Hnswlib store."""
from __future__ import annotations
from typing import Any, List, Literal, Optional
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.docarray.base import (
DocArrayIndex,
_check_docarray_import,
)
[docs]class DocArrayHnswSearch(DocArrayIndex):
"""Wrapper around HnswLib storage.
To use it, you should have the ``docarray`` package with version >=0.32.0 installed.
You can install it with `pip install "langchain[docarray]"`.
"""
[docs] @classmethod
def from_params(
cls,
embedding: Embeddings,
work_dir: str,
n_dim: int,
dist_metric: Literal["cosine", "ip", "l2"] = "cosine",
max_elements: int = 1024,
index: bool = True,
ef_construction: int = 200,
ef: int = 10,
M: int = 16,
allow_replace_deleted: bool = True,
num_threads: int = 1,
**kwargs: Any,
) -> DocArrayHnswSearch:
"""Initialize DocArrayHnswSearch store.
Args:
embedding (Embeddings): Embedding function.
work_dir (str): path to the location where all the data will be stored.
n_dim (int): dimension of an embedding.
dist_metric (str): Distance metric for DocArrayHnswSearch can be one of:
"cosine", "ip", and "l2". Defaults to "cosine".
max_elements (int): Maximum number of vectors that can be stored.
Defaults to 1024.
index (bool): Whether an index should be built for this field.
Defaults to True.
ef_construction (int): defines a construction time/accuracy trade-off.
Defaults to 200.
ef (int): parameter controlling query time/accuracy trade-off.
Defaults to 10.
M (int): parameter that defines the maximum number of outgoing
connections in the graph. Defaults to 16.
allow_replace_deleted (bool): Enables replacing of deleted elements
with new added ones. Defaults to True.
num_threads (int): Sets the number of cpu threads to use. Defaults to 1.
**kwargs: Other keyword arguments to be passed to the get_doc_cls method.
"""
_check_docarray_import()
from docarray.index import HnswDocumentIndex
doc_cls = cls._get_doc_cls(
dim=n_dim,
space=dist_metric,
max_elements=max_elements,
index=index,
ef_construction=ef_construction,
ef=ef,
M=M,
allow_replace_deleted=allow_replace_deleted,
num_threads=num_threads,
**kwargs,
)
doc_index = HnswDocumentIndex[doc_cls](work_dir=work_dir) # type: ignore
return cls(doc_index, embedding)
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
work_dir: Optional[str] = None,
n_dim: Optional[int] = None,
**kwargs: Any,
) -> DocArrayHnswSearch:
"""Create an DocArrayHnswSearch store and insert data.
Args:
texts (List[str]): Text data.
embedding (Embeddings): Embedding function.
metadatas (Optional[List[dict]]): Metadata for each text if it exists.
Defaults to None.
work_dir (str): path to the location where all the data will be stored.
n_dim (int): dimension of an embedding.
**kwargs: Other keyword arguments to be passed to the __init__ method.
Returns:
DocArrayHnswSearch Vector Store
"""
if work_dir is None:
raise ValueError("`work_dir` parameter has not been set.")
if n_dim is None:
raise ValueError("`n_dim` parameter has not been set.")
store = cls.from_params(embedding, work_dir, n_dim, **kwargs)
store.add_texts(texts=texts, metadatas=metadatas)
return store | https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/hnsw.html |
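A minimal usage sketch; `work_dir` is a placeholder path, and `n_dim` must match the embedding size (1536 for OpenAI's text-embedding-ada-002).
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import DocArrayHnswSearch
store = DocArrayHnswSearch.from_texts(
    ["foo", "bar"],
    OpenAIEmbeddings(),
    work_dir="./hnswlib_index",  # placeholder directory
    n_dim=1536,                  # must equal the embedding dimension
)
docs = store.similarity_search("foo", k=1)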
662495a0-0c33-4b77-8265-773409a68b15 | Source code for langchain.utilities.powerbi
"""Wrapper around a Power BI endpoint."""
from __future__ import annotations
import asyncio
import logging
import os
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Union
import aiohttp
import requests
from aiohttp import ServerTimeoutError
from pydantic import BaseModel, Field, root_validator, validator
from requests.exceptions import Timeout
_LOGGER = logging.getLogger(__name__)
BASE_URL = os.getenv("POWERBI_BASE_URL", "https://api.powerbi.com/v1.0/myorg")
if TYPE_CHECKING:
from azure.core.credentials import TokenCredential
[docs]class PowerBIDataset(BaseModel):
"""Create PowerBI engine from dataset ID and credential or token.
Use either the credential or a supplied token to authenticate.
If both are supplied the credential is used to generate a token.
The impersonated_user_name is the UPN of a user to be impersonated.
If the model is not RLS enabled, this will be ignored.
"""
dataset_id: str
table_names: List[str]
group_id: Optional[str] = None
credential: Optional[TokenCredential] = None
token: Optional[str] = None
impersonated_user_name: Optional[str] = None
sample_rows_in_table_info: int = Field(default=1, gt=0, le=10)
schemas: Dict[str, str] = Field(default_factory=dict)
aiosession: Optional[aiohttp.ClientSession] = None
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
@validator("table_names", allow_reuse=True)
def fix_table_names(cls, table_names: List[str]) -> List[str]:
"""Fix the table names."""
return [fix_table_name(table) for table in table_names]
@root_validator(pre=True, allow_reuse=True)
def token_or_credential_present(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Validate that at least one of token and credentials is present."""
if "token" in values or "credential" in values:
return values
raise ValueError("Please provide either a credential or a token.")
@property
def request_url(self) -> str:
"""Get the request url."""
if self.group_id:
return f"{BASE_URL}/groups/{self.group_id}/datasets/{self.dataset_id}/executeQueries" # noqa: E501 # pylint: disable=C0301
return f"{BASE_URL}/datasets/{self.dataset_id}/executeQueries" # noqa: E501 # pylint: disable=C0301
@property
def headers(self) -> Dict[str, str]:
"""Get the token."""
if self.token:
return {
"Content-Type": "application/json",
"Authorization": "Bearer " + self.token,
}
from azure.core.exceptions import (
ClientAuthenticationError, # pylint: disable=import-outside-toplevel
)
if self.credential:
try:
token = self.credential.get_token(
"https://analysis.windows.net/powerbi/api/.default"
).token
return {
"Content-Type": "application/json",
"Authorization": "Bearer " + token,
}
except Exception as exc: # pylint: disable=broad-exception-caught
raise ClientAuthenticationError(
"Could not get a token from the supplied credentials."
) from exc
raise ClientAuthenticationError("No credential or token supplied.")
[docs] def get_table_names(self) -> Iterable[str]:
"""Get names of tables available."""
return self.table_names
[docs] def get_schemas(self) -> str:
"""Get the available schema's."""
if self.schemas:
return ", ".join([f"{key}: {value}" for key, value in self.schemas.items()])
return "No known schema's yet. Use the schema_powerbi tool first."
@property
def table_info(self) -> str:
"""Information about all tables in the database."""
return self.get_table_info()
def _get_tables_to_query(
self, table_names: Optional[Union[List[str], str]] = None
) -> Optional[List[str]]:
"""Get the tables names that need to be queried, after checking they exist."""
if table_names is not None:
if (
isinstance(table_names, list)
and len(table_names) > 0
and table_names[0] != ""
):
fixed_tables = [fix_table_name(table) for table in table_names]
non_existing_tables = [
table for table in fixed_tables if table not in self.table_names
]
if non_existing_tables:
_LOGGER.warning(
"Table(s) %s not found in dataset.",
", ".join(non_existing_tables),
)
tables = [
table for table in fixed_tables if table not in non_existing_tables
]
return tables if tables else None
if isinstance(table_names, str) and table_names != "":
if table_names not in self.table_names:
_LOGGER.warning("Table %s not found in dataset.", table_names)
return None
return [fix_table_name(table_names)]
return self.table_names
def _get_tables_todo(self, tables_todo: List[str]) -> List[str]:
"""Get the tables that still need to be queried."""
return [table for table in tables_todo if table not in self.schemas]
def _get_schema_for_tables(self, table_names: List[str]) -> str:
"""Create a string of the table schemas for the supplied tables."""
schemas = [
schema for table, schema in self.schemas.items() if table in table_names
]
return ", ".join(schemas)
[docs] def get_table_info(
self, table_names: Optional[Union[List[str], str]] = None
) -> str:
"""Get information about specified tables."""
tables_requested = self._get_tables_to_query(table_names)
if tables_requested is None:
return "No (valid) tables requested."
tables_todo = self._get_tables_todo(tables_requested)
for table in tables_todo:
self._get_schema(table)
return self._get_schema_for_tables(tables_requested)
[docs] async def aget_table_info(
self, table_names: Optional[Union[List[str], str]] = None
) -> str:
"""Get information about specified tables."""
tables_requested = self._get_tables_to_query(table_names)
if tables_requested is None:
return "No (valid) tables requested."
tables_todo = self._get_tables_todo(tables_requested)
await asyncio.gather(*[self._aget_schema(table) for table in tables_todo])
return self._get_schema_for_tables(tables_requested)
def _get_schema(self, table: str) -> None:
"""Get the schema for a table."""
try:
result = self.run(
f"EVALUATE TOPN({self.sample_rows_in_table_info}, {table})"
)
self.schemas[table] = json_to_md(result["results"][0]["tables"][0]["rows"])
except Timeout:
_LOGGER.warning("Timeout while getting table info for %s", table)
self.schemas[table] = "unknown"
except Exception as exc: # pylint: disable=broad-exception-caught
_LOGGER.warning("Error while getting table info for %s: %s", table, exc)
self.schemas[table] = "unknown"
async def _aget_schema(self, table: str) -> None:
"""Get the schema for a table."""
try:
result = await self.arun(
f"EVALUATE TOPN({self.sample_rows_in_table_info}, {table})"
)
self.schemas[table] = json_to_md(result["results"][0]["tables"][0]["rows"])
except ServerTimeoutError:
_LOGGER.warning("Timeout while getting table info for %s", table)
self.schemas[table] = "unknown"
except Exception as exc: # pylint: disable=broad-exception-caught
_LOGGER.warning("Error while getting table info for %s: %s", table, exc)
self.schemas[table] = "unknown"
def _create_json_content(self, command: str) -> dict[str, Any]:
"""Create the json content for the request."""
return {
"queries": [{"query": rf"{command}"}],
"impersonatedUserName": self.impersonated_user_name,
"serializerSettings": {"includeNulls": True},
}
[docs] def run(self, command: str) -> Any:
"""Execute a DAX command and return a json representing the results."""
_LOGGER.debug("Running command: %s", command)
result = requests.post(
self.request_url,
json=self._create_json_content(command),
headers=self.headers,
timeout=10,
)
return result.json()
[docs] async def arun(self, command: str) -> Any:
"""Execute a DAX command and return the result asynchronously."""
_LOGGER.debug("Running command: %s", command)
if self.aiosession:
async with self.aiosession.post(
self.request_url,
headers=self.headers,
json=self._create_json_content(command),
timeout=10,
) as response:
response_json = await response.json(content_type=response.content_type)
return response_json
async with aiohttp.ClientSession() as session:
async with session.post(
self.request_url,
headers=self.headers,
json=self._create_json_content(command),
timeout=10,
) as response:
response_json = await response.json(content_type=response.content_type)
return response_json
def json_to_md(
json_contents: List[Dict[str, Union[str, int, float]]],
table_name: Optional[str] = None,
) -> str:
"""Converts a JSON object to a markdown table."""
output_md = ""
headers = json_contents[0].keys()
for header in headers:
header.replace("[", ".").replace("]", "")
if table_name:
header.replace(f"{table_name}.", "")
output_md += f"| {header} "
output_md += "|\n"
for row in json_contents:
for value in row.values():
output_md += f"| {value} "
output_md += "|\n"
return output_md
def fix_table_name(table: str) -> str:
"""Add single quotes around table names that contain spaces."""
if " " in table and not table.startswith("'") and not table.endswith("'"):
return f"'{table}'"
return table | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html |
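A minimal usage sketch for the wrapper above; the dataset id, table names, and AAD token are placeholders, and the DAX query is only illustrative.
from langchain.utilities.powerbi import PowerBIDataset
dataset = PowerBIDataset(
    dataset_id="your-dataset-id",        # placeholder
    table_names=["Sales", "Customers"],  # placeholder table names
    token="your-aad-access-token",       # or pass an azure-identity credential instead
)
print(dataset.get_table_info("Sales"))   # caches a one-row sample as the table schema
result = dataset.run("EVALUATE TOPN(5, Sales)")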
c1806ebd-1bb9-4b13-9431-274a5f229635 | Source code for langchain.utilities.bing_search
"""Util that calls Bing Search.
In order to set this up, follow instructions at:
https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e
"""
from typing import Dict, List
import requests
from pydantic import BaseModel, Extra, root_validator
from langchain.utils import get_from_dict_or_env
[docs]class BingSearchAPIWrapper(BaseModel):
"""Wrapper for Bing Search API.
In order to set this up, follow instructions at:
https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e
"""
bing_subscription_key: str
bing_search_url: str
k: int = 10
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
def _bing_search_results(self, search_term: str, count: int) -> List[dict]:
headers = {"Ocp-Apim-Subscription-Key": self.bing_subscription_key}
params = {
"q": search_term,
"count": count,
"textDecorations": True,
"textFormat": "HTML",
}
response = requests.get(
self.bing_search_url, headers=headers, params=params # type: ignore
)
response.raise_for_status()
search_results = response.json()
return search_results["webPages"]["value"]
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and endpoint exists in environment."""
bing_subscription_key = get_from_dict_or_env(
values, "bing_subscription_key", "BING_SUBSCRIPTION_KEY"
)
values["bing_subscription_key"] = bing_subscription_key
bing_search_url = get_from_dict_or_env(
values,
"bing_search_url",
"BING_SEARCH_URL",
# default="https://api.bing.microsoft.com/v7.0/search",
)
values["bing_search_url"] = bing_search_url
return values
[docs] def run(self, query: str) -> str:
"""Run query through BingSearch and parse result."""
snippets = []
results = self._bing_search_results(query, count=self.k)
if len(results) == 0:
return "No good Bing Search Result was found"
for result in results:
snippets.append(result["snippet"])
return " ".join(snippets)
[docs] def results(self, query: str, num_results: int) -> List[Dict]:
"""Run query through BingSearch and return metadata.
Args:
query: The query to search for.
num_results: The number of results to return.
Returns:
A list of dictionaries with the following keys:
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
"""
metadata_results = []
results = self._bing_search_results(query, count=num_results)
if len(results) == 0:
return [{"Result": "No good Bing Search Result was found"}]
for result in results:
metadata_result = {
"snippet": result["snippet"],
"title": result["name"],
"link": result["url"],
}
metadata_results.append(metadata_result)
return metadata_results | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bing_search.html |
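A minimal usage sketch; the subscription key is a placeholder, and the endpoint shown is the default mentioned in `validate_environment`.
import os
from langchain.utilities.bing_search import BingSearchAPIWrapper
os.environ["BING_SUBSCRIPTION_KEY"] = "<your-subscription-key>"  # placeholder
os.environ["BING_SEARCH_URL"] = "https://api.bing.microsoft.com/v7.0/search"
search = BingSearchAPIWrapper(k=5)
print(search.run("langchain"))                     # concatenated result snippets
print(search.results("langchain", num_results=3))  # snippet/title/link metadata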
473737ff-a320-4c4c-911d-5c3006abf4ae | Source code for langchain.utilities.serpapi
"""Chain that calls SerpAPI.
Heavily borrowed from https://github.com/ofirpress/self-ask
"""
import os
import sys
from typing import Any, Dict, Optional, Tuple
import aiohttp
from pydantic import BaseModel, Extra, Field, root_validator
from langchain.utils import get_from_dict_or_env
class HiddenPrints:
"""Context manager to hide prints."""
def __enter__(self) -> None:
"""Open file to pipe stdout to."""
self._original_stdout = sys.stdout
sys.stdout = open(os.devnull, "w")
def __exit__(self, *_: Any) -> None:
"""Close file that stdout was piped to."""
sys.stdout.close()
sys.stdout = self._original_stdout
[docs]class SerpAPIWrapper(BaseModel):
"""Wrapper around SerpAPI.
To use, you should have the ``google-search-results`` python package installed,
and the environment variable ``SERPAPI_API_KEY`` set with your API key, or pass
`serpapi_api_key` as a named parameter to the constructor.
Example:
.. code-block:: python
from langchain import SerpAPIWrapper
serpapi = SerpAPIWrapper()
"""
search_engine: Any #: :meta private:
params: dict = Field(
default={
"engine": "google",
"google_domain": "google.com",
"gl": "us",
"hl": "en",
}
)
serpapi_api_key: Optional[str] = None
aiosession: Optional[aiohttp.ClientSession] = None
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
serpapi_api_key = get_from_dict_or_env(
values, "serpapi_api_key", "SERPAPI_API_KEY"
)
values["serpapi_api_key"] = serpapi_api_key
try:
from serpapi import GoogleSearch
values["search_engine"] = GoogleSearch
except ImportError:
raise ValueError(
"Could not import serpapi python package. "
"Please install it with `pip install google-search-results`."
)
return values
[docs] async def arun(self, query: str, **kwargs: Any) -> str:
"""Run query through SerpAPI and parse result async."""
return self._process_response(await self.aresults(query))
[docs] def run(self, query: str, **kwargs: Any) -> str:
"""Run query through SerpAPI and parse result."""
return self._process_response(self.results(query))
[docs] def results(self, query: str) -> dict:
"""Run query through SerpAPI and return the raw result."""
params = self.get_params(query)
with HiddenPrints():
search = self.search_engine(params)
res = search.get_dict()
return res
[docs] async def aresults(self, query: str) -> dict:
"""Use aiohttp to run query through SerpAPI and return the results async."""
def construct_url_and_params() -> Tuple[str, Dict[str, str]]:
params = self.get_params(query)
params["source"] = "python"
if self.serpapi_api_key:
params["serp_api_key"] = self.serpapi_api_key
params["output"] = "json"
url = "https://serpapi.com/search"
return url, params
url, params = construct_url_and_params()
if not self.aiosession:
async with aiohttp.ClientSession() as session:
async with session.get(url, params=params) as response:
res = await response.json()
else:
async with self.aiosession.get(url, params=params) as response:
res = await response.json()
return res
[docs] def get_params(self, query: str) -> Dict[str, str]:
"""Get parameters for SerpAPI."""
_params = {
"api_key": self.serpapi_api_key,
"q": query,
}
params = {**self.params, **_params}
return params
@staticmethod
def _process_response(res: dict) -> str:
"""Process response from SerpAPI."""
if "error" in res.keys():
raise ValueError(f"Got error from SerpAPI: {res['error']}")
if "answer_box" in res.keys() and type(res["answer_box"]) == list:
res["answer_box"] = res["answer_box"][0]
if "answer_box" in res.keys() and "answer" in res["answer_box"].keys():
toret = res["answer_box"]["answer"]
elif "answer_box" in res.keys() and "snippet" in res["answer_box"].keys():
toret = res["answer_box"]["snippet"]
elif (
"answer_box" in res.keys()
and "snippet_highlighted_words" in res["answer_box"].keys()
):
toret = res["answer_box"]["snippet_highlighted_words"][0]
elif (
"sports_results" in res.keys()
and "game_spotlight" in res["sports_results"].keys()
):
toret = res["sports_results"]["game_spotlight"]
elif (
"shopping_results" in res.keys()
and "title" in res["shopping_results"][0].keys()
):
toret = res["shopping_results"][:3]
elif (
"knowledge_graph" in res.keys()
and "description" in res["knowledge_graph"].keys()
):
toret = res["knowledge_graph"]["description"]
elif "snippet" in res["organic_results"][0].keys():
toret = res["organic_results"][0]["snippet"]
elif "link" in res["organic_results"][0].keys():
toret = res["organic_results"][0]["link"]
else:
toret = "No good search result found"
return toret | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html |
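A minimal usage sketch, assuming the `google-search-results` package is installed and `SERPAPI_API_KEY` is set; the query is only illustrative.
from langchain import SerpAPIWrapper
search = SerpAPIWrapper()
answer = search.run("What is the capital of France?")   # parsed answer string
raw = search.results("What is the capital of France?")  # raw SerpAPI response dict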
3bc9c5af-9b0b-4e0b-96ef-6845cad761b5 | Source code for langchain.utilities.awslambda
"""Util that calls Lambda."""
import json
from typing import Any, Dict, Optional
from pydantic import BaseModel, Extra, root_validator
[docs]class LambdaWrapper(BaseModel):
"""Wrapper for AWS Lambda SDK.
Docs for using:
1. pip install boto3
2. Create a lambda function using the AWS Console or CLI
3. Run `aws configure` and enter your AWS credentials
"""
lambda_client: Any #: :meta private:
function_name: Optional[str] = None
awslambda_tool_name: Optional[str] = None
awslambda_tool_description: Optional[str] = None
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that python package exists in environment."""
try:
import boto3
except ImportError:
raise ImportError(
"boto3 is not installed. Please install it with `pip install boto3`"
)
values["lambda_client"] = boto3.client("lambda")
values["function_name"] = values["function_name"]
return values
[docs] def run(self, query: str) -> str:
"""Invoke Lambda function and parse result."""
res = self.lambda_client.invoke(
FunctionName=self.function_name,
InvocationType="RequestResponse",
Payload=json.dumps({"body": query}),
)
try:
payload_stream = res["Payload"]
payload_string = payload_stream.read().decode("utf-8")
answer = json.loads(payload_string)["body"]
except StopIteration:
return "Failed to parse response from Lambda"
if answer is None or answer == "":
# We don't want to return the assumption alone if answer is empty
return "Request failed."
else:
return f"Result: {answer}" | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/awslambda.html |
c99077d1-6383-4987-b1ed-9ea508a704bb | Source code for langchain.utilities.bash
"""Wrapper around subprocess to run commands."""
from __future__ import annotations
import platform
import re
import subprocess
from typing import TYPE_CHECKING, List, Union
from uuid import uuid4
if TYPE_CHECKING:
import pexpect
def _lazy_import_pexpect() -> pexpect:
"""Import pexpect only when needed."""
if platform.system() == "Windows":
raise ValueError("Persistent bash processes are not yet supported on Windows.")
try:
import pexpect
except ImportError:
raise ImportError(
"pexpect required for persistent bash processes."
" To install, run `pip install pexpect`."
)
return pexpect
[docs]class BashProcess:
"""Executes bash commands and returns the output."""
def __init__(
self,
strip_newlines: bool = False,
return_err_output: bool = False,
persistent: bool = False,
):
"""Initialize with stripping newlines."""
self.strip_newlines = strip_newlines
self.return_err_output = return_err_output
self.prompt = ""
self.process = None
if persistent:
self.prompt = str(uuid4())
self.process = self._initialize_persistent_process(self.prompt)
@staticmethod
def _initialize_persistent_process(prompt: str) -> pexpect.spawn:
# Start bash in a clean environment
# Doesn't work on windows
pexpect = _lazy_import_pexpect()
process = pexpect.spawn(
"env", ["-i", "bash", "--norc", "--noprofile"], encoding="utf-8"
)
# Set the custom prompt
process.sendline("PS1=" + prompt)
process.expect_exact(prompt, timeout=10)
return process
[docs] def run(self, commands: Union[str, List[str]]) -> str:
"""Run commands and return final output."""
if isinstance(commands, str):
commands = [commands]
commands = ";".join(commands)
if self.process is not None:
return self._run_persistent(
commands,
)
else:
return self._run(commands)
def _run(self, command: str) -> str:
"""Run commands and return final output."""
try:
output = subprocess.run(
command,
shell=True,
check=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
).stdout.decode()
except subprocess.CalledProcessError as error:
if self.return_err_output:
return error.stdout.decode()
return str(error)
if self.strip_newlines:
output = output.strip()
return output
[docs] def process_output(self, output: str, command: str) -> str:
# Remove the command from the output using a regular expression
pattern = re.escape(command) + r"\s*\n"
output = re.sub(pattern, "", output, count=1)
return output.strip()
def _run_persistent(self, command: str) -> str:
"""Run commands and return final output."""
pexpect = _lazy_import_pexpect()
if self.process is None:
raise ValueError("Process not initialized")
self.process.sendline(command)
# Clear the output with an empty string
self.process.expect(self.prompt, timeout=10)
self.process.sendline("")
try:
self.process.expect([self.prompt, pexpect.EOF], timeout=10)
except pexpect.TIMEOUT:
return f"Timeout error while executing command {command}"
if self.process.after == pexpect.EOF:
return f"Exited with error status: {self.process.exitstatus}"
output = self.process.before
output = self.process_output(output, command)
if self.strip_newlines:
return output.strip()
return output | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bash.html |
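A minimal usage sketch for `BashProcess`; the commands are only illustrative, and persistent mode requires `pexpect` and is not supported on Windows.
from langchain.utilities.bash import BashProcess
bash = BashProcess(strip_newlines=True)
print(bash.run("echo hello && pwd"))
# Persistent mode keeps shell state (working directory, variables) between calls.
persistent = BashProcess(persistent=True)
persistent.run("cd /tmp")
print(persistent.run("pwd"))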
0ff2eaf8-e2f2-4547-8de6-3ab87920b44f | Source code for langchain.utilities.google_search
"""Util that calls Google Search."""
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Extra, root_validator
from langchain.utils import get_from_dict_or_env
[docs]class GoogleSearchAPIWrapper(BaseModel):
"""Wrapper for Google Search API.
Adapted from: https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search
TODO: DOCS for using it
1. Install google-api-python-client
- If you don't already have a Google account, sign up.
- If you have never created a Google APIs Console project,
read the Managing Projects page and create a project in the Google API Console.
- Install the library using pip install google-api-python-client
The current version of the library is 2.70.0 at this time
2. To create an API key:
- Navigate to the APIs & ServicesβCredentials panel in Cloud Console.
- Select Create credentials, then select API key from the drop-down menu.
- The API key created dialog box displays your newly created key.
- You now have an API_KEY
3. Setup Custom Search Engine so you can search the entire web
- Create a custom search engine in this link.
- In Sites to search, add any valid URL (i.e. www.stackoverflow.com).
- That's all you have to fill up, the rest doesn't matter.
In the left-side menu, click Edit search engine → {your search engine name} → Setup. Set Search the entire web to ON. Remove the URL you added from
β Setup Set Search the entire web to ON. Remove the URL you added from
the list of Sites to search.
- Under Search engine ID you'll find the search-engine-ID.
4. Enable the Custom Search API
- Navigate to the APIs & ServicesβDashboard panel in Cloud Console.
- Click Enable APIs and Services.
- Search for Custom Search API and click on it.
- Click Enable.
URL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis.com
"""
search_engine: Any #: :meta private:
google_api_key: Optional[str] = None
google_cse_id: Optional[str] = None
k: int = 10
siterestrict: bool = False
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
def _google_search_results(self, search_term: str, **kwargs: Any) -> List[dict]:
cse = self.search_engine.cse()
if self.siterestrict:
cse = cse.siterestrict()
res = cse.list(q=search_term, cx=self.google_cse_id, **kwargs).execute()
return res.get("items", [])
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
google_api_key = get_from_dict_or_env(
values, "google_api_key", "GOOGLE_API_KEY"
)
values["google_api_key"] = google_api_key
google_cse_id = get_from_dict_or_env(values, "google_cse_id", "GOOGLE_CSE_ID")
values["google_cse_id"] = google_cse_id
try:
from googleapiclient.discovery import build
except ImportError:
raise ImportError(
"google-api-python-client is not installed. "
"Please install it with `pip install google-api-python-client`"
)
service = build("customsearch", "v1", developerKey=google_api_key)
values["search_engine"] = service
return values
[docs] def run(self, query: str) -> str:
"""Run query through GoogleSearch and parse result."""
snippets = []
results = self._google_search_results(query, num=self.k)
if len(results) == 0:
return "No good Google Search Result was found"
for result in results:
if "snippet" in result:
snippets.append(result["snippet"])
return " ".join(snippets)
[docs] def results(self, query: str, num_results: int) -> List[Dict]:
"""Run query through GoogleSearch and return metadata.
Args:
query: The query to search for.
num_results: The number of results to return.
Returns:
A list of dictionaries with the following keys:
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
"""
metadata_results = []
results = self._google_search_results(query, num=num_results)
if len(results) == 0:
return [{"Result": "No good Google Search Result was found"}]
for result in results:
metadata_result = {
"title": result["title"],
"link": result["link"],
}
if "snippet" in result:
metadata_result["snippet"] = result["snippet"]
metadata_results.append(metadata_result)
return metadata_results | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html |
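A minimal usage sketch; the API key and custom search engine id are placeholders obtained by following the setup steps in the class docstring.
import os
from langchain.utilities.google_search import GoogleSearchAPIWrapper
os.environ["GOOGLE_API_KEY"] = "<your-api-key>"  # placeholders
os.environ["GOOGLE_CSE_ID"] = "<your-cse-id>"
search = GoogleSearchAPIWrapper(k=5)
print(search.run("langchain"))
print(search.results("langchain", num_results=3))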
b22c87b1-aa9b-4db4-85d5-dd4a54ed4e04 | Source code for langchain.utilities.max_compute
from __future__ import annotations
from typing import TYPE_CHECKING, Iterator, List, Optional
from langchain.utils import get_from_env
if TYPE_CHECKING:
from odps import ODPS
[docs]class MaxComputeAPIWrapper:
"""Interface for querying Alibaba Cloud MaxCompute tables."""
def __init__(self, client: ODPS):
"""Initialize MaxCompute document loader.
Args:
client: odps.ODPS MaxCompute client object.
"""
self.client = client
[docs] @classmethod
def from_params(
cls,
endpoint: str,
project: str,
*,
access_id: Optional[str] = None,
secret_access_key: Optional[str] = None,
) -> MaxComputeAPIWrapper:
"""Convenience constructor that builds the odsp.ODPS MaxCompute client from
given parameters.
Args:
endpoint: MaxCompute endpoint.
project: A project is a basic organizational unit of MaxCompute, which is
similar to a database.
access_id: MaxCompute access ID. Should be passed in directly or set as the
environment variable `MAX_COMPUTE_ACCESS_ID`.
secret_access_key: MaxCompute secret access key. Should be passed in
directly or set as the environment variable
`MAX_COMPUTE_SECRET_ACCESS_KEY`.
"""
try:
from odps import ODPS
except ImportError as ex:
raise ImportError(
"Could not import pyodps python package. "
"Please install it with `pip install pyodps` or refer to "
"https://pyodps.readthedocs.io/."
) from ex
access_id = access_id or get_from_env("access_id", "MAX_COMPUTE_ACCESS_ID")
secret_access_key = secret_access_key or get_from_env(
"secret_access_key", "MAX_COMPUTE_SECRET_ACCESS_KEY"
)
client = ODPS(
access_id=access_id,
secret_access_key=secret_access_key,
project=project,
endpoint=endpoint,
)
if not client.exist_project(project):
raise ValueError(f'The project "{project}" does not exist.')
return cls(client)
[docs] def lazy_query(self, query: str) -> Iterator[dict]:
# Execute SQL query.
with self.client.execute_sql(query).open_reader() as reader:
if reader.count == 0:
raise ValueError("Table contains no data.")
for record in reader:
yield {k: v for k, v in record}
[docs] def query(self, query: str) -> List[dict]:
return list(self.lazy_query(query)) | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/max_compute.html |
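A minimal usage sketch; the endpoint, project, and table are placeholders, and the access id and secret key are read from the MAX_COMPUTE_ACCESS_ID / MAX_COMPUTE_SECRET_ACCESS_KEY environment variables.
from langchain.utilities.max_compute import MaxComputeAPIWrapper
wrapper = MaxComputeAPIWrapper.from_params(
    endpoint="http://service.odps.aliyun.com/api",  # placeholder endpoint
    project="your_project",                         # placeholder project name
)
rows = wrapper.query("SELECT * FROM your_table LIMIT 10")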
1770fe1a-05eb-4786-800b-46e436fe2b30 | Source code for langchain.utilities.arxiv
"""Util that calls Arxiv."""
import logging
import os
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Extra, root_validator
from langchain.schema import Document
logger = logging.getLogger(__name__)
[docs]class ArxivAPIWrapper(BaseModel):
"""Wrapper around ArxivAPI.
To use, you should have the ``arxiv`` python package installed.
https://lukasschwab.me/arxiv.py/index.html
This wrapper will use the Arxiv API to conduct searches and
fetch document summaries. By default, it will return the document summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
Set doc_content_chars_max=None if you don't want to limit the content size.
Parameters:
top_k_results: number of the top-scored document used for the arxiv tool
ARXIV_MAX_QUERY_LENGTH: the maximum length of the query passed to the arxiv tool.
load_max_docs: a limit to the number of loaded documents
load_all_available_meta:
if True: the `metadata` of the loaded Documents gets all available meta info
(see https://lukasschwab.me/arxiv.py/index.html#Result),
if False: the `metadata` gets only the most informative fields.
"""
arxiv_search: Any #: :meta private:
arxiv_exceptions: Any # :meta private:
top_k_results: int = 3
ARXIV_MAX_QUERY_LENGTH = 300
load_max_docs: int = 100
load_all_available_meta: bool = False
doc_content_chars_max: Optional[int] = 4000
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that the python package exists in environment."""
try:
import arxiv
values["arxiv_search"] = arxiv.Search
values["arxiv_exceptions"] = (
arxiv.ArxivError,
arxiv.UnexpectedEmptyPageError,
arxiv.HTTPError,
)
values["arxiv_result"] = arxiv.Result
except ImportError:
raise ImportError(
"Could not import arxiv python package. "
"Please install it with `pip install arxiv`."
)
return values
[docs] def run(self, query: str) -> str:
"""
Run Arxiv search and get the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
See https://lukasschwab.me/arxiv.py/index.html#Result
It uses only the most informative fields of article meta information.
"""
try:
results = self.arxiv_search( # type: ignore
query[: self.ARXIV_MAX_QUERY_LENGTH], max_results=self.top_k_results
).results()
except self.arxiv_exceptions as ex:
return f"Arxiv exception: {ex}"
docs = [
f"Published: {result.updated.date()}\nTitle: {result.title}\n"
f"Authors: {', '.join(a.name for a in result.authors)}\n"
f"Summary: {result.summary}"
for result in results
]
if docs:
return "\n\n".join(docs)[: self.doc_content_chars_max]
else:
return "No good Arxiv Result was found"
[docs] def load(self, query: str) -> List[Document]:
"""
Run Arxiv search and get the article texts plus the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
Returns: a list of documents with the document.page_content in text format
"""
try:
import fitz
except ImportError:
raise ImportError(
"PyMuPDF package not found, please install it with "
"`pip install pymupdf`"
)
try:
results = self.arxiv_search( # type: ignore
query[: self.ARXIV_MAX_QUERY_LENGTH], max_results=self.load_max_docs
).results()
except self.arxiv_exceptions as ex:
logger.debug("Error on arxiv: %s", ex)
return []
docs: List[Document] = []
for result in results:
try:
doc_file_name: str = result.download_pdf()
with fitz.open(doc_file_name) as doc_file:
text: str = "".join(page.get_text() for page in doc_file)
except FileNotFoundError as f_ex:
logger.debug(f_ex)
continue
if self.load_all_available_meta:
extra_metadata = {
"entry_id": result.entry_id,
"published_first_time": str(result.published.date()),
"comment": result.comment,
"journal_ref": result.journal_ref,
"doi": result.doi,
"primary_category": result.primary_category,
"categories": result.categories,
"links": [link.href for link in result.links],
}
else:
extra_metadata = {}
metadata = {
"Published": str(result.updated.date()),
"Title": result.title,
"Authors": ", ".join(a.name for a in result.authors),
"Summary": result.summary,
**extra_metadata,
}
doc = Document(
page_content=text[: self.doc_content_chars_max], metadata=metadata
)
docs.append(doc)
os.remove(doc_file_name)
return docs | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/arxiv.html |
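A minimal usage sketch; the queries are illustrative, `run` needs only the `arxiv` package, while `load` additionally requires `pymupdf` to extract the PDF text.
from langchain.utilities.arxiv import ArxivAPIWrapper
arxiv = ArxivAPIWrapper(top_k_results=2, doc_content_chars_max=2000)
print(arxiv.run("quantum error correction"))  # formatted article summaries
docs = arxiv.load("1605.08386")               # full-text Documents with metadata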
56b5fa74-4ad1-483e-b944-0d3d4021e5f1 | Source code for langchain.utilities.python
import sys
from io import StringIO
from typing import Dict, Optional
from pydantic import BaseModel, Field
[docs]class PythonREPL(BaseModel):
"""Simulates a standalone Python REPL."""
globals: Optional[Dict] = Field(default_factory=dict, alias="_globals")
locals: Optional[Dict] = Field(default_factory=dict, alias="_locals")
[docs] def run(self, command: str) -> str:
"""Run command with own globals/locals and returns anything printed."""
old_stdout = sys.stdout
sys.stdout = mystdout = StringIO()
try:
exec(command, self.globals, self.locals)
sys.stdout = old_stdout
output = mystdout.getvalue()
except Exception as e:
sys.stdout = old_stdout
output = repr(e)
return output | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/python.html |
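A minimal usage sketch; note that state persists between calls because the same globals/locals dicts are reused by `exec`.
from langchain.utilities.python import PythonREPL
repl = PythonREPL()
print(repl.run("x = 2 ** 10\nprint(x)"))  # "1024"
print(repl.run("print(x + 1)"))           # "1025" - x persisted in the REPL state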
47a04a73-5e12-42bf-bafb-afdfd2fcff8a | Source code for langchain.utilities.openweathermap
"""Util that calls OpenWeatherMap using PyOWM."""
from typing import Any, Dict, Optional
from pydantic import Extra, root_validator
from langchain.tools.base import BaseModel
from langchain.utils import get_from_dict_or_env
[docs]class OpenWeatherMapAPIWrapper(BaseModel):
"""Wrapper for OpenWeatherMap API using PyOWM.
Docs for using:
1. Go to OpenWeatherMap and sign up for an API key
2. Save your API KEY into OPENWEATHERMAP_API_KEY env variable
3. pip install pyowm
"""
owm: Any
openweathermap_api_key: Optional[str] = None
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key exists in environment."""
openweathermap_api_key = get_from_dict_or_env(
values, "openweathermap_api_key", "OPENWEATHERMAP_API_KEY"
)
try:
import pyowm
except ImportError:
raise ImportError(
"pyowm is not installed. Please install it with `pip install pyowm`"
)
owm = pyowm.OWM(openweathermap_api_key)
values["owm"] = owm
return values
def _format_weather_info(self, location: str, w: Any) -> str:
detailed_status = w.detailed_status
wind = w.wind()
humidity = w.humidity
temperature = w.temperature("celsius")
rain = w.rain
heat_index = w.heat_index
clouds = w.clouds
return (
f"In {location}, the current weather is as follows:\n"
f"Detailed status: {detailed_status}\n"
f"Wind speed: {wind['speed']} m/s, direction: {wind['deg']}Β°\n"
f"Humidity: {humidity}%\n"
f"Temperature: \n"
f" - Current: {temperature['temp']}Β°C\n"
f" - High: {temperature['temp_max']}Β°C\n"
f" - Low: {temperature['temp_min']}Β°C\n"
f" - Feels like: {temperature['feels_like']}Β°C\n"
f"Rain: {rain}\n"
f"Heat index: {heat_index}\n"
f"Cloud cover: {clouds}%"
)
[docs] def run(self, location: str) -> str:
"""Get the current weather information for a specified location."""
mgr = self.owm.weather_manager()
observation = mgr.weather_at_place(location)
w = observation.weather
return self._format_weather_info(location, w) | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openweathermap.html |
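A short usage sketch, assuming pyowm is installed and OPENWEATHERMAP_API_KEY is exported; the location string is illustrative and follows PyOWM's "City,CountryCode" form.
from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper
weather = OpenWeatherMapAPIWrapper()  # reads OPENWEATHERMAP_API_KEY from the environment
print(weather.run("London,GB"))  # illustrative location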
7c933dd6-6979-4e97-b0dd-d28277e46393 | Source code for langchain.utilities.metaphor_search
"""Util that calls Metaphor Search API.
In order to set this up, follow instructions at:
"""
import json
from typing import Dict, List, Optional
import aiohttp
import requests
from pydantic import BaseModel, Extra, root_validator
from langchain.utils import get_from_dict_or_env
METAPHOR_API_URL = "https://api.metaphor.systems"
[docs]class MetaphorSearchAPIWrapper(BaseModel):
"""Wrapper for Metaphor Search API."""
metaphor_api_key: str
k: int = 10
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
def _metaphor_search_results(
self,
query: str,
num_results: int,
include_domains: Optional[List[str]] = None,
exclude_domains: Optional[List[str]] = None,
start_crawl_date: Optional[str] = None,
end_crawl_date: Optional[str] = None,
start_published_date: Optional[str] = None,
end_published_date: Optional[str] = None,
) -> List[dict]:
headers = {"X-Api-Key": self.metaphor_api_key}
params = {
"numResults": num_results,
"query": query,
"includeDomains": include_domains,
"excludeDomains": exclude_domains,
"startCrawlDate": start_crawl_date,
"endCrawlDate": end_crawl_date,
"startPublishedDate": start_published_date,
"endPublishedDate": end_published_date,
}
response = requests.post(
# type: ignore
f"{METAPHOR_API_URL}/search",
headers=headers,
json=params,
)
response.raise_for_status()
search_results = response.json()
print(search_results)
return search_results["results"]
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and endpoint exists in environment."""
metaphor_api_key = get_from_dict_or_env(
values, "metaphor_api_key", "METAPHOR_API_KEY"
)
values["metaphor_api_key"] = metaphor_api_key
return values
[docs] def results(
self,
query: str,
num_results: int,
include_domains: Optional[List[str]] = None,
exclude_domains: Optional[List[str]] = None,
start_crawl_date: Optional[str] = None,
end_crawl_date: Optional[str] = None,
start_published_date: Optional[str] = None,
end_published_date: Optional[str] = None,
) -> List[Dict]:
"""Run query through Metaphor Search and return metadata.
Args:
query: The query to search for.
num_results: The number of results to return.
Returns:
A list of dictionaries with the following keys:
title - The title of the result.
url - The URL of the result.
author - Author of the content, if applicable. Otherwise, None.
published_date - Estimated date published
in YYYY-MM-DD format. Otherwise, None.
"""
raw_search_results = self._metaphor_search_results(
query,
num_results=num_results,
include_domains=include_domains,
exclude_domains=exclude_domains,
start_crawl_date=start_crawl_date,
end_crawl_date=end_crawl_date,
start_published_date=start_published_date,
end_published_date=end_published_date,
)
return self._clean_results(raw_search_results)
[docs] async def results_async(
self,
query: str,
num_results: int,
include_domains: Optional[List[str]] = None,
exclude_domains: Optional[List[str]] = None,
start_crawl_date: Optional[str] = None,
end_crawl_date: Optional[str] = None,
start_published_date: Optional[str] = None,
end_published_date: Optional[str] = None,
) -> List[Dict]:
"""Get results from the Metaphor Search API asynchronously."""
# Function to perform the API call
async def fetch() -> str:
headers = {"X-Api-Key": self.metaphor_api_key}
params = {
"numResults": num_results,
"query": query,
"includeDomains": include_domains,
"excludeDomains": exclude_domains,
"startCrawlDate": start_crawl_date,
"endCrawlDate": end_crawl_date,
"startPublishedDate": start_published_date,
"endPublishedDate": end_published_date,
}
async with aiohttp.ClientSession() as session:
async with session.post(
f"{METAPHOR_API_URL}/search", json=params, headers=headers
) as res:
if res.status == 200:
data = await res.text()
return data
else:
raise Exception(f"Error {res.status}: {res.reason}")
results_json_str = await fetch()
results_json = json.loads(results_json_str)
return self._clean_results(results_json["results"])
def _clean_results(self, raw_search_results: List[Dict]) -> List[Dict]:
cleaned_results = []
for result in raw_search_results:
cleaned_results.append(
{
"title": result["title"],
"url": result["url"],
"author": result["author"],
"published_date": result["publishedDate"],
}
)
return cleaned_results | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/metaphor_search.html |
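A usage sketch for the wrapper above, assuming METAPHOR_API_KEY is exported (or passed as metaphor_api_key); the query is illustrative.
from langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper
metaphor = MetaphorSearchAPIWrapper()  # picks up METAPHOR_API_KEY from the environment
results = metaphor.results("retrieval augmented generation", num_results=3)  # illustrative query
print(results[0]["title"], results[0]["url"])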
cfacfae8-e824-429e-9346-631f7197f444 | Source code for langchain.utilities.bibtex
"""Util that calls bibtexparser."""
import logging
from typing import Any, Dict, List, Mapping
from pydantic import BaseModel, Extra, root_validator
logger = logging.getLogger(__name__)
OPTIONAL_FIELDS = [
"annotate",
"booktitle",
"editor",
"howpublished",
"journal",
"keywords",
"note",
"organization",
"publisher",
"school",
"series",
"type",
"doi",
"issn",
"isbn",
]
[docs]class BibtexparserWrapper(BaseModel):
"""Wrapper around bibtexparser.
To use, you should have the ``bibtexparser`` python package installed.
https://bibtexparser.readthedocs.io/en/master/
This wrapper will use bibtexparser to load a collection of references from
a bibtex file and fetch document summaries.
"""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that the python package exists in environment."""
try:
import bibtexparser # noqa
except ImportError:
raise ImportError(
"Could not import bibtexparser python package. "
"Please install it with `pip install bibtexparser`."
)
return values
[docs] def load_bibtex_entries(self, path: str) -> List[Dict[str, Any]]:
"""Load bibtex entries from the bibtex file at the given path."""
import bibtexparser
with open(path) as file:
entries = bibtexparser.load(file).entries
return entries
[docs] def get_metadata(
self, entry: Mapping[str, Any], load_extra: bool = False
) -> Dict[str, Any]:
"""Get metadata for the given entry."""
publication = entry.get("journal") or entry.get("booktitle")
if "url" in entry:
url = entry["url"]
elif "doi" in entry:
url = f'https://doi.org/{entry["doi"]}'
else:
url = None
meta = {
"id": entry.get("ID"),
"published_year": entry.get("year"),
"title": entry.get("title"),
"publication": publication,
"authors": entry.get("author"),
"abstract": entry.get("abstract"),
"url": url,
}
if load_extra:
for field in OPTIONAL_FIELDS:
meta[field] = entry.get(field)
return {k: v for k, v in meta.items() if v is not None} | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bibtex.html |
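A minimal sketch of the wrapper above; references.bib is a hypothetical file path and bibtexparser must be installed.
from langchain.utilities.bibtex import BibtexparserWrapper
bibtex = BibtexparserWrapper()
entries = bibtex.load_bibtex_entries("references.bib")  # hypothetical .bib file
print(bibtex.get_metadata(entries[0], load_extra=True))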
34bb4b5b-0bc0-4726-a0f1-3481669e9b4b | Source code for langchain.utilities.searx_search
"""Utility for using SearxNG meta search API.
SearxNG is a privacy-friendly free metasearch engine that aggregates results from
`multiple search engines
<https://docs.searxng.org/admin/engines/configured_engines.html>`_ and databases and
supports the `OpenSearch
<https://github.com/dewitt/opensearch/blob/master/opensearch-1-1-draft-6.md>`_
specification.
More details on the installation instructions `here. <../../integrations/searx.html>`_
For the search API refer to https://docs.searxng.org/dev/search_api.html
Quick Start
-----------
In order to use this utility you need to provide the searx host. This can be done
by passing the named parameter :attr:`searx_host <SearxSearchWrapper.searx_host>`
or exporting the environment variable SEARX_HOST.
Note: this is the only required parameter.
Then create a searx search instance like this:
.. code-block:: python
from langchain.utilities import SearxSearchWrapper
# when the host starts with `http` SSL is disabled and the connection
# is assumed to be on a private network
searx_host='http://self.hosted'
search = SearxSearchWrapper(searx_host=searx_host)
You can now use the ``search`` instance to query the searx API.
Searching
---------
Use the :meth:`run() <SearxSearchWrapper.run>` and
:meth:`results() <SearxSearchWrapper.results>` methods to query the searx API.
Other methods are available for convenience.
:class:`SearxResults` is a convenience wrapper around the raw json result.
Example usage of the ``run`` method to make a search:
.. code-block:: python
s.run(query="what is the best search engine?")
Engine Parameters
-----------------
You can pass any `accepted searx search API
<https://docs.searxng.org/dev/search_api.html>`_ parameters to the
:py:class:`SearxSearchWrapper` instance.
In the following example we are using the
:attr:`engines <SearxSearchWrapper.engines>` and the ``language`` parameters:
.. code-block:: python
# assuming the searx host is set as above or exported as an env variable
s = SearxSearchWrapper(engines=['google', 'bing'],
language='es')
Search Tips
-----------
Searx offers a special
`search syntax <https://docs.searxng.org/user/index.html#search-syntax>`_
that can also be used instead of passing engine parameters.
For example the following query:
.. code-block:: python
s = SearxSearchWrapper("langchain library", engines=['github'])
# can also be written as:
s = SearxSearchWrapper("langchain library !github")
# or even:
s = SearxSearchWrapper("langchain library !gh")
In some situations you might want to pass an extra string to the search query.
For example when the `run()` method is called by an agent. The search suffix can
also be used as a way to pass extra parameters to searx or the underlying search
engines.
.. code-block:: python
# select the github engine and pass the search suffix
s = SearxSearchWrapper("langchain library", query_suffix="!gh")
s = SearxSearchWrapper("langchain library")
# or select github using the conventional google search syntax
s.run("large language models", query_suffix="site:github.com")
*NOTE*: A search suffix can be defined on both the instance and the method level.
The resulting query will be the concatenation of the two with the former taking
precedence.
See `SearxNG Configured Engines
<https://docs.searxng.org/admin/engines/configured_engines.html>`_ and
`SearxNG Search Syntax <https://docs.searxng.org/user/index.html#id1>`_
for more details.
Notes
-----
This wrapper is based on the SearxNG fork https://github.com/searxng/searxng which is
better maintained than the original Searx project and offers more features.
Public searxNG instances often use a rate limiter for API usage, so you might want to
use a self hosted instance and disable the rate limiter.
If you are self-hosting an instance you can customize the rate limiter for your
own network as described
`here <https://docs.searxng.org/src/searx.botdetection.html#limiter-src>`_.
For a list of public SearxNG instances see https://searx.space/
"""
import json
from typing import Any, Dict, List, Optional
import aiohttp
import requests
from pydantic import BaseModel, Extra, Field, PrivateAttr, root_validator, validator
from langchain.utils import get_from_dict_or_env
def _get_default_params() -> dict:
return {"language": "en", "format": "json"}
class SearxResults(dict):
"""Dict like wrapper around search api results."""
_data = ""
def __init__(self, data: str):
"""Take a raw result from Searx and make it into a dict like object."""
json_data = json.loads(data)
super().__init__(json_data)
self.__dict__ = self
def __str__(self) -> str:
"""Text representation of searx result."""
return self._data
@property
def results(self) -> Any:
"""Silence mypy for accessing this field.
:meta private:
"""
return self.get("results")
@property
def answers(self) -> Any:
"""Helper accessor on the json result."""
return self.get("answers")
[docs]class SearxSearchWrapper(BaseModel):
"""Wrapper for Searx API.
To use you need to provide the searx host by passing the named parameter
``searx_host`` or exporting the environment variable ``SEARX_HOST``.
In some situations you might want to disable SSL verification, for example
if you are running searx locally. You can do this by passing the named parameter
``unsecure``. You can also pass the host url scheme as ``http`` to disable SSL.
Example:
.. code-block:: python
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")
Example with SSL disabled:
.. code-block:: python
from langchain.utilities import SearxSearchWrapper
# note the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://localhost:8888",
unsecure=True)
"""
_result: SearxResults = PrivateAttr()
searx_host: str = ""
unsecure: bool = False
params: dict = Field(default_factory=_get_default_params)
headers: Optional[dict] = None
engines: Optional[List[str]] = []
categories: Optional[List[str]] = []
query_suffix: Optional[str] = ""
k: int = 10
aiosession: Optional[Any] = None
@validator("unsecure")
def disable_ssl_warnings(cls, v: bool) -> bool:
"""Disable SSL warnings."""
if v:
# requests.urllib3.disable_warnings()
try:
import urllib3
urllib3.disable_warnings()
except ImportError as e:
print(e)
return v
@root_validator()
def validate_params(cls, values: Dict) -> Dict:
"""Validate that custom searx params are merged with default ones."""
user_params = values["params"]
default = _get_default_params()
values["params"] = {**default, **user_params}
engines = values.get("engines")
if engines:
values["params"]["engines"] = ",".join(engines)
categories = values.get("categories")
if categories:
values["params"]["categories"] = ",".join(categories)
searx_host = get_from_dict_or_env(values, "searx_host", "SEARX_HOST")
if not searx_host.startswith("http"):
print(
f"Warning: missing the url scheme on host \
! assuming secure https://{searx_host} "
)
searx_host = "https://" + searx_host
elif searx_host.startswith("http://"):
values["unsecure"] = True
cls.disable_ssl_warnings(True)
values["searx_host"] = searx_host
return values
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
def _searx_api_query(self, params: dict) -> SearxResults:
"""Actual request to searx API."""
raw_result = requests.get(
self.searx_host,
headers=self.headers,
params=params,
verify=not self.unsecure,
)
# test if http result is ok
if not raw_result.ok:
raise ValueError("Searx API returned an error: ", raw_result.text)
res = SearxResults(raw_result.text)
self._result = res
return res
async def _asearx_api_query(self, params: dict) -> SearxResults:
if not self.aiosession:
async with aiohttp.ClientSession() as session:
async with session.get(
self.searx_host,
headers=self.headers,
params=params,
ssl=False if self.unsecure else None,
) as response:
if not response.ok:
raise ValueError("Searx API returned an error: ", response.text)
result = SearxResults(await response.text())
self._result = result
else:
async with self.aiosession.get(
self.searx_host,
headers=self.headers,
params=params,
verify=not self.unsecure,
) as response:
if not response.ok:
raise ValueError("Searx API returned an error: ", response.text)
result = SearxResults(await response.text())
self._result = result
return result
[docs] def run(
self,
query: str,
engines: Optional[List[str]] = None,
categories: Optional[List[str]] = None,
query_suffix: Optional[str] = "",
**kwargs: Any,
) -> str:
"""Run query through Searx API and parse results.
You can pass any other params to the searx query API.
Args:
query: The query to search for.
query_suffix: Extra suffix appended to the query.
engines: List of engines to use for the query.
categories: List of categories to use for the query.
**kwargs: extra parameters to pass to the searx API.
Returns:
str: The result of the query.
Raises:
ValueError: If an error occurred with the query.
Example:
This will make a query to the qwant engine:
.. code-block:: python
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France ?", engine="qwant")
# the same result can be achieved using the `!` syntax of searx
# to select the engine using `query_suffix`
searx.run("what is the weather in France ?", query_suffix="!qwant")
"""
_params = {
"q": query,
}
params = {**self.params, **_params, **kwargs}
if self.query_suffix and len(self.query_suffix) > 0:
params["q"] += " " + self.query_suffix
if isinstance(query_suffix, str) and len(query_suffix) > 0:
params["q"] += " " + query_suffix
if isinstance(engines, list) and len(engines) > 0:
params["engines"] = ",".join(engines)
if isinstance(categories, list) and len(categories) > 0:
params["categories"] = ",".join(categories)
res = self._searx_api_query(params)
if len(res.answers) > 0:
toret = res.answers[0]
# only return the content of the results list
elif len(res.results) > 0:
toret = "\n\n".join([r.get("content", "") for r in res.results[: self.k]])
else:
toret = "No good search result found"
return toret
[docs] async def arun(
self,
query: str,
engines: Optional[List[str]] = None,
query_suffix: Optional[str] = "",
**kwargs: Any,
) -> str:
"""Asynchronously version of `run`."""
_params = {
"q": query,
}
params = {**self.params, **_params, **kwargs}
if self.query_suffix and len(self.query_suffix) > 0:
params["q"] += " " + self.query_suffix
if isinstance(query_suffix, str) and len(query_suffix) > 0:
params["q"] += " " + query_suffix
if isinstance(engines, list) and len(engines) > 0:
params["engines"] = ",".join(engines)
res = await self._asearx_api_query(params)
if len(res.answers) > 0:
toret = res.answers[0]
# only return the content of the results list
elif len(res.results) > 0:
toret = "\n\n".join([r.get("content", "") for r in res.results[: self.k]])
else:
toret = "No good search result found"
return toret
[docs] def results(
self,
query: str,
num_results: int,
engines: Optional[List[str]] = None,
categories: Optional[List[str]] = None,
query_suffix: Optional[str] = "",
**kwargs: Any,
) -> List[Dict]:
"""Run query through Searx API and returns the results with metadata.
Args:
query: The query to search for.
query_suffix: Extra suffix appended to the query.
num_results: Limit the number of results to return.
engines: List of engines to use for the query.
categories: List of categories to use for the query.
**kwargs: extra parameters to pass to the searx API.
Returns:
Dict with the following keys:
{
snippet: The description of the result.
title: The title of the result.
link: The link to the result.
engines: The engines used for the result.
category: Searx category of the result.
}
"""
_params = {
"q": query,
}
params = {**self.params, **_params, **kwargs}
if self.query_suffix and len(self.query_suffix) > 0:
params["q"] += " " + self.query_suffix
if isinstance(query_suffix, str) and len(query_suffix) > 0:
params["q"] += " " + query_suffix
if isinstance(engines, list) and len(engines) > 0:
params["engines"] = ",".join(engines)
if isinstance(categories, list) and len(categories) > 0:
params["categories"] = ",".join(categories)
results = self._searx_api_query(params).results[:num_results]
if len(results) == 0:
return [{"Result": "No good Search Result was found"}]
return [
{
"snippet": result.get("content", ""),
"title": result["title"],
"link": result["url"],
"engines": result["engines"],
"category": result["category"],
}
for result in results
]
[docs] async def aresults(
self,
query: str,
num_results: int,
engines: Optional[List[str]] = None,
query_suffix: Optional[str] = "",
**kwargs: Any,
) -> List[Dict]:
"""Asynchronously query with json results.
Uses aiohttp. See `results` for more info.
"""
_params = {
"q": query,
}
params = {**self.params, **_params, **kwargs}
if self.query_suffix and len(self.query_suffix) > 0:
params["q"] += " " + self.query_suffix
if isinstance(query_suffix, str) and len(query_suffix) > 0:
params["q"] += " " + query_suffix
if isinstance(engines, list) and len(engines) > 0:
params["engines"] = ",".join(engines)
results = (await self._asearx_api_query(params)).results[:num_results]
if len(results) == 0:
return [{"Result": "No good Search Result was found"}]
return [
{
"snippet": result.get("content", ""),
"title": result["title"],
"link": result["url"],
"engines": result["engines"],
"category": result["category"],
}
for result in results
] | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html |
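A short sketch that pulls structured results from a self-hosted instance, following the Quick Start above; the host URL and query are illustrative.
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")  # illustrative host
hits = searx.results("large language models", num_results=5, engines=["google", "bing"])
print(hits[0]["title"], hits[0]["link"])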
96522c26-15f1-470e-90eb-d5f487cd7ac3 | Source code for langchain.utilities.graphql
import json
from typing import Any, Callable, Dict, Optional
from pydantic import BaseModel, Extra, root_validator
[docs]class GraphQLAPIWrapper(BaseModel):
"""Wrapper around GraphQL API.
To use, you should have the ``gql`` python package installed.
This wrapper will use the GraphQL API to conduct queries.
"""
custom_headers: Optional[Dict[str, str]] = None
graphql_endpoint: str
gql_client: Any #: :meta private:
gql_function: Callable[[str], Any] #: :meta private:
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that the python package exists in the environment."""
try:
from gql import Client, gql
from gql.transport.requests import RequestsHTTPTransport
except ImportError as e:
raise ImportError(
"Could not import gql python package. "
f"Try installing it with `pip install gql`. Received error: {e}"
)
headers = values.get("custom_headers")
transport = RequestsHTTPTransport(
url=values["graphql_endpoint"],
headers=headers,
)
client = Client(transport=transport, fetch_schema_from_transport=True)
values["gql_client"] = client
values["gql_function"] = gql
return values
[docs] def run(self, query: str) -> str:
"""Run a GraphQL query and get the results."""
result = self._execute_query(query)
return json.dumps(result, indent=2)
def _execute_query(self, query: str) -> Dict[str, Any]:
"""Execute a GraphQL query and return the results."""
document_node = self.gql_function(query)
result = self.gql_client.execute(document_node)
return result | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/graphql.html |
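A usage sketch for the wrapper above; the endpoint and query are illustrative and the gql package must be installed.
from langchain.utilities.graphql import GraphQLAPIWrapper
graphql = GraphQLAPIWrapper(graphql_endpoint="https://countries.trevorblades.com/")  # illustrative endpoint
print(graphql.run("{ countries { code name } }"))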
ab07615c-3462-4621-8f1e-74c08e9cfb71 | Source code for langchain.utilities.brave_search
import json
import requests
from pydantic import BaseModel, Field
[docs]class BraveSearchWrapper(BaseModel):
api_key: str
search_kwargs: dict = Field(default_factory=dict)
[docs] def run(self, query: str) -> str:
headers = {
"X-Subscription-Token": self.api_key,
"Accept": "application/json",
}
base_url = "https://api.search.brave.com/res/v1/web/search"
req = requests.PreparedRequest()
params = {**self.search_kwargs, **{"q": query}}
req.prepare_url(base_url, params)
if req.url is None:
raise ValueError("prepared url is None, this should not happen")
response = requests.get(req.url, headers=headers)
if not response.ok:
raise Exception(f"HTTP error {response.status_code}")
parsed_response = response.json()
web_search_results = parsed_response.get("web", {}).get("results", [])
final_results = []
if isinstance(web_search_results, list):
for item in web_search_results:
final_results.append(
{
"title": item.get("title"),
"link": item.get("url"),
"snippet": item.get("description"),
}
)
return json.dumps(final_results) | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/brave_search.html |
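A minimal sketch of the wrapper above; the api_key is a placeholder, and "count" is assumed here to be a valid Brave Search query parameter.
from langchain.utilities.brave_search import BraveSearchWrapper
brave = BraveSearchWrapper(api_key="your-brave-api-key", search_kwargs={"count": 3})  # placeholder key
print(brave.run("langchain"))  # JSON string of title/link/snippet dicts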
efe7d237-1512-4a63-8dc9-a9bda9f0ce20 | Source code for langchain.utilities.jira
"""Util that calls Jira."""
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Extra, root_validator
from langchain.tools.jira.prompt import (
JIRA_CATCH_ALL_PROMPT,
JIRA_CONFLUENCE_PAGE_CREATE_PROMPT,
JIRA_GET_ALL_PROJECTS_PROMPT,
JIRA_ISSUE_CREATE_PROMPT,
JIRA_JQL_PROMPT,
)
from langchain.utils import get_from_dict_or_env
# TODO: think about error handling, more specific api specs, and jql/project limits
[docs]class JiraAPIWrapper(BaseModel):
"""Wrapper for Jira API."""
jira: Any #: :meta private:
confluence: Any
jira_username: Optional[str] = None
jira_api_token: Optional[str] = None
jira_instance_url: Optional[str] = None
operations: List[Dict] = [
{
"mode": "jql",
"name": "JQL Query",
"description": JIRA_JQL_PROMPT,
},
{
"mode": "get_projects",
"name": "Get Projects",
"description": JIRA_GET_ALL_PROJECTS_PROMPT,
},
{
"mode": "create_issue",
"name": "Create Issue",
"description": JIRA_ISSUE_CREATE_PROMPT,
},
{
"mode": "other",
"name": "Catch all Jira API call",
"description": JIRA_CATCH_ALL_PROMPT,
},
{
"mode": "create_page",
"name": "Create confluence page",
"description": JIRA_CONFLUENCE_PAGE_CREATE_PROMPT,
},
]
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
[docs] def list(self) -> List[Dict]:
return self.operations
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
jira_username = get_from_dict_or_env(values, "jira_username", "JIRA_USERNAME")
values["jira_username"] = jira_username
jira_api_token = get_from_dict_or_env(
values, "jira_api_token", "JIRA_API_TOKEN"
)
values["jira_api_token"] = jira_api_token
jira_instance_url = get_from_dict_or_env(
values, "jira_instance_url", "JIRA_INSTANCE_URL"
)
values["jira_instance_url"] = jira_instance_url
try:
from atlassian import Confluence, Jira
except ImportError:
raise ImportError(
"atlassian-python-api is not installed. "
"Please install it with `pip install atlassian-python-api`"
)
jira = Jira(
url=jira_instance_url,
username=jira_username,
password=jira_api_token,
cloud=True,
)
confluence = Confluence(
url=jira_instance_url,
username=jira_username,
password=jira_api_token,
cloud=True,
)
values["jira"] = jira
values["confluence"] = confluence
return values
[docs] def parse_issues(self, issues: Dict) -> List[dict]:
parsed = []
for issue in issues["issues"]:
key = issue["key"]
summary = issue["fields"]["summary"]
created = issue["fields"]["created"][0:10]
priority = issue["fields"]["priority"]["name"]
status = issue["fields"]["status"]["name"]
try:
assignee = issue["fields"]["assignee"]["displayName"]
except Exception:
assignee = "None"
rel_issues = {}
for related_issue in issue["fields"]["issuelinks"]:
if "inwardIssue" in related_issue.keys():
rel_type = related_issue["type"]["inward"]
rel_key = related_issue["inwardIssue"]["key"]
rel_summary = related_issue["inwardIssue"]["fields"]["summary"]
if "outwardIssue" in related_issue.keys():
rel_type = related_issue["type"]["outward"]
rel_key = related_issue["outwardIssue"]["key"]
rel_summary = related_issue["outwardIssue"]["fields"]["summary"]
rel_issues = {"type": rel_type, "key": rel_key, "summary": rel_summary}
parsed.append(
{
"key": key,
"summary": summary,
"created": created,
"assignee": assignee,
"priority": priority,
"status": status,
"related_issues": rel_issues,
}
)
return parsed
[docs] def parse_projects(self, projects: List[dict]) -> List[dict]:
parsed = []
for project in projects:
id = project["id"]
key = project["key"]
name = project["name"]
type = project["projectTypeKey"]
style = project["style"]
parsed.append(
{"id": id, "key": key, "name": name, "type": type, "style": style}
)
return parsed
[docs] def search(self, query: str) -> str:
issues = self.jira.jql(query)
parsed_issues = self.parse_issues(issues)
parsed_issues_str = (
"Found " + str(len(parsed_issues)) + " issues:\n" + str(parsed_issues)
)
return parsed_issues_str
[docs] def project(self) -> str:
projects = self.jira.projects()
parsed_projects = self.parse_projects(projects)
parsed_projects_str = (
"Found " + str(len(parsed_projects)) + " projects:\n" + str(parsed_projects)
)
return parsed_projects_str
[docs] def issue_create(self, query: str) -> str:
# json is part of the Python standard library, so no install step is needed
import json
params = json.loads(query)
return self.jira.issue_create(fields=dict(params))
[docs] def page_create(self, query: str) -> str:
# json is part of the Python standard library, so no install step is needed
import json
params = json.loads(query)
return self.confluence.create_page(**dict(params))
[docs] def other(self, query: str) -> str:
context = {"self": self}
exec(f"result = {query}", context)
result = context["result"]
return str(result)
[docs] def run(self, mode: str, query: str) -> str:
if mode == "jql":
return self.search(query)
elif mode == "get_projects":
return self.project()
elif mode == "create_issue":
return self.issue_create(query)
elif mode == "other":
return self.other(query)
elif mode == "create_page":
return self.page_create(query)
else:
raise ValueError(f"Got unexpected mode {mode}") | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/jira.html |
be098da0-8768-4654-9db0-dbab83cea814 | Source code for langchain.utilities.spark_sql
from __future__ import annotations
from typing import TYPE_CHECKING, Any, Iterable, List, Optional
if TYPE_CHECKING:
from pyspark.sql import DataFrame, Row, SparkSession
[docs]class SparkSQL:
def __init__(
self,
spark_session: Optional[SparkSession] = None,
catalog: Optional[str] = None,
schema: Optional[str] = None,
ignore_tables: Optional[List[str]] = None,
include_tables: Optional[List[str]] = None,
sample_rows_in_table_info: int = 3,
):
try:
from pyspark.sql import SparkSession
except ImportError:
raise ValueError(
"pyspark is not installed. Please install it with `pip install pyspark`"
)
self._spark = (
spark_session if spark_session else SparkSession.builder.getOrCreate()
)
if catalog is not None:
self._spark.catalog.setCurrentCatalog(catalog)
if schema is not None:
self._spark.catalog.setCurrentDatabase(schema)
self._all_tables = set(self._get_all_table_names())
self._include_tables = set(include_tables) if include_tables else set()
if self._include_tables:
missing_tables = self._include_tables - self._all_tables
if missing_tables:
raise ValueError(
f"include_tables {missing_tables} not found in database"
)
self._ignore_tables = set(ignore_tables) if ignore_tables else set()
if self._ignore_tables:
missing_tables = self._ignore_tables - self._all_tables
if missing_tables:
raise ValueError(
f"ignore_tables {missing_tables} not found in database"
)
usable_tables = self.get_usable_table_names()
self._usable_tables = set(usable_tables) if usable_tables else self._all_tables
if not isinstance(sample_rows_in_table_info, int):
raise TypeError("sample_rows_in_table_info must be an integer")
self._sample_rows_in_table_info = sample_rows_in_table_info
[docs] @classmethod
def from_uri(
cls, database_uri: str, engine_args: Optional[dict] = None, **kwargs: Any
) -> SparkSQL:
"""Creating a remote Spark Session via Spark connect.
For example: SparkSQL.from_uri("sc://localhost:15002")
"""
try:
from pyspark.sql import SparkSession
except ImportError:
raise ValueError(
"pyspark is not installed. Please install it with `pip install pyspark`"
)
spark = SparkSession.builder.remote(database_uri).getOrCreate()
return cls(spark, **kwargs)
[docs] def get_usable_table_names(self) -> Iterable[str]:
"""Get names of tables available."""
if self._include_tables:
return self._include_tables
# sorting the result can help LLM understanding it.
return sorted(self._all_tables - self._ignore_tables)
def _get_all_table_names(self) -> Iterable[str]:
rows = self._spark.sql("SHOW TABLES").select("tableName").collect()
return list(map(lambda row: row.tableName, rows))
def _get_create_table_stmt(self, table: str) -> str:
statement = (
self._spark.sql(f"SHOW CREATE TABLE {table}").collect()[0].createtab_stmt
)
# Ignore the data source provider and options to reduce the number of tokens.
using_clause_index = statement.find("USING")
return statement[:using_clause_index] + ";"
[docs] def get_table_info(self, table_names: Optional[List[str]] = None) -> str:
all_table_names = self.get_usable_table_names()
if table_names is not None:
missing_tables = set(table_names).difference(all_table_names)
if missing_tables:
raise ValueError(f"table_names {missing_tables} not found in database")
all_table_names = table_names
tables = []
for table_name in all_table_names:
table_info = self._get_create_table_stmt(table_name)
if self._sample_rows_in_table_info:
table_info += "\n\n/*"
table_info += f"\n{self._get_sample_spark_rows(table_name)}\n"
table_info += "*/"
tables.append(table_info)
final_str = "\n\n".join(tables)
return final_str
def _get_sample_spark_rows(self, table: str) -> str:
query = f"SELECT * FROM {table} LIMIT {self._sample_rows_in_table_info}"
df = self._spark.sql(query)
columns_str = "\t".join(list(map(lambda f: f.name, df.schema.fields)))
try:
sample_rows = self._get_dataframe_results(df)
# save the sample rows in string format
sample_rows_str = "\n".join(["\t".join(row) for row in sample_rows])
except Exception:
sample_rows_str = ""
return (
f"{self._sample_rows_in_table_info} rows from {table} table:\n"
f"{columns_str}\n"
f"{sample_rows_str}"
)
def _convert_row_as_tuple(self, row: Row) -> tuple:
return tuple(map(str, row.asDict().values()))
def _get_dataframe_results(self, df: DataFrame) -> list:
return list(map(self._convert_row_as_tuple, df.collect()))
[docs] def run(self, command: str, fetch: str = "all") -> str:
df = self._spark.sql(command)
if fetch == "one":
df = df.limit(1)
return str(self._get_dataframe_results(df))
[docs] def get_table_info_no_throw(self, table_names: Optional[List[str]] = None) -> str:
"""Get information about specified tables.
Follows best practices as specified in: Rajkumar et al, 2022
(https://arxiv.org/abs/2204.00498)
If `sample_rows_in_table_info`, the specified number of sample rows will be
appended to each table description. This can increase performance as
demonstrated in the paper.
"""
try:
return self.get_table_info(table_names)
except ValueError as e:
"""Format the error message"""
return f"Error: {e}"
[docs] def run_no_throw(self, command: str, fetch: str = "all") -> str:
"""Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
If the statement throws an error, the error message is returned.
"""
try:
from pyspark.errors import PySparkException
except ImportError:
raise ValueError(
"pyspark is not installed. Please install it with `pip install pyspark`"
)
try:
return self.run(command, fetch)
except PySparkException as e:
"""Format the error message"""
return f"Error: {e}" | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/spark_sql.html |
25cc11fc-9474-4b07-a527-62c8d56208ef | Source code for langchain.utilities.wikipedia
"""Util that calls Wikipedia."""
import logging
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Extra, root_validator
from langchain.schema import Document
logger = logging.getLogger(__name__)
WIKIPEDIA_MAX_QUERY_LENGTH = 300
[docs]class WikipediaAPIWrapper(BaseModel):
"""Wrapper around WikipediaAPI.
To use, you should have the ``wikipedia`` python package installed.
This wrapper will use the Wikipedia API to conduct searches and
fetch page summaries. By default, it will return the page summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
"""
wiki_client: Any #: :meta private:
top_k_results: int = 3
lang: str = "en"
load_all_available_meta: bool = False
doc_content_chars_max: int = 4000
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that the python package exists in environment."""
try:
import wikipedia
wikipedia.set_lang(values["lang"])
values["wiki_client"] = wikipedia
except ImportError:
raise ImportError(
"Could not import wikipedia python package. "
"Please install it with `pip install wikipedia`."
)
return values
[docs] def run(self, query: str) -> str:
"""Run Wikipedia search and get page summaries."""
page_titles = self.wiki_client.search(query[:WIKIPEDIA_MAX_QUERY_LENGTH])
summaries = []
for page_title in page_titles[: self.top_k_results]:
if wiki_page := self._fetch_page(page_title):
if summary := self._formatted_page_summary(page_title, wiki_page):
summaries.append(summary)
if not summaries:
return "No good Wikipedia Search Result was found"
return "\n\n".join(summaries)[: self.doc_content_chars_max]
@staticmethod
def _formatted_page_summary(page_title: str, wiki_page: Any) -> Optional[str]:
return f"Page: {page_title}\nSummary: {wiki_page.summary}"
def _page_to_document(self, page_title: str, wiki_page: Any) -> Document:
main_meta = {
"title": page_title,
"summary": wiki_page.summary,
"source": wiki_page.url,
}
add_meta = (
{
"categories": wiki_page.categories,
"page_url": wiki_page.url,
"image_urls": wiki_page.images,
"related_titles": wiki_page.links,
"parent_id": wiki_page.parent_id,
"references": wiki_page.references,
"revision_id": wiki_page.revision_id,
"sections": wiki_page.sections,
}
if self.load_all_available_meta
else {}
)
doc = Document(
page_content=wiki_page.content[: self.doc_content_chars_max],
metadata={
**main_meta,
**add_meta,
},
)
return doc
def _fetch_page(self, page: str) -> Optional[str]:
try:
return self.wiki_client.page(title=page, auto_suggest=False)
except (
self.wiki_client.exceptions.PageError,
self.wiki_client.exceptions.DisambiguationError,
):
return None
[docs] def load(self, query: str) -> List[Document]:
"""
Run Wikipedia search and get the article text plus the meta information.
See
Returns: a list of documents.
"""
page_titles = self.wiki_client.search(query[:WIKIPEDIA_MAX_QUERY_LENGTH])
docs = []
for page_title in page_titles[: self.top_k_results]:
if wiki_page := self._fetch_page(page_title):
if doc := self._page_to_document(page_title, wiki_page):
docs.append(doc)
return docs | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/wikipedia.html |
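A minimal sketch of the wrapper above (requires `pip install wikipedia`); the queries are illustrative.
from langchain.utilities.wikipedia import WikipediaAPIWrapper
wiki = WikipediaAPIWrapper(top_k_results=2, doc_content_chars_max=1000)
print(wiki.run("Large language model"))  # concatenated page summaries
docs = wiki.load("LangChain")  # full Documents with metadata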
868942d1-bfcf-4c0e-b344-7759ae0477c1 | Source code for langchain.utilities.apify
from typing import Any, Callable, Dict, Optional
from pydantic import BaseModel, root_validator
from langchain.document_loaders import ApifyDatasetLoader
from langchain.document_loaders.base import Document
from langchain.utils import get_from_dict_or_env
[docs]class ApifyWrapper(BaseModel):
"""Wrapper around Apify.
To use, you should have the ``apify-client`` python package installed,
and the environment variable ``APIFY_API_TOKEN`` set with your API key, or pass
`apify_api_token` as a named parameter to the constructor.
"""
apify_client: Any
apify_client_async: Any
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate environment.
Validate that an Apify API token is set and the apify-client
Python package exists in the current environment.
"""
apify_api_token = get_from_dict_or_env(
values, "apify_api_token", "APIFY_API_TOKEN"
)
try:
from apify_client import ApifyClient, ApifyClientAsync
values["apify_client"] = ApifyClient(apify_api_token)
values["apify_client_async"] = ApifyClientAsync(apify_api_token)
except ImportError:
raise ValueError(
"Could not import apify-client Python package. "
"Please install it with `pip install apify-client`."
)
return values
[docs] def call_actor(
self,
actor_id: str,
run_input: Dict,
dataset_mapping_function: Callable[[Dict], Document],
*,
build: Optional[str] = None,
memory_mbytes: Optional[int] = None,
timeout_secs: Optional[int] = None,
) -> ApifyDatasetLoader:
"""Run an Actor on the Apify platform and wait for results to be ready.
Args:
actor_id (str): The ID or name of the Actor on the Apify platform.
run_input (Dict): The input object of the Actor that you're trying to run.
dataset_mapping_function (Callable): A function that takes a single
dictionary (an Apify dataset item) and converts it to an
instance of the Document class.
build (str, optional): Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional): Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional): Optional timeout for the run, in seconds.
Returns:
ApifyDatasetLoader: A loader that will fetch the records from the
Actor run's default dataset.
"""
actor_call = self.apify_client.actor(actor_id).call(
run_input=run_input,
build=build,
memory_mbytes=memory_mbytes,
timeout_secs=timeout_secs,
)
return ApifyDatasetLoader(
dataset_id=actor_call["defaultDatasetId"],
dataset_mapping_function=dataset_mapping_function,
)
[docs] async def acall_actor(
self,
actor_id: str,
run_input: Dict,
dataset_mapping_function: Callable[[Dict], Document],
*,
build: Optional[str] = None,
memory_mbytes: Optional[int] = None,
timeout_secs: Optional[int] = None,
) -> ApifyDatasetLoader:
"""Run an Actor on the Apify platform and wait for results to be ready.
Args:
actor_id (str): The ID or name of the Actor on the Apify platform.
run_input (Dict): The input object of the Actor that you're trying to run.
dataset_mapping_function (Callable): A function that takes a single
dictionary (an Apify dataset item) and converts it to
an instance of the Document class.
build (str, optional): Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional): Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional): Optional timeout for the run, in seconds.
Returns:
ApifyDatasetLoader: A loader that will fetch the records from the
Actor run's default dataset.
"""
actor_call = await self.apify_client_async.actor(actor_id).call(
run_input=run_input,
build=build,
memory_mbytes=memory_mbytes,
timeout_secs=timeout_secs,
)
return ApifyDatasetLoader(
dataset_id=actor_call["defaultDatasetId"],
dataset_mapping_function=dataset_mapping_function,
) | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/apify.html |
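A usage sketch for call_actor, assuming APIFY_API_TOKEN is exported; the Actor id, run_input fields and URL follow Apify's website-content-crawler documentation and are otherwise illustrative.
from langchain.document_loaders.base import Document
from langchain.utilities.apify import ApifyWrapper
apify = ApifyWrapper()  # reads APIFY_API_TOKEN from the environment
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",  # illustrative Actor
    run_input={"startUrls": [{"url": "https://python.langchain.com/"}]},
    dataset_mapping_function=lambda item: Document(page_content=item.get("text") or "", metadata={"source": item.get("url")}),
)
docs = loader.load()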
60e598e2-aebf-4932-b73a-efbb639097c8 | Source code for langchain.utilities.pupmed
import json
import logging
import time
import urllib.error
import urllib.parse
import urllib.request
from typing import List
from pydantic import BaseModel, Extra
from langchain.schema import Document
logger = logging.getLogger(__name__)
[docs]class PubMedAPIWrapper(BaseModel):
"""
Wrapper around PubMed API.
This wrapper will use the PubMed API to conduct searches and fetch
document summaries. By default, it will return the document summaries
of the top-k results of an input search.
Parameters:
top_k_results: number of top-scored documents returned by the PubMed tool
load_max_docs: a limit to the number of loaded documents
load_all_available_meta:
if True: the `metadata` of the loaded Documents gets all available meta info
(see https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch)
if False: the `metadata` gets only the most informative fields.
"""
base_url_esearch = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
base_url_efetch = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?"
max_retry = 5
sleep_time = 0.2
# Default values for the parameters
top_k_results: int = 3
load_max_docs: int = 25
ARXIV_MAX_QUERY_LENGTH = 300
doc_content_chars_max: int = 2000
load_all_available_meta: bool = False
email: str = "your_email@example.com"
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
[docs] def run(self, query: str) -> str:
"""
Run PubMed search and get the article meta information.
See https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch
It uses only the most informative fields of article meta information.
"""
try:
# Retrieve the top-k results for the query
docs = [
f"Published: {result['pub_date']}\nTitle: {result['title']}\n"
f"Summary: {result['summary']}"
for result in self.load(query[: self.ARXIV_MAX_QUERY_LENGTH])
]
# Join the results and limit the character count
return (
"\n\n".join(docs)[: self.doc_content_chars_max]
if docs
else "No good PubMed Result was found"
)
except Exception as ex:
return f"PubMed exception: {ex}"
[docs] def load(self, query: str) -> List[dict]:
"""
Search PubMed for documents matching the query.
Return a list of dictionaries containing the document metadata.
"""
url = (
self.base_url_esearch
+ "db=pubmed&term="
+ urllib.parse.quote(query)
+ f"&retmode=json&retmax={self.top_k_results}&usehistory=y"
)
result = urllib.request.urlopen(url)
text = result.read().decode("utf-8")
json_text = json.loads(text)
articles = []
webenv = json_text["esearchresult"]["webenv"]
for uid in json_text["esearchresult"]["idlist"]:
article = self.retrieve_article(uid, webenv)
articles.append(article)
# Convert the list of articles to a JSON string
return articles
def _transform_doc(self, doc: dict) -> Document:
summary = doc.pop("summary")
return Document(page_content=summary, metadata=doc)
[docs] def load_docs(self, query: str) -> List[Document]:
document_dicts = self.load(query=query)
return [self._transform_doc(d) for d in document_dicts]
[docs] def retrieve_article(self, uid: str, webenv: str) -> dict:
url = (
self.base_url_efetch
+ "db=pubmed&retmode=xml&id="
+ uid
+ "&webenv="
+ webenv
)
retry = 0
while True:
try:
result = urllib.request.urlopen(url)
break
except urllib.error.HTTPError as e:
if e.code == 429 and retry < self.max_retry:
# Too Many Requests error
# wait for an exponentially increasing amount of time
print(
f"Too Many Requests, "
f"waiting for {self.sleep_time:.2f} seconds..."
)
time.sleep(self.sleep_time)
self.sleep_time *= 2
retry += 1
else:
raise e
xml_text = result.read().decode("utf-8")
# Get title
title = ""
if "<ArticleTitle>" in xml_text and "</ArticleTitle>" in xml_text:
start_tag = "<ArticleTitle>"
end_tag = "</ArticleTitle>"
title = xml_text[
xml_text.index(start_tag) + len(start_tag) : xml_text.index(end_tag)
]
# Get abstract
abstract = ""
if "<AbstractText>" in xml_text and "</AbstractText>" in xml_text:
start_tag = "<AbstractText>"
end_tag = "</AbstractText>"
abstract = xml_text[
xml_text.index(start_tag) + len(start_tag) : xml_text.index(end_tag)
]
# Get publication date
pub_date = ""
if "<PubDate>" in xml_text and "</PubDate>" in xml_text:
start_tag = "<PubDate>"
end_tag = "</PubDate>"
pub_date = xml_text[
xml_text.index(start_tag) + len(start_tag) : xml_text.index(end_tag)
]
# Return article as dictionary
article = {
"uid": uid,
"title": title,
"summary": abstract,
"pub_date": pub_date,
}
return article | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/pupmed.html |
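A minimal sketch of the wrapper above; no API key is required and the query is illustrative.
from langchain.utilities.pupmed import PubMedAPIWrapper
pubmed = PubMedAPIWrapper(top_k_results=2)
print(pubmed.run("messenger RNA vaccine"))  # illustrative query
docs = pubmed.load_docs("messenger RNA vaccine")  # Documents with the abstract as page_content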
14d09fd8-2156-4876-ba8b-2abd23eb1055 | Source code for langchain.utilities.duckduckgo_search
"""Util that calls DuckDuckGo Search.
No setup required. Free.
https://pypi.org/project/duckduckgo-search/
"""
from typing import Dict, List, Optional
from pydantic import BaseModel, Extra
from pydantic.class_validators import root_validator
[docs]class DuckDuckGoSearchAPIWrapper(BaseModel):
"""Wrapper for DuckDuckGo Search API.
Free and does not require any setup
"""
k: int = 10
region: Optional[str] = "wt-wt"
safesearch: str = "moderate"
time: Optional[str] = "y"
max_results: int = 5
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that python package exists in environment."""
try:
from duckduckgo_search import DDGS # noqa: F401
except ImportError:
raise ValueError(
"Could not import duckduckgo-search python package. "
"Please install it with `pip install duckduckgo-search`."
)
return values
[docs] def get_snippets(self, query: str) -> List[str]:
"""Run query through DuckDuckGo and return concatenated results."""
from duckduckgo_search import DDGS
with DDGS() as ddgs:
results = ddgs.text(
query,
region=self.region,
safesearch=self.safesearch,
timelimit=self.time,
)
if results is None:
return ["No good DuckDuckGo Search Result was found"]
snippets = []
for i, res in enumerate(results, 1):
if res is not None:
snippets.append(res["body"])
if len(snippets) == self.max_results:
break
return snippets
[docs] def run(self, query: str) -> str:
snippets = self.get_snippets(query)
return " ".join(snippets)
[docs] def results(self, query: str, num_results: int) -> List[Dict[str, str]]:
"""Run query through DuckDuckGo and return metadata.
Args:
query: The query to search for.
num_results: The number of results to return.
Returns:
A list of dictionaries with the following keys:
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
"""
from duckduckgo_search import DDGS
with DDGS() as ddgs:
results = ddgs.text(
query,
region=self.region,
safesearch=self.safesearch,
timelimit=self.time,
)
if results is None:
return [{"Result": "No good DuckDuckGo Search Result was found"}]
def to_metadata(result: Dict) -> Dict[str, str]:
return {
"snippet": result["body"],
"title": result["title"],
"link": result["href"],
}
formatted_results = []
for i, res in enumerate(results, 1):
if res is not None:
formatted_results.append(to_metadata(res))
if len(formatted_results) == num_results:
break
return formatted_results | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/duckduckgo_search.html |
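A short sketch of the wrapper above; it needs no API key, only `pip install duckduckgo-search`, and the queries are illustrative.
from langchain.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper
ddg = DuckDuckGoSearchAPIWrapper(region="us-en", max_results=3)
print(ddg.run("LangChain DuckDuckGo wrapper"))  # concatenated snippets
print(ddg.results("open source llm", num_results=3))  # snippet/title/link dicts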
cdd185f5-ae08-47a8-8319-2af42c889aec | Source code for langchain.utilities.openapi
"""Utility functions for parsing an OpenAPI spec."""
import copy
import json
import logging
import re
from enum import Enum
from pathlib import Path
from typing import Dict, List, Optional, Union
import requests
import yaml
from openapi_schema_pydantic import (
Components,
OpenAPI,
Operation,
Parameter,
PathItem,
Paths,
Reference,
RequestBody,
Schema,
)
from pydantic import ValidationError
logger = logging.getLogger(__name__)
class HTTPVerb(str, Enum):
"""HTTP verbs."""
GET = "get"
PUT = "put"
POST = "post"
DELETE = "delete"
OPTIONS = "options"
HEAD = "head"
PATCH = "patch"
TRACE = "trace"
@classmethod
def from_str(cls, verb: str) -> "HTTPVerb":
"""Parse an HTTP verb."""
try:
return cls(verb)
except ValueError:
raise ValueError(f"Invalid HTTP verb. Valid values are {cls.__members__}")
[docs]class OpenAPISpec(OpenAPI):
"""OpenAPI Model that removes misformatted parts of the spec."""
@property
def _paths_strict(self) -> Paths:
if not self.paths:
raise ValueError("No paths found in spec")
return self.paths
def _get_path_strict(self, path: str) -> PathItem:
path_item = self._paths_strict.get(path)
if not path_item:
raise ValueError(f"No path found for {path}")
return path_item
@property
def _components_strict(self) -> Components:
"""Get components or err."""
if self.components is None:
raise ValueError("No components found in spec. ")
return self.components
@property
def _parameters_strict(self) -> Dict[str, Union[Parameter, Reference]]:
"""Get parameters or err."""
parameters = self._components_strict.parameters
if parameters is None:
raise ValueError("No parameters found in spec. ")
return parameters
@property
def _schemas_strict(self) -> Dict[str, Schema]:
"""Get the dictionary of schemas or err."""
schemas = self._components_strict.schemas
if schemas is None:
raise ValueError("No schemas found in spec. ")
return schemas
@property
def _request_bodies_strict(self) -> Dict[str, Union[RequestBody, Reference]]:
"""Get the request body or err."""
request_bodies = self._components_strict.requestBodies
if request_bodies is None:
raise ValueError("No request body found in spec. ")
return request_bodies
def _get_referenced_parameter(self, ref: Reference) -> Union[Parameter, Reference]:
"""Get a parameter (or nested reference) or err."""
ref_name = ref.ref.split("/")[-1]
parameters = self._parameters_strict
if ref_name not in parameters:
raise ValueError(f"No parameter found for {ref_name}")
return parameters[ref_name]
def _get_root_referenced_parameter(self, ref: Reference) -> Parameter:
"""Get the root reference or err."""
parameter = self._get_referenced_parameter(ref)
while isinstance(parameter, Reference):
parameter = self._get_referenced_parameter(parameter)
return parameter
[docs] def get_referenced_schema(self, ref: Reference) -> Schema:
"""Get a schema (or nested reference) or err."""
ref_name = ref.ref.split("/")[-1]
schemas = self._schemas_strict
if ref_name not in schemas:
raise ValueError(f"No schema found for {ref_name}")
return schemas[ref_name]
[docs] def get_schema(self, schema: Union[Reference, Schema]) -> Schema:
if isinstance(schema, Reference):
return self.get_referenced_schema(schema)
return schema
def _get_root_referenced_schema(self, ref: Reference) -> Schema:
"""Get the root reference or err."""
schema = self.get_referenced_schema(ref)
while isinstance(schema, Reference):
schema = self.get_referenced_schema(schema)
return schema
def _get_referenced_request_body(
self, ref: Reference
) -> Optional[Union[Reference, RequestBody]]:
"""Get a request body (or nested reference) or err."""
ref_name = ref.ref.split("/")[-1]
request_bodies = self._request_bodies_strict
if ref_name not in request_bodies:
raise ValueError(f"No request body found for {ref_name}")
return request_bodies[ref_name]
def _get_root_referenced_request_body(
self, ref: Reference
) -> Optional[RequestBody]:
"""Get the root request Body or err."""
request_body = self._get_referenced_request_body(ref)
while isinstance(request_body, Reference):
request_body = self._get_referenced_request_body(request_body)
return request_body
@staticmethod
def _alert_unsupported_spec(obj: dict) -> None:
"""Alert if the spec is not supported."""
warning_message = (
" This may result in degraded performance."
+ " Convert your OpenAPI spec to 3.1.* spec"
+ " for better support."
)
swagger_version = obj.get("swagger")
openapi_version = obj.get("openapi")
if isinstance(openapi_version, str):
if openapi_version != "3.1.0":
logger.warning(
f"Attempting to load an OpenAPI {openapi_version}"
f" spec. {warning_message}"
)
else:
pass
elif isinstance(swagger_version, str):
logger.warning(
f"Attempting to load a Swagger {swagger_version}"
f" spec. {warning_message}"
)
else:
raise ValueError(
"Attempting to load an unsupported spec:"
f"\n\n{obj}\n{warning_message}"
)
[docs] @classmethod
def parse_obj(cls, obj: dict) -> "OpenAPISpec":
try:
cls._alert_unsupported_spec(obj)
return super().parse_obj(obj)
except ValidationError as e:
# We are handling possibly misconfigured specs and want to do a best-effort
# job to get a reasonable interface out of it.
new_obj = copy.deepcopy(obj)
for error in e.errors():
keys = error["loc"]
item = new_obj
for key in keys[:-1]:
item = item[key]
item.pop(keys[-1], None)
return cls.parse_obj(new_obj)
[docs] @classmethod
def from_spec_dict(cls, spec_dict: dict) -> "OpenAPISpec":
"""Get an OpenAPI spec from a dict."""
return cls.parse_obj(spec_dict)
[docs] @classmethod
def from_text(cls, text: str) -> "OpenAPISpec":
"""Get an OpenAPI spec from a text."""
try:
spec_dict = json.loads(text)
except json.JSONDecodeError:
spec_dict = yaml.safe_load(text)
return cls.from_spec_dict(spec_dict)
[docs] @classmethod
def from_file(cls, path: Union[str, Path]) -> "OpenAPISpec":
"""Get an OpenAPI spec from a file path."""
path_ = path if isinstance(path, Path) else Path(path)
if not path_.exists():
raise FileNotFoundError(f"{path} does not exist")
with path_.open("r") as f:
return cls.from_text(f.read())
[docs] @classmethod
def from_url(cls, url: str) -> "OpenAPISpec":
"""Get an OpenAPI spec from a URL."""
response = requests.get(url)
return cls.from_text(response.text)
@property
def base_url(self) -> str:
"""Get the base url."""
return self.servers[0].url
[docs] def get_methods_for_path(self, path: str) -> List[str]:
"""Return a list of valid methods for the specified path."""
path_item = self._get_path_strict(path)
results = []
for method in HTTPVerb:
operation = getattr(path_item, method.value, None)
if isinstance(operation, Operation):
results.append(method.value)
return results
[docs] def get_parameters_for_path(self, path: str) -> List[Parameter]:
path_item = self._get_path_strict(path)
parameters = []
if not path_item.parameters:
return []
for parameter in path_item.parameters:
if isinstance(parameter, Reference):
parameter = self._get_root_referenced_parameter(parameter)
parameters.append(parameter)
return parameters
[docs] def get_operation(self, path: str, method: str) -> Operation:
"""Get the operation object for a given path and HTTP method."""
path_item = self._get_path_strict(path)
operation_obj = getattr(path_item, method, None)
if not isinstance(operation_obj, Operation):
raise ValueError(f"No {method} method found for {path}")
return operation_obj
[docs] def get_parameters_for_operation(self, operation: Operation) -> List[Parameter]:
"""Get the components for a given operation."""
parameters = []
if operation.parameters:
for parameter in operation.parameters:
if isinstance(parameter, Reference):
parameter = self._get_root_referenced_parameter(parameter)
parameters.append(parameter)
return parameters
[docs] def get_request_body_for_operation(
self, operation: Operation
) -> Optional[RequestBody]:
"""Get the request body for a given operation."""
request_body = operation.requestBody
if isinstance(request_body, Reference):
request_body = self._get_root_referenced_request_body(request_body)
return request_body
[docs] @staticmethod
def get_cleaned_operation_id(operation: Operation, path: str, method: str) -> str:
"""Get a cleaned operation id from an operation id."""
operation_id = operation.operationId
if operation_id is None:
# Replace all non-alphanumeric characters with underscores
path = re.sub(r"[^a-zA-Z0-9]", "_", path.lstrip("/"))
operation_id = f"{path}_{method}"
return operation_id.replace("-", "_").replace(".", "_").replace("/", "_")
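A minimal usage sketch of the spec helpers above; the petstore URL and the "/pet" path are illustrative placeholders, not part of this module:

from langchain.utilities.openapi import OpenAPISpec

spec = OpenAPISpec.from_url("https://petstore3.swagger.io/api/v3/openapi.json")  # any OpenAPI 3.x document
for method in spec.get_methods_for_path("/pet"):
    operation = spec.get_operation("/pet", method)
    # prints a sanitized operation id, e.g. "updatePet" or "pet_put" depending on the spec
    print(OpenAPISpec.get_cleaned_operation_id(operation, "/pet", method))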
Source code for langchain.utilities.zapier
"""Util that can interact with Zapier NLA.
Full docs here: https://nla.zapier.com/api/v1/docs
Note: this wrapper currently only implements the `api_key` auth method for testing
and server-side production use cases (using the developer's connected accounts on
Zapier.com)
For use-cases where LangChain + Zapier NLA is powering a user-facing application, and
LangChain needs access to the end-user's connected accounts on Zapier.com, you'll need
to use oauth. Review the full docs above and reach out to nla@zapier.com for
developer support.
"""
import json
from typing import Dict, List, Optional
import requests
from pydantic import BaseModel, Extra, root_validator
from requests import Request, Session
from langchain.utils import get_from_dict_or_env
[docs]class ZapierNLAWrapper(BaseModel):
"""Wrapper for Zapier NLA.
Full docs here: https://nla.zapier.com/api/v1/docs
Note: this wrapper currently only implements the `api_key` auth method for
testing and server-side production use cases (using the developer's connected
accounts on Zapier.com)
For use-cases where LangChain + Zapier NLA is powering a user-facing application,
and LangChain needs access to the end-user's connected accounts on Zapier.com,
you'll need to use oauth. Review the full docs above and reach out to
nla@zapier.com for developer support.
"""
zapier_nla_api_key: str
zapier_nla_oauth_access_token: str
zapier_nla_api_base: str = "https://nla.zapier.com/api/v1/"
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
def _get_session(self) -> Session:
session = requests.Session()
session.headers.update(
{
"Accept": "application/json",
"Content-Type": "application/json",
}
)
if self.zapier_nla_oauth_access_token:
session.headers.update(
{"Authorization": f"Bearer {self.zapier_nla_oauth_access_token}"}
)
else:
session.params = {"api_key": self.zapier_nla_api_key}
return session
def _get_action_request(
self, action_id: str, instructions: str, params: Optional[Dict] = None
) -> Request:
data = params if params else {}
data.update(
{
"instructions": instructions,
}
)
return Request(
"POST",
self.zapier_nla_api_base + f"exposed/{action_id}/execute/",
json=data,
)
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key exists in environment."""
zapier_nla_api_key_default = None
# If an oauth_access_token is passed in the values,
# we don't need an nla_api_key; it can be blank
if "zapier_nla_oauth_access_token" in values:
zapier_nla_api_key_default = ""
else:
values["zapier_nla_oauth_access_token"] = ""
# we require at least one API Key
zapier_nla_api_key = get_from_dict_or_env(
values,
"zapier_nla_api_key",
"ZAPIER_NLA_API_KEY",
zapier_nla_api_key_default,
)
values["zapier_nla_api_key"] = zapier_nla_api_key
return values
[docs] def list(self) -> List[Dict]:
"""Returns a list of all exposed (enabled) actions associated with
current user (associated with the set api_key). Change your exposed
actions here: https://nla.zapier.com/demo/start/
The return list can be empty if no actions exposed. Else will contain
a list of action objects:
[{
"id": str,
"description": str,
"params": Dict[str, str]
}]
`params` will always contain an `instructions` key, the only required
param. All others are optional and, if provided, will override any AI guesses
(see "understanding the AI guessing flow" here:
https://nla.zapier.com/api/v1/docs)
"""
session = self._get_session()
response = session.get(self.zapier_nla_api_base + "exposed/")
response.raise_for_status()
return response.json()["results"]
[docs] def run(
self, action_id: str, instructions: str, params: Optional[Dict] = None
) -> Dict:
"""Executes an action that is identified by action_id, must be exposed
(enabled) by the current user (associated with the set api_key). Change
your exposed actions here: https://nla.zapier.com/demo/start/
The return JSON is guaranteed to be less than ~500 words (350
tokens) making it safe to inject into the prompt of another LLM
call.
"""
session = self._get_session()
request = self._get_action_request(action_id, instructions, params)
response = session.send(session.prepare_request(request))
response.raise_for_status()
return response.json()["result"]
[docs] def preview(
self, action_id: str, instructions: str, params: Optional[Dict] = None
) -> Dict:
"""Same as run, but instead of actually executing the action, will
instead return a preview of params that have been guessed by the AI in
case you need to explicitly review before executing."""
session = self._get_session()
params = params if params else {}
params.update({"preview_only": True})
request = self._get_action_request(action_id, instructions, params)
response = session.send(session.prepare_request(request))
response.raise_for_status()
return response.json()["input_params"]
[docs] def run_as_str(self, *args, **kwargs) -> str: # type: ignore[no-untyped-def]
"""Same as run, but returns a stringified version of the JSON for
inserting back into an LLM."""
data = self.run(*args, **kwargs)
return json.dumps(data)
[docs] def preview_as_str(self, *args, **kwargs) -> str: # type: ignore[no-untyped-def]
"""Same as preview, but returns a stringified version of the JSON for
inserting back into an LLM."""
data = self.preview(*args, **kwargs)
return json.dumps(data)
[docs] def list_as_str(self) -> str: # type: ignore[no-untyped-def]
"""Same as list, but returns a stringified version of the JSON for
inserting back into an LLM."""
actions = self.list()
return json.dumps(actions)
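A minimal usage sketch, assuming ZAPIER_NLA_API_KEY is set in the environment and at least one action has been exposed at https://nla.zapier.com/demo/start/; the instructions string is illustrative:

from langchain.utilities.zapier import ZapierNLAWrapper

zapier = ZapierNLAWrapper()            # api key read from the environment
actions = zapier.list()                # [{"id": ..., "description": ..., "params": ...}, ...]
if actions:
    result = zapier.run(actions[0]["id"], "Send a message saying hello")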
Source code for langchain.utilities.scenexplain
"""Util that calls SceneXplain.
In order to set this up, you need an API key for the SceneXplain API.
You can obtain a key by following the steps below.
- Sign up for a free account at https://scenex.jina.ai/.
- Navigate to the API Access page (https://scenex.jina.ai/api) and create a new API key.
"""
from typing import Dict
import requests
from pydantic import BaseModel, BaseSettings, Field, root_validator
from langchain.utils import get_from_dict_or_env
[docs]class SceneXplainAPIWrapper(BaseSettings, BaseModel):
"""Wrapper for SceneXplain API.
In order to set this up, you need an API key for the SceneXplain API.
You can obtain a key by following the steps below.
- Sign up for a free account at https://scenex.jina.ai/.
- Navigate to the API Access page (https://scenex.jina.ai/api)
and create a new API key.
"""
scenex_api_key: str = Field(..., env="SCENEX_API_KEY")
scenex_api_url: str = (
"https://us-central1-causal-diffusion.cloudfunctions.net/describe"
)
def _describe_image(self, image: str) -> str:
headers = {
"x-api-key": f"token {self.scenex_api_key}",
"content-type": "application/json",
}
payload = {
"data": [
{
"image": image,
"algorithm": "Ember",
"languages": ["en"],
}
]
}
response = requests.post(self.scenex_api_url, headers=headers, json=payload)
response.raise_for_status()
result = response.json().get("result", [])
img = result[0] if result else {}
return img.get("text", "")
[docs] @root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key exists in environment."""
scenex_api_key = get_from_dict_or_env(
values, "scenex_api_key", "SCENEX_API_KEY"
)
values["scenex_api_key"] = scenex_api_key
return values
[docs] def run(self, image: str) -> str:
"""Run SceneXplain image explainer."""
description = self._describe_image(image)
if not description:
return "No description found."
return description
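A minimal usage sketch, assuming SCENEX_API_KEY is set in the environment; the image URL is a placeholder:

from langchain.utilities.scenexplain import SceneXplainAPIWrapper

scenex = SceneXplainAPIWrapper()                       # api key read from the environment
print(scenex.run("https://example.com/photo.jpg"))     # textual description of the image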
Source code for langchain.utilities.wolfram_alpha
"""Util that calls WolframAlpha."""
from typing import Any, Dict, Optional
from pydantic import BaseModel, Extra, root_validator
from langchain.utils import get_from_dict_or_env
[docs]class WolframAlphaAPIWrapper(BaseModel):
"""Wrapper for Wolfram Alpha.
Docs for using:
1. Go to Wolfram Alpha and sign up for a developer account
2. Create an app and get your APP ID
3. Save your APP ID into WOLFRAM_ALPHA_APPID env variable
4. pip install wolframalpha
"""
wolfram_client: Any #: :meta private:
wolfram_alpha_appid: Optional[str] = None
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
wolfram_alpha_appid = get_from_dict_or_env(
values, "wolfram_alpha_appid", "WOLFRAM_ALPHA_APPID"
)
values["wolfram_alpha_appid"] = wolfram_alpha_appid
try:
import wolframalpha
except ImportError:
raise ImportError(
"wolframalpha is not installed. "
"Please install it with `pip install wolframalpha`"
)
client = wolframalpha.Client(wolfram_alpha_appid)
values["wolfram_client"] = client
return values
[docs] def run(self, query: str) -> str:
"""Run query through WolframAlpha and parse result."""
res = self.wolfram_client.query(query)
try:
assumption = next(res.pods).text
answer = next(res.results).text
except StopIteration:
return "Wolfram Alpha wasn't able to answer it"
if answer is None or answer == "":
# We don't want to return the assumption alone if answer is empty
return "No good Wolfram Alpha Result was found"
else:
return f"Assumption: {assumption} \nAnswer: {answer}" | https://api.python.langchain.com/en/latest/_modules/langchain/utilities/wolfram_alpha.html |
Source code for langchain.utilities.google_serper
"""Util that calls Google Search using the Serper.dev API."""
from typing import Any, Dict, List, Optional
import aiohttp
import requests
from pydantic.class_validators import root_validator
from pydantic.main import BaseModel
from typing_extensions import Literal
from langchain.utils import get_from_dict_or_env
[docs]class GoogleSerperAPIWrapper(BaseModel):
"""Wrapper around the Serper.dev Google Search API.
You can create a free API key at https://serper.dev.
To use, you should have the environment variable ``SERPER_API_KEY``
set with your API key, or pass `serper_api_key` as a named parameter
to the constructor.
Example:
.. code-block:: python
from langchain import GoogleSerperAPIWrapper
google_serper = GoogleSerperAPIWrapper()
"""
k: int = 10
gl: str = "us"
hl: str = "en"
# "places" and "images" is available from Serper but not implemented in the
# parser of run(). They can be used in results()
type: Literal["news", "search", "places", "images"] = "search"
result_key_for_type = {
"news": "news",
"places": "places",
"images": "images",
"search": "organic",
}
tbs: Optional[str] = None
serper_api_key: Optional[str] = None
aiosession: Optional[aiohttp.ClientSession] = None
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key exists in environment."""
serper_api_key = get_from_dict_or_env(
values, "serper_api_key", "SERPER_API_KEY"
)
values["serper_api_key"] = serper_api_key
return values
[docs] def results(self, query: str, **kwargs: Any) -> Dict:
"""Run query through GoogleSearch."""
return self._google_serper_api_results(
query,
gl=self.gl,
hl=self.hl,
num=self.k,
tbs=self.tbs,
search_type=self.type,
**kwargs,
)
[docs] def run(self, query: str, **kwargs: Any) -> str:
"""Run query through GoogleSearch and parse result."""
results = self._google_serper_api_results(
query,
gl=self.gl,
hl=self.hl,
num=self.k,
tbs=self.tbs,
search_type=self.type,
**kwargs,
)
return self._parse_results(results)
[docs] async def aresults(self, query: str, **kwargs: Any) -> Dict:
"""Run query through GoogleSearch."""
results = await self._async_google_serper_search_results(
query,
gl=self.gl,
hl=self.hl,
num=self.k,
search_type=self.type,
tbs=self.tbs,
**kwargs,
)
return results
[docs] async def arun(self, query: str, **kwargs: Any) -> str:
"""Run query through GoogleSearch and parse result async."""
results = await self._async_google_serper_search_results(
query,
gl=self.gl,
hl=self.hl,
num=self.k,
search_type=self.type,
tbs=self.tbs,
**kwargs,
)
return self._parse_results(results)
def _parse_snippets(self, results: dict) -> List[str]:
snippets = []
if results.get("answerBox"):
answer_box = results.get("answerBox", {})
if answer_box.get("answer"):
return [answer_box.get("answer")]
elif answer_box.get("snippet"):
return [answer_box.get("snippet").replace("\n", " ")]
elif answer_box.get("snippetHighlighted"):
return answer_box.get("snippetHighlighted")
if results.get("knowledgeGraph"):
kg = results.get("knowledgeGraph", {})
title = kg.get("title")
entity_type = kg.get("type")
if entity_type:
snippets.append(f"{title}: {entity_type}.")
description = kg.get("description")
if description:
snippets.append(description)
for attribute, value in kg.get("attributes", {}).items():
snippets.append(f"{title} {attribute}: {value}.")
for result in results[self.result_key_for_type[self.type]][: self.k]:
if "snippet" in result:
snippets.append(result["snippet"])
for attribute, value in result.get("attributes", {}).items():
snippets.append(f"{attribute}: {value}.")
if len(snippets) == 0:
return ["No good Google Search Result was found"]
return snippets
def _parse_results(self, results: dict) -> str:
return " ".join(self._parse_snippets(results))
def _google_serper_api_results(
self, search_term: str, search_type: str = "search", **kwargs: Any
) -> dict:
headers = {
"X-API-KEY": self.serper_api_key or "",
"Content-Type": "application/json",
}
params = {
"q": search_term,
**{key: value for key, value in kwargs.items() if value is not None},
}
response = requests.post(
f"https://google.serper.dev/{search_type}", headers=headers, params=params
)
response.raise_for_status()
search_results = response.json()
return search_results
async def _async_google_serper_search_results(
self, search_term: str, search_type: str = "search", **kwargs: Any
) -> dict:
headers = {
"X-API-KEY": self.serper_api_key or "",
"Content-Type": "application/json",
}
url = f"https://google.serper.dev/{search_type}"
params = {
"q": search_term,
**{key: value for key, value in kwargs.items() if value is not None},
}
if not self.aiosession:
async with aiohttp.ClientSession() as session:
async with session.post(
url, params=params, headers=headers, raise_for_status=False
) as response:
search_results = await response.json()
else:
async with self.aiosession.post(
url, params=params, headers=headers, raise_for_status=True
) as response:
search_results = await response.json()
return search_results
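A minimal usage sketch, assuming SERPER_API_KEY is set in the environment; the queries are placeholders:

from langchain.utilities.google_serper import GoogleSerperAPIWrapper

search = GoogleSerperAPIWrapper(k=5)
print(search.run("LangChain"))                 # space-joined snippets
news = GoogleSerperAPIWrapper(type="news")
raw = news.results("LangChain")                # raw JSON; parsed entries live under the "news" key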
Source code for langchain.utilities.twilio
"""Util that calls Twilio."""
from typing import Any, Dict, Optional
from pydantic import BaseModel, Extra, root_validator
from langchain.utils import get_from_dict_or_env
[docs]class TwilioAPIWrapper(BaseModel):
"""Messaging Client using Twilio.
To use, you should have the ``twilio`` python package installed,
and the environment variables ``TWILIO_ACCOUNT_SID``, ``TWILIO_AUTH_TOKEN``, and
``TWILIO_FROM_NUMBER``, or pass `account_sid`, `auth_token`, and `from_number` as
named parameters to the constructor.
Example:
.. code-block:: python
from langchain.utilities.twilio import TwilioAPIWrapper
twilio = TwilioAPIWrapper(
account_sid="ACxxx",
auth_token="xxx",
from_number="+10123456789"
)
twilio.run('test', '+12484345508')
"""
client: Any #: :meta private:
account_sid: Optional[str] = None
"""Twilio account string identifier."""
auth_token: Optional[str] = None
"""Twilio auth token."""
from_number: Optional[str] = None
"""A Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164)
format, an
[alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id),
or a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses)
that is enabled for the type of message you want to send. Phone numbers or
[short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from
Twilio also work here. You cannot, for example, spoof messages from a private
cell phone number. If you are using `messaging_service_sid`, this parameter
must be empty.
""" # noqa: E501
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = False
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
try:
from twilio.rest import Client
except ImportError:
raise ImportError(
"Could not import twilio python package. "
"Please install it with `pip install twilio`."
)
account_sid = get_from_dict_or_env(values, "account_sid", "TWILIO_ACCOUNT_SID")
auth_token = get_from_dict_or_env(values, "auth_token", "TWILIO_AUTH_TOKEN")
values["from_number"] = get_from_dict_or_env(
values, "from_number", "TWILIO_FROM_NUMBER"
)
values["client"] = Client(account_sid, auth_token)
return values
[docs] def run(self, body: str, to: str) -> str:
"""Run body through Twilio and respond with message sid.
Args:
body: The text of the message you want to send. Can be up to 1,600
characters in length.
to: The destination phone number in
[E.164](https://www.twilio.com/docs/glossary/what-e164) format for
SMS/MMS or
[Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses)
for other 3rd-party channels.
""" # noqa: E501
message = self.client.messages.create(to, from_=self.from_number, body=body)
return message.sid
Source code for langchain.utilities.google_places_api
"""Chain that calls Google Places API.
"""
import logging
from typing import Any, Dict, Optional
from pydantic import BaseModel, Extra, root_validator
from langchain.utils import get_from_dict_or_env
[docs]class GooglePlacesAPIWrapper(BaseModel):
"""Wrapper around Google Places API.
To use, you should have the ``googlemaps`` python package installed,
**an API key for the Google Maps Platform**,
and the environment variable ``GPLACES_API_KEY``
set with your API key, or pass `gplaces_api_key`
as a named parameter to the constructor.
By default, this will return all the results for the input query.
You can use the top_k_results argument to limit the number of results.
Example:
.. code-block:: python
from langchain import GooglePlacesAPIWrapper
gplaceapi = GooglePlacesAPIWrapper()
"""
gplaces_api_key: Optional[str] = None
google_map_client: Any #: :meta private:
top_k_results: Optional[int] = None
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key is in your environment variable."""
gplaces_api_key = get_from_dict_or_env(
values, "gplaces_api_key", "GPLACES_API_KEY"
)
values["gplaces_api_key"] = gplaces_api_key
try:
import googlemaps
values["google_map_client"] = googlemaps.Client(gplaces_api_key)
except ImportError:
raise ImportError(
"Could not import googlemaps python package. "
"Please install it with `pip install googlemaps`."
)
return values
[docs] def run(self, query: str) -> str:
"""Run Places search and get k number of places that exists that match."""
search_results = self.google_map_client.places(query)["results"]
num_to_return = len(search_results)
places = []
if num_to_return == 0:
return "Google Places did not find any places that match the description"
num_to_return = (
num_to_return
if self.top_k_results is None
else min(num_to_return, self.top_k_results)
)
for i in range(num_to_return):
result = search_results[i]
details = self.fetch_place_details(result["place_id"])
if details is not None:
places.append(details)
return "\n".join([f"{i+1}. {item}" for i, item in enumerate(places)])
[docs] def fetch_place_details(self, place_id: str) -> Optional[str]:
try:
place_details = self.google_map_client.place(place_id)
formatted_details = self.format_place_details(place_details)
return formatted_details
except Exception as e:
logging.error(f"An Error occurred while fetching place details: {e}")
return None
[docs] def format_place_details(self, place_details: Dict[str, Any]) -> Optional[str]:
try:
name = place_details.get("result", {}).get("name", "Unknown")
address = place_details.get("result", {}).get(
"formatted_address", "Unknown"
)
phone_number = place_details.get("result", {}).get(
"formatted_phone_number", "Unknown"
)
website = place_details.get("result", {}).get("website", "Unknown")
formatted_details = (
f"{name}\nAddress: {address}\n"
f"Phone: {phone_number}\nWebsite: {website}\n\n"
)
return formatted_details
except Exception as e:
logging.error(f"An error occurred while formatting place details: {e}")
return None
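A minimal usage sketch, assuming GPLACES_API_KEY is set and the `googlemaps` package is installed; the query is a placeholder:

from langchain.utilities.google_places_api import GooglePlacesAPIWrapper

places = GooglePlacesAPIWrapper(top_k_results=3)
print(places.run("coffee near Union Square, San Francisco"))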
Source code for langchain.document_loaders.tomarkdown
"""Loader that loads HTML to markdown using 2markdown."""
from __future__ import annotations
from typing import Iterator, List
import requests
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
[docs]class ToMarkdownLoader(BaseLoader):
"""Loader that loads HTML to markdown using 2markdown."""
def __init__(self, url: str, api_key: str):
"""Initialize with url and api key."""
self.url = url
self.api_key = api_key
[docs] def lazy_load(
self,
) -> Iterator[Document]:
"""Lazily load the file."""
response = requests.post(
"https://2markdown.com/api/2md",
headers={"X-Api-Key": self.api_key},
json={"url": self.url},
)
text = response.json()["article"]
metadata = {"source": self.url}
yield Document(page_content=text, metadata=metadata)
[docs] def load(self) -> List[Document]:
"""Load file."""
return list(self.lazy_load())
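A minimal usage sketch; the URL is illustrative and the API key is a placeholder for your own 2markdown credentials:

from langchain.document_loaders.tomarkdown import ToMarkdownLoader

loader = ToMarkdownLoader(url="https://python.langchain.com", api_key="<2markdown-api-key>")
docs = loader.load()                 # a single Document with the page converted to markdown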
Source code for langchain.document_loaders.conllu
"""Load CoNLL-U files."""
import csv
from typing import List
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
[docs]class CoNLLULoader(BaseLoader):
"""Load CoNLL-U files."""
def __init__(self, file_path: str):
"""Initialize with file path."""
self.file_path = file_path
[docs] def load(self) -> List[Document]:
"""Load from file path."""
with open(self.file_path, encoding="utf8") as f:
tsv = list(csv.reader(f, delimiter="\t"))
# If len(line) > 1, the line is not a comment
lines = [line for line in tsv if len(line) > 1]
text = ""
for i, line in enumerate(lines):
# Do not add a space after a punctuation mark or at the end of the sentence
if line[9] == "SpaceAfter=No" or i == len(lines) - 1:
text += line[1]
else:
text += line[1] + " "
metadata = {"source": self.file_path}
return [Document(page_content=text, metadata=metadata)]
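A minimal usage sketch; "example.conllu" is a hypothetical path to a CoNLL-U file:

from langchain.document_loaders.conllu import CoNLLULoader

loader = CoNLLULoader("example.conllu")
docs = loader.load()                 # one Document reconstructing the sentence text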
Source code for langchain.document_loaders.toml
import json
from pathlib import Path
from typing import Iterator, List, Union
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
[docs]class TomlLoader(BaseLoader):
"""
A TOML document loader that inherits from the BaseLoader class.
This class can be initialized with either a single source file or a source
directory containing TOML files.
"""
def __init__(self, source: Union[str, Path]):
"""Initialize the TomlLoader with a source file or directory."""
self.source = Path(source)
[docs] def load(self) -> List[Document]:
"""Load and return all documents."""
return list(self.lazy_load())
[docs] def lazy_load(self) -> Iterator[Document]:
"""Lazily load the TOML documents from the source file or directory."""
import tomli
if self.source.is_file() and self.source.suffix == ".toml":
files = [self.source]
elif self.source.is_dir():
files = list(self.source.glob("**/*.toml"))
else:
raise ValueError("Invalid source path or file type")
for file_path in files:
with file_path.open("r", encoding="utf-8") as file:
content = file.read()
try:
data = tomli.loads(content)
doc = Document(
page_content=json.dumps(data),
metadata={"source": str(file_path)},
)
yield doc
except tomli.TOMLDecodeError as e:
print(f"Error parsing TOML file {file_path}: {e}") | https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/toml.html |
Source code for langchain.document_loaders.url_playwright
"""Loader that uses Playwright to load a page, then uses unstructured to load the html.
"""
import logging
from typing import List, Optional
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
logger = logging.getLogger(__name__)
[docs]class PlaywrightURLLoader(BaseLoader):
"""Loader that uses Playwright and to load a page and unstructured to load the html.
This is useful for loading pages that require javascript to render.
Attributes:
urls (List[str]): List of URLs to load.
continue_on_failure (bool): If True, continue loading other URLs on failure.
headless (bool): If True, the browser will run in headless mode.
"""
def __init__(
self,
urls: List[str],
continue_on_failure: bool = True,
headless: bool = True,
remove_selectors: Optional[List[str]] = None,
):
"""Load a list of URLs using Playwright and unstructured."""
try:
import playwright # noqa:F401
except ImportError:
raise ImportError(
"playwright package not found, please install it with "
"`pip install playwright`"
)
try:
import unstructured # noqa:F401
except ImportError:
raise ValueError(
"unstructured package not found, please install it with "
"`pip install unstructured`"
)
self.urls = urls
self.continue_on_failure = continue_on_failure
self.headless = headless
self.remove_selectors = remove_selectors
[docs] def load(self) -> List[Document]:
"""Load the specified URLs using Playwright and create Document instances.
Returns:
List[Document]: A list of Document instances with loaded content.
"""
from playwright.sync_api import sync_playwright
from unstructured.partition.html import partition_html
docs: List[Document] = list()
with sync_playwright() as p:
browser = p.chromium.launch(headless=self.headless)
for url in self.urls:
try:
page = browser.new_page()
page.goto(url)
for selector in self.remove_selectors or []:
elements = page.locator(selector).all()
for element in elements:
if element.is_visible():
element.evaluate("element => element.remove()")
page_source = page.content()
elements = partition_html(text=page_source)
text = "\n\n".join([str(el) for el in elements])
metadata = {"source": url}
docs.append(Document(page_content=text, metadata=metadata))
except Exception as e:
if self.continue_on_failure:
logger.error(
f"Error fetching or processing {url}, exception: {e}"
)
else:
raise e
browser.close()
return docs
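A minimal usage sketch, assuming `playwright` and `unstructured` are installed and a Chromium browser has been set up (e.g. via `playwright install chromium`); the URL and selectors are placeholders:

from langchain.document_loaders.url_playwright import PlaywrightURLLoader

loader = PlaywrightURLLoader(
    urls=["https://python.langchain.com"],
    remove_selectors=["header", "footer"],   # strip page chrome before partitioning
)
docs = loader.load()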
Source code for langchain.document_loaders.gcs_directory
"""Loading logic for loading documents from an GCS directory."""
from typing import List
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
from langchain.document_loaders.gcs_file import GCSFileLoader
[docs]class GCSDirectoryLoader(BaseLoader):
"""Loading logic for loading documents from GCS."""
def __init__(self, project_name: str, bucket: str, prefix: str = ""):
"""Initialize with bucket and key name."""
self.project_name = project_name
self.bucket = bucket
self.prefix = prefix
[docs] def load(self) -> List[Document]:
"""Load documents."""
try:
from google.cloud import storage
except ImportError:
raise ValueError(
"Could not import google-cloud-storage python package. "
"Please install it with `pip install google-cloud-storage`."
)
client = storage.Client(project=self.project_name)
docs = []
for blob in client.list_blobs(self.bucket, prefix=self.prefix):
# we shall just skip directories since GCSFileLoader creates
# intermediate directories on the fly
if blob.name.endswith("/"):
continue
loader = GCSFileLoader(self.project_name, self.bucket, blob.name)
docs.extend(loader.load())
return docs
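A minimal usage sketch, assuming `google-cloud-storage` is installed and application-default credentials are configured; the project, bucket, and prefix names are placeholders:

from langchain.document_loaders.gcs_directory import GCSDirectoryLoader

loader = GCSDirectoryLoader(project_name="my-project", bucket="my-bucket", prefix="reports/")
docs = loader.load()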
Source code for langchain.document_loaders.joplin
import json
import urllib
from datetime import datetime
from typing import Iterator, List, Optional
from langchain.document_loaders.base import BaseLoader
from langchain.schema import Document
from langchain.utils import get_from_env
LINK_NOTE_TEMPLATE = "joplin://x-callback-url/openNote?id={id}"
[docs]class JoplinLoader(BaseLoader):
"""
Loader that fetches notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for "Web Clipper" in the app settings).
To get the access token, you need to go to the Web Clipper options and
under "Advanced Options" you will find the access token.
You can find more information about the Web Clipper service here:
https://joplinapp.org/clipper/
"""
def __init__(
self,
access_token: Optional[str] = None,
port: int = 41184,
host: str = "localhost",
) -> None:
access_token = access_token or get_from_env(
"access_token", "JOPLIN_ACCESS_TOKEN"
)
base_url = f"http://{host}:{port}"
self._get_note_url = (
f"{base_url}/notes?token={access_token}"
f"&fields=id,parent_id,title,body,created_time,updated_time&page={{page}}"
)
self._get_folder_url = (
f"{base_url}/folders/{{id}}?token={access_token}&fields=title"
)
self._get_tag_url = (
f"{base_url}/notes/{{id}}/tags?token={access_token}&fields=title"
)
def _get_notes(self) -> Iterator[Document]:
has_more = True
page = 1
while has_more:
req_note = urllib.request.Request(self._get_note_url.format(page=page))
with urllib.request.urlopen(req_note) as response:
json_data = json.loads(response.read().decode())
for note in json_data["items"]:
metadata = {
"source": LINK_NOTE_TEMPLATE.format(id=note["id"]),
"folder": self._get_folder(note["parent_id"]),
"tags": self._get_tags(note["id"]),
"title": note["title"],
"created_time": self._convert_date(note["created_time"]),
"updated_time": self._convert_date(note["updated_time"]),
}
yield Document(page_content=note["body"], metadata=metadata)
has_more = json_data["has_more"]
page += 1
def _get_folder(self, folder_id: str) -> str:
req_folder = urllib.request.Request(self._get_folder_url.format(id=folder_id))
with urllib.request.urlopen(req_folder) as response:
json_data = json.loads(response.read().decode())
return json_data["title"]
def _get_tags(self, note_id: str) -> List[str]:
req_tag = urllib.request.Request(self._get_tag_url.format(id=note_id))
with urllib.request.urlopen(req_tag) as response:
json_data = json.loads(response.read().decode())
return [tag["title"] for tag in json_data["items"]]
def _convert_date(self, date: int) -> str:
return datetime.fromtimestamp(date / 1000).strftime("%Y-%m-%d %H:%M:%S")
[docs] def lazy_load(self) -> Iterator[Document]:
yield from self._get_notes()
[docs] def load(self) -> List[Document]:
return list(self.lazy_load())
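A minimal usage sketch, assuming Joplin is running locally with the Web Clipper service enabled; the token is a placeholder (alternatively, set JOPLIN_ACCESS_TOKEN in the environment):

from langchain.document_loaders.joplin import JoplinLoader

loader = JoplinLoader(access_token="<web-clipper-token>")
for doc in loader.lazy_load():
    print(doc.metadata["title"], doc.metadata["folder"])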
Source code for langchain.document_loaders.powerpoint
"""Loader that loads powerpoint files."""
import os
from typing import List
from langchain.document_loaders.unstructured import UnstructuredFileLoader
[docs]class UnstructuredPowerPointLoader(UnstructuredFileLoader):
"""Loader that uses unstructured to load powerpoint files."""
def _get_elements(self) -> List:
from unstructured.__version__ import __version__ as __unstructured_version__
from unstructured.file_utils.filetype import FileType, detect_filetype
unstructured_version = tuple(
[int(x) for x in __unstructured_version__.split(".")]
)
# NOTE(MthwRobinson) - magic will raise an import error if the libmagic
# system dependency isn't installed. If it's not installed, we'll just
# check the file extension
try:
import magic # noqa: F401
is_ppt = detect_filetype(self.file_path) == FileType.PPT
except ImportError:
_, extension = os.path.splitext(str(self.file_path))
is_ppt = extension == ".ppt"
if is_ppt and unstructured_version < (0, 4, 11):
raise ValueError(
f"You are on unstructured version {__unstructured_version__}. "
"Partitioning .ppt files is only supported in unstructured>=0.4.11. "
"Please upgrade the unstructured package and try again."
)
if is_ppt:
from unstructured.partition.ppt import partition_ppt
return partition_ppt(filename=self.file_path, **self.unstructured_kwargs)
else:
from unstructured.partition.pptx import partition_pptx
return partition_pptx(filename=self.file_path, **self.unstructured_kwargs)
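A minimal usage sketch, assuming `unstructured` and its PowerPoint dependencies are installed; "slides.pptx" is a hypothetical path:

from langchain.document_loaders.powerpoint import UnstructuredPowerPointLoader

loader = UnstructuredPowerPointLoader("slides.pptx")
docs = loader.load()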
Source code for langchain.document_loaders.snowflake_loader
from __future__ import annotations
from typing import Any, Dict, Iterator, List, Optional, Tuple
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
[docs]class SnowflakeLoader(BaseLoader):
"""Loads a query result from Snowflake into a list of documents.
Each document represents one row of the result. The `page_content_columns`
are written into the `page_content` of the document. The `metadata_columns`
are written into the `metadata` of the document. By default, all columns
are written into the `page_content` and none into the `metadata`.
"""
def __init__(
self,
query: str,
user: str,
password: str,
account: str,
warehouse: str,
role: str,
database: str,
schema: str,
parameters: Optional[Dict[str, Any]] = None,
page_content_columns: Optional[List[str]] = None,
metadata_columns: Optional[List[str]] = None,
):
"""Initialize Snowflake document loader.
Args:
query: The query to run in Snowflake.
user: Snowflake user.
password: Snowflake password.
account: Snowflake account.
warehouse: Snowflake warehouse.
role: Snowflake role.
database: Snowflake database.
schema: Snowflake schema.
page_content_columns: Optional. Columns written to Document `page_content`.
metadata_columns: Optional. Columns written to Document `metadata`.
"""
self.query = query
self.user = user
self.password = password
self.account = account
self.warehouse = warehouse
self.role = role
self.database = database
self.schema = schema
self.parameters = parameters
self.page_content_columns = (
page_content_columns if page_content_columns is not None else ["*"]
)
self.metadata_columns = metadata_columns if metadata_columns is not None else []
def _execute_query(self) -> List[Dict[str, Any]]:
try:
import snowflake.connector
except ImportError as ex:
raise ValueError(
"Could not import snowflake-connector-python package. "
"Please install it with `pip install snowflake-connector-python`."
) from ex
conn = snowflake.connector.connect(
user=self.user,
password=self.password,
account=self.account,
warehouse=self.warehouse,
role=self.role,
database=self.database,
schema=self.schema,
parameters=self.parameters,
)
try:
cur = conn.cursor()
cur.execute("USE DATABASE " + self.database)
cur.execute("USE SCHEMA " + self.schema)
cur.execute(self.query, self.parameters)
query_result = cur.fetchall()
column_names = [column[0] for column in cur.description]
query_result = [dict(zip(column_names, row)) for row in query_result]
except Exception as e:
print(f"An error occurred: {e}")
query_result = []
finally:
cur.close()
return query_result
def _get_columns(
self, query_result: List[Dict[str, Any]]
) -> Tuple[List[str], List[str]]:
page_content_columns = (
self.page_content_columns if self.page_content_columns else []
)
metadata_columns = self.metadata_columns if self.metadata_columns else []
if page_content_columns is None and query_result:
page_content_columns = list(query_result[0].keys())
if metadata_columns is None:
metadata_columns = []
return page_content_columns or [], metadata_columns
[docs] def lazy_load(self) -> Iterator[Document]:
query_result = self._execute_query()
if isinstance(query_result, Exception):
print(f"An error occurred during the query: {query_result}")
return []
page_content_columns, metadata_columns = self._get_columns(query_result)
if "*" in page_content_columns:
page_content_columns = list(query_result[0].keys())
for row in query_result:
page_content = "\n".join(
f"{k}: {v}" for k, v in row.items() if k in page_content_columns
)
metadata = {k: v for k, v in row.items() if k in metadata_columns}
doc = Document(page_content=page_content, metadata=metadata)
yield doc
[docs] def load(self) -> List[Document]:
"""Load data into document objects."""
return list(self.lazy_load())
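A minimal usage sketch, assuming `snowflake-connector-python` is installed; all connection values, the query, and the column names are placeholders:

from langchain.document_loaders.snowflake_loader import SnowflakeLoader

loader = SnowflakeLoader(
    query="SELECT * FROM MY_TABLE LIMIT 10",
    user="<user>", password="<password>", account="<account>",
    warehouse="COMPUTE_WH", role="PUBLIC",
    database="MY_DB", schema="PUBLIC",
    metadata_columns=["ID"],      # all columns go to page_content by default; ID also lands in metadata
)
docs = loader.load()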