Transformers Agents is an experimental API which is subject to change at any time. Results returned by the agents can vary, as the APIs or the underlying models are prone to change.
To learn more about agents and tools, make sure to read the introductory guide. This page contains the API documentation for the underlying classes.
We provide three types of agents: HfAgent uses inference endpoints for open-source models, LocalAgent uses a model of your choice locally, and OpenAiAgent uses OpenAI's closed models.
class transformers.HfAgent

( url_endpoint, token = None, chat_prompt_template = None, run_prompt_template = None, additional_tools = None )

Parameters

- url_endpoint (str) — The name of the url endpoint to use.
- token (str, optional) — The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
- chat_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the chat method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). In that case, the prompt should be in a file named chat_prompt_template.txt in this repo.
- run_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the run method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). In that case, the prompt should be in a file named run_prompt_template.txt in this repo.
- additional_tools (Tool, list of tools or dictionary with tool values, optional) — Any additional tools to include on top of the default ones. If you pass along a tool with the same name as one of the default tools, that default tool will be overridden.

Agent that uses an inference endpoint to generate code.
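Example (the StarCoder endpoint url below is one public option; any compatible text-generation endpoint works):

from transformers import HfAgent

# Point the agent at a text-generation inference endpoint.
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
agent.run("Is the following `text` (in Spanish) positive or negative?", text="¡Este es un API muy agradable!")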
class transformers.LocalAgent

( model, tokenizer, chat_prompt_template = None, run_prompt_template = None, additional_tools = None )

Parameters

- model (PreTrainedModel) — The model to use for the agent.
- tokenizer (PreTrainedTokenizer) — The tokenizer to use for the agent.
- chat_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the chat method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). In that case, the prompt should be in a file named chat_prompt_template.txt in this repo.
- run_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the run method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). In that case, the prompt should be in a file named run_prompt_template.txt in this repo.
- additional_tools (Tool, list of tools or dictionary with tool values, optional) — Any additional tools to include on top of the default ones. If you pass along a tool with the same name as one of the default tools, that default tool will be overridden.

Agent that uses a local model and tokenizer to generate code.

Example:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LocalAgent

# Load the StarCoder checkpoint locally, sharding it across available devices.
checkpoint = "bigcode/starcoder"
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Build the agent from the local model and tokenizer, then run a task.
agent = LocalAgent(model, tokenizer)
agent.run("Draw me a picture of rivers and lakes.")
from_pretrained

( pretrained_model_name_or_path, **kwargs )

Parameters

- pretrained_model_name_or_path (str or os.PathLike) — The name of a repo on the Hub or a local path to a folder containing both model and tokenizer.
- kwargs (Dict[str, Any], optional) — Keyword arguments passed along to from_pretrained().

Convenience method to build a LocalAgent from a pretrained checkpoint.
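For instance, the LocalAgent example above can be written more concisely as:

import torch
from transformers import LocalAgent

# Builds the model and tokenizer from the same checkpoint in one call.
agent = LocalAgent.from_pretrained("bigcode/starcoder", device_map="auto", torch_dtype=torch.bfloat16)
agent.run("Draw me a picture of rivers and lakes.")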
class transformers.OpenAiAgent

( model = 'text-davinci-003', api_key = None, chat_prompt_template = None, run_prompt_template = None, additional_tools = None )

Parameters

- model (str, optional, defaults to "text-davinci-003") — The name of the OpenAI model to use.
- api_key (str, optional) — The API key to use. If unset, will look for the environment variable "OPENAI_API_KEY".
- chat_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the chat method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). In that case, the prompt should be in a file named chat_prompt_template.txt in this repo.
- run_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the run method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). In that case, the prompt should be in a file named run_prompt_template.txt in this repo.
- additional_tools (Tool, list of tools or dictionary with tool values, optional) — Any additional tools to include on top of the default ones. If you pass along a tool with the same name as one of the default tools, that default tool will be overridden.

Agent that uses the OpenAI API to generate code.

The OpenAI models are used in generation mode, so even for the chat() API, it's better to use models like "text-davinci-003" over the ChatGPT variant. Proper support for ChatGPT models will come in a future version.
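Example (the API key is a placeholder; this requires the openai package to be installed):

from transformers import OpenAiAgent

agent = OpenAiAgent(model="text-davinci-003", api_key="<your_openai_api_key>")
agent.run("Is the following `text` (in Spanish) positive or negative?", text="¡Este es un API muy agradable!")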
class transformers.AzureOpenAiAgent

( deployment_id, api_key = None, resource_name = None, api_version = '2022-12-01', is_chat_model = None, chat_prompt_template = None, run_prompt_template = None, additional_tools = None )

Parameters

- deployment_id (str) — The name of the deployed Azure OpenAI model to use.
- api_key (str, optional) — The API key to use. If unset, will look for the environment variable "AZURE_OPENAI_API_KEY".
- resource_name (str, optional) — The name of your Azure OpenAI Resource. If unset, will look for the environment variable "AZURE_OPENAI_RESOURCE_NAME".
- api_version (str, optional, defaults to "2022-12-01") — The API version to use for this agent.
- is_chat_model (bool, optional) — Whether you are using a completion model or a chat model (see the note below; chat models won't be as efficient). Will default to whether gpt appears in the deployment_id.
- chat_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the chat method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). In that case, the prompt should be in a file named chat_prompt_template.txt in this repo.
- run_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the run method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). In that case, the prompt should be in a file named run_prompt_template.txt in this repo.
- additional_tools (Tool, list of tools or dictionary with tool values, optional) — Any additional tools to include on top of the default ones. If you pass along a tool with the same name as one of the default tools, that default tool will be overridden.

Agent that uses Azure OpenAI to generate code. See the official documentation to learn how to deploy an OpenAI model on Azure.

The OpenAI models are used in generation mode, so even for the chat() API, it's better to use models like "text-davinci-003" over the ChatGPT variant. Proper support for ChatGPT models will come in a future version.
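Example (the deployment name, key and resource name are placeholders for your own Azure values):

from transformers import AzureOpenAiAgent

agent = AzureOpenAiAgent(
    deployment_id="Davinci-003",
    api_key="<your_azure_api_key>",
    resource_name="<your_resource_name>",
)
agent.run("Is the following `text` (in Spanish) positive or negative?", text="¡Este es un API muy agradable!")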
class transformers.Agent

( chat_prompt_template = None, run_prompt_template = None, additional_tools = None )

Parameters

- chat_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the chat method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). In that case, the prompt should be in a file named chat_prompt_template.txt in this repo.
- run_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the run method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). In that case, the prompt should be in a file named run_prompt_template.txt in this repo.
- additional_tools (Tool, list of tools or dictionary with tool values, optional) — Any additional tools to include on top of the default ones. If you pass along a tool with the same name as one of the default tools, that default tool will be overridden.

Base class for all agents which contains the main API methods.
chat

( task, return_code = False, remote = False, **kwargs )

Parameters

- task (str) — The task to perform.
- return_code (bool, optional, defaults to False) — Whether to just return code and not evaluate it.
- remote (bool, optional, defaults to False) — Whether or not to use remote tools (inference endpoints) instead of local ones.
- kwargs (additional keyword arguments, optional) — Any keyword argument to send to the agent when evaluating the code.

Sends a new request to the agent in a chat. Will use the previous ones in its history.
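Example:

from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
agent.chat("Draw me a picture of rivers and lakes")
agent.chat("Transform the picture so that there is a rock in there")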
run

( task, return_code = False, remote = False, **kwargs )

Parameters

- task (str) — The task to perform.
- return_code (bool, optional, defaults to False) — Whether to just return code and not evaluate it.
- remote (bool, optional, defaults to False) — Whether or not to use remote tools (inference endpoints) instead of local ones.
- kwargs (additional keyword arguments, optional) — Any keyword argument to send to the agent when evaluating the code.

Sends a request to the agent.
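Example:

from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
agent.run("Draw me a picture of rivers and lakes")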
prepare_for_new_chat

( )

Clears the history of prior calls to chat().
transformers.load_tool

( task_or_repo_id, model_repo_id = None, remote = False, token = None, **kwargs )

Parameters

- task_or_repo_id (str) — The task for which to load the tool or a repo ID of a tool on the Hub. Tasks implemented in Transformers are:
  - "document-question-answering"
  - "image-captioning"
  - "image-question-answering"
  - "image-segmentation"
  - "speech-to-text"
  - "summarization"
  - "text-classification"
  - "text-question-answering"
  - "text-to-speech"
  - "translation"
- model_repo_id (str, optional) — Use this argument to use a different model than the default one for the tool you selected.
- remote (bool, optional, defaults to False) — Whether to use your tool by downloading the model or (if it is available) with an inference endpoint.
- token (str, optional) — The token to identify you on hf.co. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
- kwargs (additional keyword arguments, optional) — Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as cache_dir, revision, subfolder) will be used when downloading the files for your tool, and the others will be passed along to its init.

Main function to quickly load a tool, be it on the Hub or in the Transformers library.

Loading a tool means that you'll download the tool and execute it locally. ALWAYS inspect the tool you're downloading before loading it within your runtime, as you would do when installing a package using pip/npm/apt.
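A short sketch of both ways of loading (the Hub repo ID is illustrative):

from transformers import load_tool

# Load a tool implemented in Transformers by task name...
tts = load_tool("text-to-speech")
audio = tts("This is a text-to-speech tool")

# ...or load a tool from a repo on the Hub.
tool = load_tool("huggingface-tools/text-download")  # illustrative repo ID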
class transformers.Tool

( *args, **kwargs )

A base class for the functions used by the agent. Subclass this and implement the __call__ method as well as the following class attributes:

- description (str) — A short description of what your tool does, the inputs it expects and the output(s) it will return. For instance 'This is a tool that downloads a file from a url. It takes the url as input, and returns the text contained in the file'.
- name (str) — A performative name that will be used for your tool in the prompt to the agent. For instance "text-classifier" or "image_generator".
- inputs (List[str]) — The list of modalities expected for the inputs (in the same order as in the call). Modalities should be "text", "image" or "audio". This is only used by launch_gradio_demo or to make a nice space from your tool.
- outputs (List[str]) — The list of modalities returned by the tool (in the same order as the return of the call method). Modalities should be "text", "image" or "audio". This is only used by launch_gradio_demo or to make a nice space from your tool.

You can also override the method setup() if your tool has an expensive operation to perform before being usable (such as loading a model). setup() will be called the first time you use your tool, but not at instantiation.
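As an illustration, here is a minimal custom tool; the class, its name and its behavior are made up for the example:

from transformers import Tool

class TextReverserTool(Tool):
    # Hypothetical example tool.
    name = "text_reverser"
    description = (
        "This is a tool that reverses a text. It takes the text as input and "
        "returns the reversed text."
    )
    inputs = ["text"]
    outputs = ["text"]

    def __call__(self, text):
        return text[::-1]

reversed_text = TextReverserTool()("hello")  # "olleh"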
from_gradio

( gradio_tool )

Creates a Tool from a gradio tool.
from_hub

( repo_id: str, model_repo_id: Optional = None, token: Optional = None, remote: bool = False, **kwargs )

Parameters

- repo_id (str) — The name of the repo on the Hub where your tool is defined.
- model_repo_id (str, optional) — If your tool uses a model and you want to use a different model than the default, you can pass a second repo ID or an endpoint url to this argument.
- token (str, optional) — The token to identify you on hf.co. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
- remote (bool, optional, defaults to False) — Whether to use your tool by downloading the model or (if it is available) with an inference endpoint.
- kwargs (additional keyword arguments, optional) — Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as cache_dir, revision, subfolder) will be used when downloading the files for your tool, and the others will be passed along to its init.

Loads a tool defined on the Hub.

Loading a tool from the Hub means that you'll download the tool and execute it locally. ALWAYS inspect the tool you're downloading before loading it within your runtime, as you would do when installing a package using pip/npm/apt.
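For instance (the repo ID is illustrative):

from transformers import Tool

tool = Tool.from_hub("huggingface-tools/text-download")  # illustrative repo ID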
push_to_hub

( repo_id: str, commit_message: str = 'Upload tool', private: Optional = None, token: Union = None, create_pr: bool = False )

Parameters

- repo_id (str) — The name of the repository you want to push your tool to. It should contain your organization name when pushing to a given organization.
- commit_message (str, optional, defaults to "Upload tool") — Message to commit while pushing.
- private (bool, optional) — Whether or not the repository created should be private.
- token (bool or str, optional) — The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
- create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit.

Upload the tool to the Hub.
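A sketch, assuming the hypothetical TextReverserTool from earlier lives in its own module (see the note under save() below) and that you are logged in; the module name and repo ID are placeholders:

from my_tools import TextReverserTool  # hypothetical module

tool = TextReverserTool()
tool.push_to_hub("<your-username>/text-reverser")  # placeholder repo ID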
save

( output_dir )

Saves the relevant code files for your tool so it can be pushed to the Hub. This will copy the code of your tool in output_dir as well as autogenerate:

- a config file named tool_config.json
- an app.py file so that your tool can be converted to a space
- a requirements.txt containing the names of the modules used by your tool (as detected when inspecting its code)

You should only use this method to save tools that are defined in a separate module (not __main__).
setup

( )

Overwrite this method here for any operation that is expensive and needs to be executed before you start using your tool, such as loading a big model.
class transformers.PipelineTool

( model = None, pre_processor = None, post_processor = None, device = None, device_map = None, model_kwargs = None, token = None, **hub_kwargs )

Parameters

- model (str or PreTrainedModel, optional) — The name of the checkpoint to use for the model, or the instantiated model. If unset, will default to the value of the class attribute default_checkpoint.
- pre_processor (str or Any, optional) — The name of the checkpoint to use for the pre-processor, or the instantiated pre-processor (can be a tokenizer, an image processor, a feature extractor or a processor). Will default to the value of model if unset.
- post_processor (str or Any, optional) — The name of the checkpoint to use for the post-processor, or the instantiated post-processor (can be a tokenizer, an image processor, a feature extractor or a processor). Will default to pre_processor if unset.
- device (int, str or torch.device, optional) — The device on which to execute the model. Will default to any accelerator available (GPU, MPS etc.), the CPU otherwise.
- device_map (str or dict, optional) — If passed along, will be used to instantiate the model.
- model_kwargs (dict, optional) — Any keyword argument to send to the model instantiation.
- token (str, optional) — The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
- hub_kwargs (additional keyword arguments, optional) — Any additional keyword argument to send to the methods that will load the data from the Hub.

A Tool tailored towards Transformer models. On top of the class attributes of the base class Tool, you will need to specify:

- model_class (type) — The class to use to load the model in this tool.
- default_checkpoint (str) — The default checkpoint that should be used when the user doesn't specify one.
- pre_processor_class (type, optional, defaults to AutoProcessor) — The class to use to load the pre-processor.
- post_processor_class (type, optional, defaults to AutoProcessor) — The class to use to load the post-processor (when different from the pre-processor).

decode

( outputs )

Uses the post_processor to decode the model output.

encode

( raw_inputs )

Uses the pre_processor to prepare the inputs for the model.

forward

( inputs )

Sends the inputs through the model.

setup

( )

Instantiates the pre_processor, model and post_processor if necessary.
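A minimal sketch of a PipelineTool subclass; the class name is made up, and the checkpoint and prompt format follow standard T5 usage:

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, PipelineTool

class EnglishToFrenchTool(PipelineTool):
    # Hypothetical tool; "t5-small" is an illustrative checkpoint choice.
    default_checkpoint = "t5-small"
    model_class = AutoModelForSeq2SeqLM
    pre_processor_class = AutoTokenizer
    post_processor_class = AutoTokenizer

    name = "english_to_french_translator"
    description = (
        "This is a tool that translates English text to French. It takes the text "
        "as input and returns the translated text."
    )
    inputs = ["text"]
    outputs = ["text"]

    def encode(self, text):
        # Prepare the model inputs with the pre-processor.
        return self.pre_processor("translate English to French: " + text, return_tensors="pt")

    def forward(self, inputs):
        # Run generation on the underlying model.
        return self.model.generate(**inputs)

    def decode(self, outputs):
        # Turn the generated ids back into a string with the post-processor.
        return self.post_processor.decode(outputs[0], skip_special_tokens=True)

tool = EnglishToFrenchTool()
tool("How are you?")  # triggers setup() on first use, then returns the translation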
class transformers.RemoteTool

( endpoint_url = None, token = None, tool_class = None )

Parameters

- endpoint_url (str, optional) — The url of the endpoint to use.
- token (str, optional) — The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
- tool_class (type, optional) — The corresponding tool_class if this is a remote version of an existing tool. Will help determine when the output should be converted to another type (like images).

A Tool that will make requests to an inference endpoint.
extract_outputs

( outputs )

You can override this method in your custom class of RemoteTool to apply some custom post-processing of the outputs of the endpoint.
prepare_inputs

( *args, **kwargs )

Prepare the inputs received for the HTTP client sending data to the endpoint. Positional arguments will be matched with the signature of the tool_class if it was provided at instantiation. Images will be encoded into bytes.

You can override this method in your custom class of RemoteTool.
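A hedged sketch of overriding extract_outputs; the class name, endpoint url and response shape are all assumptions:

from transformers import RemoteTool

class MyRemoteTool(RemoteTool):
    # Hypothetical subclass; assumes the endpoint returns JSON shaped like
    # [{"generated_text": "..."}].
    def extract_outputs(self, outputs):
        return outputs[0]["generated_text"]

tool = MyRemoteTool(endpoint_url="https://my-endpoint.example.com")  # placeholder url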
transformers.launch_gradio_demo

( tool_class: Tool )

Parameters

- tool_class (type) — The class of the tool for which to launch the demo.

Launches a gradio demo for a tool. The corresponding tool class needs to properly implement the class attributes inputs and outputs.
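For instance, reusing the hypothetical TextReverserTool sketched earlier (requires gradio to be installed):

from transformers import launch_gradio_demo

launch_gradio_demo(TextReverserTool)  # pass the class, not an instance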
Agent Types

Agents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to correctly render these returns in ipython (jupyter, colab, ipython notebooks, etc.), we implement wrapper classes around these types.

The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image object should still behave as a PIL.Image.

These types have three specific purposes:

- Calling to_raw on the type should return the underlying object.
- Calling to_string on the type should return the object as a string: that can be the string in the case of an AgentText, but will be the path of a serialized version of the object in other instances.
- Displaying it in an ipython kernel should display the object correctly.

class transformers.AgentText

Text type returned by the agent. Behaves as a string.
class transformers.AgentImage

Image type returned by the agent. Behaves as a PIL.Image.

to_raw

( )

Returns the "raw" version of that object. In the case of an AgentImage, it is a PIL.Image.

to_string

( )

Returns the stringified version of that object. In the case of an AgentImage, it is a path to the serialized version of the image.

class transformers.AgentAudio

Audio type returned by the agent.

to_raw

( )

Returns the "raw" version of that object. In the case of an AgentAudio, it is a torch.Tensor object.

to_string

( )

Returns the stringified version of that object. In the case of an AgentAudio, it is a path to the serialized version of the audio.
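As an illustration, a sketch reusing the StarCoder endpoint from the examples above; the prompt and the fact that it yields an image follow the earlier run example:

from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
image = agent.run("Draw me a picture of rivers and lakes.")  # an AgentImage

image.to_raw()     # the underlying PIL.Image
image.to_string()  # a path to a serialized version of the image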