Transformers documentation

Pipelines

Pipelines are a simple way to use models for inference. They are objects that abstract most of the complex code from the library, offering a simple API dedicated to several tasks, including named entity recognition, masked language modeling, sentiment analysis, feature extraction and question answering. See the task summary for usage examples.

There are two categories of pipeline abstractions to be aware of:

The pipeline abstraction

The pipeline abstraction is a wrapper around all the other available pipelines. It is instantiated like any other pipeline but can provide additional quality of life.

Simple call on one item:

>>> pipe = pipeline("text-classification")
>>> pipe("This restaurant is awesome")
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]

If you want to use a specific model from the Hub, you can omit the task if the model on the Hub already defines it:

>>> pipe = pipeline(model="FacebookAI/roberta-large-mnli")
>>> pipe("This restaurant is awesome")
[{'label': 'NEUTRAL', 'score': 0.7313136458396912}]

To call a pipeline on many items, you can call it with a list.

>>> pipe = pipeline("text-classification")
>>> pipe(["This restaurant is awesome", "This restaurant is awful"])
[{'label': 'POSITIVE', 'score': 0.9998743534088135},
 {'label': 'NEGATIVE', 'score': 0.9996669292449951}]

To iterate over a full dataset, it is recommended to use a dataset directly. This means you don't need to allocate the whole dataset at once, nor do you need to do batching yourself. This should be as fast as custom loops on GPU. If it isn't, don't hesitate to create an issue.

import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm

pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")

# KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
# as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
    print(out)
    # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
    # {"text": ....}
    # ....

For ease of use, a generator is also possible:

from transformers import pipeline

pipe = pipeline("text-classification")


def data():
    while True:
        # This could come from a dataset, a database, a queue or HTTP request
        # in a server
        # Caveat: because this is iterative, you cannot use `num_workers > 1` variable
        # to use multiple threads to preprocess data. You can still have 1 thread that
        # does the preprocessing while the main runs the big inference
        yield "This is a test"


for out in pipe(data()):
    print(out)
    # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
    # {"text": ....}
    # ....

transformers.pipeline

( task: str = None model: Union = None config: Union = None tokenizer: Union = None feature_extractor: Union = None image_processor: Union = None framework: Optional = None revision: Optional = None use_fast: bool = True token: Union = None device: Union = None device_map = None torch_dtype = None trust_remote_code: Optional = None model_kwargs: Dict = None pipeline_class: Optional = None **kwargs ) Pipeline

Parameters

  • task (str) — The task defining which pipeline will be returned. Currently accepted tasks are:

  • model (str or PreTrainedModel or TFPreTrainedModel, optional) — The model that will be used by the pipeline to make predictions. This can be a model identifier or an actual instance of a pretrained model inheriting from PreTrainedModel (for PyTorch) or TFPreTrainedModel (for TensorFlow).

    If not provided, the default for the task will be loaded.

  • config (str or PretrainedConfig, optional) — The configuration that will be used by the pipeline to instantiate the model. This can be a model identifier or an actual pretrained model configuration inheriting from PretrainedConfig.

    If not provided, the default configuration file for the requested model will be used. That means that if model is given, its default configuration will be used. However, if model is not supplied, this task’s default model’s config is used instead.

  • tokenizer (str or PreTrainedTokenizer, optional) — The tokenizer that will be used by the pipeline to encode data for the model. This can be a model identifier or an actual pretrained tokenizer inheriting from PreTrainedTokenizer.

    If not provided, the default tokenizer for the given model will be loaded (if it is a string). If model is not specified or not a string, then the default tokenizer for config is loaded (if it is a string). However, if config is also not given or not a string, then the default tokenizer for the given task will be loaded.

  • feature_extractor (str or PreTrainedFeatureExtractor, optional) — The feature extractor that will be used by the pipeline to encode data for the model. This can be a model identifier or an actual pretrained feature extractor inheriting from PreTrainedFeatureExtractor.

    Feature extractors are used for non-NLP models, such as Speech or Vision models as well as multi-modal models. Multi-modal models will also require a tokenizer to be passed.

    If not provided, the default feature extractor for the given model will be loaded (if it is a string). If model is not specified or not a string, then the default feature extractor for config is loaded (if it is a string). However, if config is also not given or not a string, then the default feature extractor for the given task will be loaded.

  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • revision (str, optional, defaults to "main") — When passing a task name or a string model identifier: The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • use_fast (bool, optional, defaults to True) — Whether or not to use a Fast tokenizer if possible (a PreTrainedTokenizerFast).
  • token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
  • device (int or str or torch.device) — Defines the device (e.g., "cpu", "cuda:1", "mps", or a GPU ordinal rank like 1) on which this pipeline will be allocated.
  • device_map (str or Dict[str, Union[int, str, torch.device]], optional) — Sent directly as model_kwargs (just a simpler shortcut). When the accelerate library is present, set device_map="auto" to compute the most optimized device_map automatically (see here for more information).

    Do not use device_map AND device at the same time as they will conflict

  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto").
  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom code defined on the Hub in their own modeling, configuration, tokenization or even pipeline files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • model_kwargs (Dict[str, Any], optional) — Additional dictionary of keyword arguments passed along to the model’s from_pretrained(..., **model_kwargs) function.
  • kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the specific pipeline init (see the documentation for the corresponding pipeline class for possible values).

Returns

Pipeline

A suitable pipeline for the task.

Utility factory method to build a Pipeline.

Pipelines are made of:

  • A tokenizer in charge of mapping raw textual input to tokens.
  • A model to make predictions from the inputs.
  • Some (optional) post processing for enhancing model’s output.

Examples:

>>> from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer

>>> # Sentiment analysis pipeline
>>> analyzer = pipeline("sentiment-analysis")

>>> # Question answering pipeline, specifying the checkpoint identifier
>>> oracle = pipeline(
...     "question-answering", model="distilbert/distilbert-base-cased-distilled-squad", tokenizer="google-bert/bert-base-cased"
... )

>>> # Named entity recognition pipeline, passing in a specific model and tokenizer
>>> model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
>>> recognizer = pipeline("ner", model=model, tokenizer=tokenizer)
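
As a hedged sketch of the device_map and torch_dtype arguments described above (the checkpoint is only an example, and device_map="auto" requires the accelerate library):

>>> import torch
>>> from transformers import pipeline

>>> # Let accelerate place the weights and load them in half precision.
>>> # Do not pass `device` together with `device_map`, as the two conflict.
>>> generator = pipeline(
...     "text-generation",
...     model="openai-community/gpt2",
...     torch_dtype=torch.float16,
...     device_map="auto",
... )
>>> outputs = generator("Hello, I'm a language model,", max_new_tokens=10)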

Pipeline batching

All pipelines can use batching. This will work whenever the pipeline uses its streaming ability (so when passing a list, a Dataset, or a generator).

from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets

dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
    print(out)
    # [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
    # Exactly the same output as before, but the content are passed
    # as batches to the model

However, this is not automatically a win for performance. It can be a 10x speedup or a 5x slowdown depending on hardware, data and the actual model being used.

Example where it's mostly a speedup:

from transformers import pipeline
from torch.utils.data import Dataset
from tqdm.auto import tqdm

pipe = pipeline("text-classification", device=0)


class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        return "This is a test"


dataset = MyDataset()

for batch_size in [1, 8, 64, 256]:
    print("-" * 30)
    print(f"Streaming batch_size={batch_size}")
    for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
        pass
# On GTX 970
------------------------------
Streaming no batching
100%|██████████████████████████████████████████████████████████████████████| 5000/5000 [00:26<00:00, 187.52it/s]
------------------------------
Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:04<00:00, 1205.95it/s]
------------------------------
Streaming batch_size=64
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:02<00:00, 2478.24it/s]
------------------------------
Streaming batch_size=256
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:01<00:00, 2554.43it/s]
(diminishing returns, saturated the GPU)

Example where it's mostly a slowdown:

class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        if i % 64 == 0:
            n = 100
        else:
            n = 1
        return "This is a test" * n

This is an occasionally very long sentence compared to the others. In that case, the whole batch will need to be 400 tokens long, so the whole batch will be [64, 400] instead of [64, 4], leading to a large slowdown. Even worse, on bigger batches, the program simply crashes.

------------------------------
Streaming no batching
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:05<00:00, 183.69it/s]
------------------------------
Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:03<00:00, 265.74it/s]
------------------------------
Streaming batch_size=64
100%|██████████████████████████████████████████████████████████████████████| 1000/1000 [00:26<00:00, 37.80it/s]
------------------------------
Streaming batch_size=256
  0%|                                                                                 | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/nicolas/src/transformers/test.py", line 42, in <module>
    for out in tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
....
    q = q / math.sqrt(dim_per_head)  # (bs, n_heads, q_length, dim_per_head)
RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 3.95 GiB total capacity; 1.72 GiB already allocated; 354.88 MiB free; 2.46 GiB reserved in total by PyTorch)

There are no good (general) solutions for this problem, and your mileage may vary depending on your use case. As a rule of thumb:

For users, a rule of thumb is:

  • Measure performance on your load, with your hardware. Measure, measure, and keep measuring. Real numbers are the only way to go.
  • If you are latency constrained (a live product doing inference), don't batch.
  • If you are using CPU, don't batch.
  • If you are optimizing for throughput on GPU (you want to run your model on a bunch of static data), then:
    • If you have no clue about the size of the sequence lengths ("natural" data), don't batch by default; measure and try tentatively to add it, and add OOM checks to recover when it fails (and it will fail at some point if you don't control the sequence lengths).
    • If your sequence lengths are very regular, then batching is more likely to be very interesting; measure and push it until you get OOMs.
    • The larger the GPU, the more likely batching is going to be interesting.
  • As soon as you enable batching, make sure you can handle OOMs nicely (see the sketch after this list).
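
A minimal sketch of the kind of OOM handling mentioned in the last point, assuming a CUDA GPU and a recent PyTorch that exposes torch.cuda.OutOfMemoryError; the starting batch size and the halving strategy are illustrative only:

import torch
from transformers import pipeline

pipe = pipeline("text-classification", device=0)
sentences = ["This is a test"] * 1000

batch_size = 256
results = None
while batch_size >= 1:
    try:
        results = pipe(sentences, batch_size=batch_size)
        break
    except torch.cuda.OutOfMemoryError:
        # Free cached blocks and retry with a smaller batch.
        torch.cuda.empty_cache()
        batch_size //= 2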

Pipeline chunk batching

zero-shot-classification and question-answering are slightly specific in the sense that a single input might yield multiple forward passes of the model. Under normal circumstances, this would cause issues with the batch_size argument.

In order to circumvent this issue, both of these pipelines are a bit specific: they are ChunkPipeline instead of regular Pipeline. In short:

preprocessed = pipe.preprocess(inputs)
model_outputs = pipe.forward(preprocessed)
outputs = pipe.postprocess(model_outputs)

now becomes:

all_model_outputs = []
for preprocessed in pipe.preprocess(inputs):
    model_outputs = pipe.forward(preprocessed)
    all_model_outputs.append(model_outputs)
outputs = pipe.postprocess(all_model_outputs)

This should be very transparent to your code because the pipelines are used in the same way.

This is a simplified view, since the pipeline can handle batching automatically! This means you don't have to care about how many forward passes your inputs will actually trigger; you can optimize batch_size independently of the inputs. The caveats from the previous section still apply.
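
For instance, a minimal sketch (hypothetical labels, task default model) showing that batch_size is tuned independently of how many forward passes each input triggers:

from transformers import pipeline

classifier = pipeline("zero-shot-classification")
sequences = ["Who are you voting for in 2020?"] * 32

# Each sequence triggers one forward pass per candidate label,
# but batch_size only controls how those passes are grouped on the device.
outputs = classifier(sequences, candidate_labels=["politics", "sports"], batch_size=8)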

Pipeline custom code

If you want to override a specific pipeline:

Don't hesitate to create an issue for your task at hand; the goal of the pipeline is to be easy to use and to support most cases, so transformers may already support your use case.

If you want to try it out simply, you can:

  • Subclass your pipeline of choice
class MyPipeline(TextClassificationPipeline):
    def postprocess(self, model_outputs, **kwargs):
        # Your code goes here
        scores = super().postprocess(model_outputs, **kwargs)
        # And here
        return scores


my_pipeline = MyPipeline(model=model, tokenizer=tokenizer, ...)
# or if you use *pipeline* function, then:
my_pipeline = pipeline(model="xxxx", pipeline_class=MyPipeline)

That should enable you to write all the custom code you want.

Implementing a pipeline

Implementing a new pipeline

Audio

Pipelines available for audio tasks include the following.

AudioClassificationPipeline

class transformers.AudioClassificationPipeline

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • feature_extractor (SequenceFeatureExtractor) — The feature extractor that will be used by the pipeline to encode data for the model. This object inherits from SequenceFeatureExtractor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a serialized format (i.e., pickle) or as the raw output data, e.g. text.

Audio classification pipeline using any AutoModelForAudioClassification. This pipeline predicts the class of a raw waveform or an audio file. In case of an audio file, ffmpeg should be installed to support multiple audio formats.

Example:

>>> from transformers import pipeline

>>> classifier = pipeline(model="superb/wav2vec2-base-superb-ks")
>>> classifier("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac")
[{'score': 0.997, 'label': '_unknown_'}, {'score': 0.002, 'label': 'left'}, {'score': 0.0, 'label': 'yes'}, {'score': 0.0, 'label': 'down'}, {'score': 0.0, 'label': 'stop'}]

Learn more about the basics of using a pipeline in the pipeline tutorial

This pipeline can currently be loaded from pipeline() using the following task identifier: "audio-classification".

See the list of available models on huggingface.co/models.

__call__

( inputs: Union **kwargs ) A list of dict with the following keys

Parameters

  • inputs (np.ndarray or bytes or str or dict) — The inputs are either:
    • str that is the filename of the audio file, the file will be read at the correct sampling rate to get the waveform using ffmpeg. This requires ffmpeg to be installed on the system.
    • bytes it is supposed to be the content of an audio file and is interpreted by ffmpeg in the same way.
    • (np.ndarray of shape (n, ) of type np.float32 or np.float64) Raw audio at the correct sampling rate (no further check will be done)
    • dict form can be used to pass raw audio sampled at arbitrary sampling_rate and let this pipeline do the resampling. The dict must either be in the format {"sampling_rate": int, "raw": np.array}, or {"sampling_rate": int, "array": np.array}, where the key "raw" or "array" is used to denote the raw audio waveform.
  • top_k (int, optional, defaults to None) — The number of top labels that will be returned by the pipeline. If the provided number is None or higher than the number of labels available in the model configuration, it will default to the number of labels.

Returns

A list of dict with the following keys

  • label (str) — The label predicted.
  • score (float) — The corresponding probability.

Classify the sequence(s) given as inputs. See the AutomaticSpeechRecognitionPipeline documentation for more information.
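
A hedged sketch of the dict input form described above, assuming you already have a waveform as a NumPy array (here a dummy second of silence) sampled at a rate the pipeline will resample:

>>> import numpy as np
>>> from transformers import pipeline

>>> classifier = pipeline(model="superb/wav2vec2-base-superb-ks")
>>> # One second of silence at 8 kHz; the pipeline resamples it to the model's rate.
>>> waveform = np.zeros(8000, dtype=np.float32)
>>> predictions = classifier({"sampling_rate": 8000, "raw": waveform}, top_k=3)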

AutomaticSpeechRecognitionPipeline

class transformers.AutomaticSpeechRecognitionPipeline

( model: PreTrainedModel feature_extractor: Union = None tokenizer: Optional = None decoder: Union = None device: Union = None torch_dtype: Union = None **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • feature_extractor (SequenceFeatureExtractor) — The feature extractor that will be used by the pipeline to encode waveform for the model.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • decoder (pyctcdecode.BeamSearchDecoderCTC, optional) — PyCTCDecode’s BeamSearchDecoderCTC can be passed for language model boosted decoding. See Wav2Vec2ProcessorWithLM for more information.
  • chunk_length_s (float, optional, defaults to 0) — The input length for in each chunk. If chunk_length_s = 0 then chunking is disabled (default).

    For more information on how to effectively use chunk_length_s, please have a look at the ASR chunking blog post.

  • stride_length_s (float, optional, defaults to chunk_length_s / 6) — The length of stride on the left and right of each chunk. Used only with chunk_length_s > 0. This enables the model to see more context and infer letters better than without this context but the pipeline discards the stride bits at the end to make the final reconstitution as perfect as possible.

    For more information on how to effectively use stride_length_s, please have a look at the ASR chunking blog post.

  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
  • device (Union[int, torch.device], optional) — Device ordinal for CPU/GPU supports. Setting this to None will leverage CPU, a positive will run the model on the associated CUDA device id.
  • torch_dtype (Union[int, torch.dtype], optional) — The data-type (dtype) of the computation. Setting this to None will use float32 precision. Set to torch.float16 or torch.bfloat16 to use half-precision in the respective dtypes.

Pipeline that aims at extracting spoken text contained within some audio.

The input can be either a raw waveform or an audio file. In case of an audio file, ffmpeg should be installed to support multiple audio formats.

Example:

>>> from transformers import pipeline

>>> transcriber = pipeline(model="openai/whisper-base")
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac")
{'text': ' He hoped there would be stew for dinner, turnips and carrots and bruised potatoes and fat mutton pieces to be ladled out in thick, peppered flour-fatten sauce.'}

Learn more about the basics of using a pipeline in the pipeline tutorial

__call__

( inputs: Union **kwargs ) Dict

Parameters

  • inputs (np.ndarray or bytes or str or dict) — The inputs are either:

    • str that is either the filename of a local audio file, or a public URL address to download the audio file. The file will be read at the correct sampling rate to get the waveform using ffmpeg. This requires ffmpeg to be installed on the system.
    • bytes it is supposed to be the content of an audio file and is interpreted by ffmpeg in the same way.
    • (np.ndarray of shape (n, ) of type np.float32 or np.float64) Raw audio at the correct sampling rate (no further check will be done)
    • dict form can be used to pass raw audio sampled at arbitrary sampling_rate and let this pipeline do the resampling. The dict must be in the format {"sampling_rate": int, "raw": np.array}, optionally with a "stride": (left: int, right: int) that tells the pipeline to ignore the first left samples and last right samples in decoding (but use them at inference to provide more context to the model). Only use stride with CTC models.
  • return_timestamps (optional, str or bool) — Only available for pure CTC models (Wav2Vec2, HuBERT, etc) and the Whisper model. Not available for other sequence-to-sequence models.

    For CTC models, timestamps can take one of two formats:

    • "char": the pipeline will return timestamps along the text for every character in the text. For instance, if you get [{"text": "h", "timestamp": (0.5, 0.6)}, {"text": "i", "timestamp": (0.7, 0.9)}], then it means the model predicts that the letter “h” was spoken after 0.5 and before 0.6 seconds.
    • "word": the pipeline will return timestamps along the text for every word in the text. For instance, if you get [{"text": "hi ", "timestamp": (0.5, 0.9)}, {"text": "there", "timestamp": (1.0, 1.5)}], then it means the model predicts that the word “hi” was spoken after 0.5 and before 0.9 seconds.

    For the Whisper model, timestamps can take one of two formats:

    • "word": same as above for word-level CTC timestamps. Word-level timestamps are predicted through the dynamic-time warping (DTW) algorithm, an approximation to word-level timestamps by inspecting the cross-attention weights.
    • True: the pipeline will return timestamps along the text for segments of words in the text. For instance, if you get [{"text": " Hi there!", "timestamp": (0.5, 1.5)}], then it means the model predicts that the segment “Hi there!” was spoken after 0.5 and before 1.5 seconds. Note that a segment of text refers to a sequence of one or more words, rather than individual words as with word-level timestamps.
  • generate_kwargs (dict, optional) — The dictionary of ad-hoc parametrization of generate_config to be used for the generation call. For a complete overview of generate, check the following guide.
  • max_new_tokens (int, optional) — The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt.

Returns

Dict

A dictionary with the following keys:

  • text (str): The recognized text.
  • chunks (optional, List[Dict]) — When using return_timestamps, chunks will become a list containing all the various text chunks identified by the model, e.g. [{"text": "hi ", "timestamp": (0.5, 0.9)}, {"text": "there", "timestamp": (1.0, 1.5)}]. The original full text can roughly be recovered by doing "".join(chunk["text"] for chunk in output["chunks"]).

Transcribe the audio sequence(s) given as inputs to text. See the AutomaticSpeechRecognitionPipeline documentation for more information.
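
A hedged sketch combining the chunk_length_s init argument and the return_timestamps call argument described above, reusing the audio URL from the earlier example; the chunk length of 30 seconds is only an illustrative choice:

>>> from transformers import pipeline

>>> transcriber = pipeline(model="openai/whisper-base", chunk_length_s=30)
>>> result = transcriber(
...     "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac",
...     return_timestamps=True,
... )
>>> # result["text"] holds the full transcription; result["chunks"] holds
>>> # [{"text": ..., "timestamp": (start, end)}, ...] segments.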

TextToAudioPipeline

class transformers.TextToAudioPipeline

( *args vocoder = None sampling_rate = None **kwargs )

Text-to-audio generation pipeline using any AutoModelForTextToWaveform or AutoModelForTextToSpectrogram. This pipeline generates an audio file from an input text and optional other conditional inputs.

Example:

>>> from transformers import pipeline

>>> pipe = pipeline(model="suno/bark-small")
>>> output = pipe("Hey it's HuggingFace on the phone!")

>>> audio = output["audio"]
>>> sampling_rate = output["sampling_rate"]

Learn more about the basics of using a pipeline in the pipeline tutorial

You can specify parameters passed to the model by using TextToAudioPipeline.__call__.forward_params or TextToAudioPipeline.__call__.generate_kwargs.

Example:

>>> from transformers import pipeline

>>> music_generator = pipeline(task="text-to-audio", model="facebook/musicgen-small", framework="pt")

>>> # diversify the music generation by adding randomness with a high temperature and set a maximum music length
>>> generate_kwargs = {
...     "do_sample": True,
...     "temperature": 0.7,
...     "max_new_tokens": 35,
... }

>>> outputs = music_generator("Techno music with high melodic riffs", generate_kwargs=generate_kwargs)

This pipeline can currently be loaded from pipeline() using the following task identifiers: "text-to-speech" or "text-to-audio".

See the list of available models on huggingface.co/models.

__call__

( text_inputs: Union **forward_params ) A dict or a list of dict

Parameters

  • text_inputs (str or List[str]) — The text(s) to generate.
  • forward_params (dict, optional) — Parameters passed to the model generation/forward method. forward_params are always passed to the underlying model.
  • generate_kwargs (dict, optional) — The dictionary of ad-hoc parametrization of generate_config to be used for the generation call. For a complete overview of generate, check the following guide. generate_kwargs are only passed to the underlying model if the latter is a generative model.

Returns

A dict or a list of dict

The dictionaries have two keys:

  • audio (np.ndarray of shape (nb_channels, audio_length)) — The generated audio waveform.
  • sampling_rate (int) — The sampling rate of the generated audio waveform.

Generates speech/audio from the inputs. See the TextToAudioPipeline documentation for more information.
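
A hedged sketch of writing the returned waveform to disk, assuming scipy is installed; the model is the one from the example above, and the transpose simply maps the documented (nb_channels, audio_length) shape to the layout scipy.io.wavfile.write expects:

>>> import scipy.io.wavfile
>>> from transformers import pipeline

>>> pipe = pipeline(model="suno/bark-small")
>>> output = pipe("Hey it's HuggingFace on the phone!")

>>> # output["audio"] is an np.ndarray; output["sampling_rate"] is its sampling rate.
>>> scipy.io.wavfile.write("bark_out.wav", rate=output["sampling_rate"], data=output["audio"].T)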

ZeroShotAudioClassificationPipeline

class transformers.ZeroShotAudioClassificationPipeline

( **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • feature_extractor (SequenceFeatureExtractor) — The feature extractor that will be used by the pipeline to encode data for the model. This object inherits from SequenceFeatureExtractor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a serialized format (i.e., pickle) or as the raw output data, e.g. text.

Zero shot audio classification pipeline using ClapModel. This pipeline predicts the class of an audio when you provide an audio and a set of candidate_labels.

Example:

>>> from transformers import pipeline
>>> from datasets import load_dataset

>>> dataset = load_dataset("ashraq/esc50")
>>> audio = next(iter(dataset["train"]["audio"]))["array"]
>>> classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-unfused")
>>> classifier(audio, candidate_labels=["Sound of a dog", "Sound of vaccum cleaner"])
[{'score': 0.9996, 'label': 'Sound of a dog'}, {'score': 0.0004, 'label': 'Sound of vaccum cleaner'}]

Learn more about the basics of using a pipeline in the pipeline tutorial

This audio classification pipeline can currently be loaded from pipeline() using the following task identifier: "zero-shot-audio-classification".

See the list of available models on huggingface.co/models.

__call__

( audios: Union **kwargs )

Parameters

  • audios (str, List[str], np.array or List[np.array]) — The pipeline handles three types of inputs:
    • A string containing a http link pointing to an audio
    • A string containing a local path to an audio
    • An audio loaded in numpy
  • candidate_labels (List[str]) — The candidate labels for this audio
  • hypothesis_template (str, optional, defaults to "This is a sound of {}") — The sentence used in conjunction with candidate_labels to attempt the audio classification by replacing the placeholder with the candidate_labels. The likelihood is then estimated using logits_per_audio.

Assign labels to the audio(s) passed as inputs.
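
A hedged sketch passing a custom hypothesis_template, following the dataset and model used in the example above:

>>> from transformers import pipeline
>>> from datasets import load_dataset

>>> dataset = load_dataset("ashraq/esc50")
>>> audio = next(iter(dataset["train"]["audio"]))["array"]

>>> classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-unfused")
>>> predictions = classifier(
...     audio,
...     candidate_labels=["dog barking", "vacuum cleaner"],
...     hypothesis_template="This is a recording of {}.",
... )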

Computer vision

Pipelines available for computer vision tasks include the following.

DepthEstimationPipeline

class transformers.DepthEstimationPipeline

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • image_processor (BaseImageProcessor) — The image processor that will be used by the pipeline to encode data for the model. This object inherits from BaseImageProcessor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a serialized format (i.e., pickle) or as the raw output data, e.g. text.

Depth estimation pipeline using any AutoModelForDepthEstimation. This pipeline predicts the depth of an image.

Example:

>>> from transformers import pipeline

>>> depth_estimator = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-base-hf")
>>> output = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")
>>> # This is a tensor with the values being the depth expressed in meters for each pixel
>>> output["predicted_depth"].shape
torch.Size([1, 384, 384])

Learn more about the basics of using a pipeline in the pipeline tutorial

This depth estimation pipeline can currently be loaded from pipeline() using the following task identifier: "depth-estimation".

See the list of available models on huggingface.co/models.

__call__

( images: Union **kwargs )

Parameters

  • images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:

    • A string containing a http link pointing to an image
    • A string containing a local path to an image
    • An image loaded in PIL directly

    The pipeline accepts either a single image or a batch of images, which must then be passed as a string. Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL images.

  • timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.

Predict the depth(s) of the image(s) passed as inputs.
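
A hedged sketch of saving a visualisation of the result; in this version the output dict also carries a "depth" PIL image alongside the raw "predicted_depth" tensor, but treat that key as an assumption to verify for your checkpoint:

>>> from transformers import pipeline

>>> depth_estimator = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-base-hf")
>>> output = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")
>>> # "depth" is a PIL.Image rendering of the predicted depth map.
>>> output["depth"].save("depth.png")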

ImageClassificationPipeline

class transformers.ImageClassificationPipeline

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • image_processor (BaseImageProcessor) — The image processor that will be used by the pipeline to encode data for the model. This object inherits from BaseImageProcessor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a serialized format (i.e., pickle) or as the raw output data, e.g. text.
  • function_to_apply (str, optional, defaults to "default") — The function to apply to the model outputs in order to retrieve the scores. Accepts four different values:

    • "default": if the model has a single label, will apply the sigmoid function on the output. If the model has several labels, will apply the softmax function on the output.
    • "sigmoid": Applies the sigmoid function on the output.
    • "softmax": Applies the softmax function on the output.
    • "none": Does not apply any function on the output.

Image classification pipeline using any AutoModelForImageClassification. This pipeline predicts the class of an image.

Example:

>>> from transformers import pipeline

>>> classifier = pipeline(model="microsoft/beit-base-patch16-224-pt22k-ft22k")
>>> classifier("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'score': 0.442, 'label': 'macaw'}, {'score': 0.088, 'label': 'popinjay'}, {'score': 0.075, 'label': 'parrot'}, {'score': 0.073, 'label': 'parodist, lampooner'}, {'score': 0.046, 'label': 'poll, poll_parrot'}]

Learn more about the basics of using a pipeline in the pipeline tutorial

This image classification pipeline can currently be loaded from pipeline() using the following task identifier: "image-classification".

See the list of available models on huggingface.co/models.

__call__

( images: Union **kwargs )

Parameters

  • images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:

    • A string containing a http link pointing to an image
    • A string containing a local path to an image
    • An image loaded in PIL directly

    The pipeline accepts either a single image or a batch of images, which must then be passed as a string. Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL images.

  • function_to_apply (str, optional, defaults to "default") — The function to apply to the model outputs in order to retrieve the scores. Accepts four different values:

    If this argument is not specified, then it will apply the following functions according to the number of labels:

    • If the model has a single label, will apply the sigmoid function on the output.
    • If the model has several labels, will apply the softmax function on the output.

    Possible values are:

    • "sigmoid": Applies the sigmoid function on the output.
    • "softmax": Applies the softmax function on the output.
    • "none": Does not apply any function on the output.
  • top_k (int, optional, defaults to 5) — The number of top labels that will be returned by the pipeline. If the provided number is higher than the number of labels available in the model configuration, it will default to the number of labels.
  • timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.

Assign labels to the image(s) passed as inputs.
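
A hedged sketch using the top_k and function_to_apply parameters described above, reusing the model from the earlier example:

>>> from transformers import pipeline

>>> classifier = pipeline(model="microsoft/beit-base-patch16-224-pt22k-ft22k")
>>> # Return only the 2 best labels and keep the raw logits (no softmax).
>>> predictions = classifier(
...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
...     top_k=2,
...     function_to_apply="none",
... )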

ImageSegmentationPipeline

class transformers.ImageSegmentationPipeline

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • image_processor (BaseImageProcessor) — The image processor that will be used by the pipeline to encode data for the model. This object inherits from BaseImageProcessor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a serialized format (i.e., pickle) or as the raw output data, e.g. text.

Image segmentation pipeline using any AutoModelForXXXSegmentation. This pipeline predicts masks of objects and their classes.

Example:

>>> from transformers import pipeline

>>> segmenter = pipeline(model="facebook/detr-resnet-50-panoptic")
>>> segments = segmenter("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
>>> len(segments)
2

>>> segments[0]["label"]
'bird'

>>> segments[1]["label"]
'bird'

>>> type(segments[0]["mask"])  # This is a black and white mask showing where is the bird on the original image.
<class 'PIL.Image.Image'>

>>> segments[0]["mask"].size
(768, 512)

This image segmentation pipeline can currently be loaded from pipeline() using the following task identifier: "image-segmentation".

See the list of available models on huggingface.co/models.

__call__

( images **kwargs )

Parameters

  • images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:

    • A string containing an HTTP(S) link pointing to an image
    • A string containing a local path to an image
    • An image loaded in PIL directly

    The pipeline accepts either a single image or a batch of images. Images in a batch must all be in the same format: all as HTTP(S) links, all as local paths, or all as PIL images.

  • subtask (str, optional) — Segmentation task to be performed, choose [semantic, instance and panoptic] depending on model capabilities. If not set, the pipeline will attempt to resolve in the following order: panoptic, instance, semantic.
  • threshold (float, optional, defaults to 0.9) — Probability threshold to filter out predicted masks.
  • mask_threshold (float, optional, defaults to 0.5) — Threshold to use when turning the predicted masks into binary values.
  • overlap_mask_area_threshold (float, optional, defaults to 0.5) — Mask overlap threshold to eliminate small, disconnected segments.
  • timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.

Perform segmentation (detect masks & classes) in the image(s) passed as inputs.
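
A hedged sketch of saving one of the predicted masks, using the panoptic model from the example above and the default thresholds:

>>> from transformers import pipeline

>>> segmenter = pipeline(model="facebook/detr-resnet-50-panoptic")
>>> segments = segmenter("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
>>> # Each mask is a single-channel PIL.Image the size of the input image.
>>> segments[0]["mask"].save(f"{segments[0]['label']}_0.png")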

ImageToImagePipeline

class transformers.ImageToImagePipeline

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • image_processor (BaseImageProcessor) — The image processor that will be used by the pipeline to encode data for the model. This object inherits from BaseImageProcessor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a serialized format (i.e., pickle) or as the raw output data, e.g. text.

Image to Image pipeline using any AutoModelForImageToImage. This pipeline generates an image based on a previous image input.

Example:

>>> from PIL import Image
>>> import requests

>>> from transformers import pipeline

>>> upscaler = pipeline("image-to-image", model="caidas/swin2SR-classical-sr-x2-64")
>>> img = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> img = img.resize((64, 64))
>>> upscaled_img = upscaler(img)
>>> img.size
(64, 64)

>>> upscaled_img.size
(144, 144)

This image to image pipeline can currently be loaded from pipeline() using the following task identifier: "image-to-image".

See the list of available models on huggingface.co/models.

__call__

( images: Union **kwargs )

Parameters

  • images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:

    • A string containing a http link pointing to an image
    • A string containing a local path to an image
    • An image loaded in PIL directly

    The pipeline accepts either a single image or a batch of images, which must then be passed as a string. Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL images.

  • timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is used and the call may block forever.

Transform the image(s) passed as inputs.

ObjectDetectionPipeline

class transformers.ObjectDetectionPipeline

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • image_processor (BaseImageProcessor) — The image processor that will be used by the pipeline to encode data for the model. This object inherits from BaseImageProcessor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a serialized format (i.e., pickle) or as the raw output data, e.g. text.

Object detection pipeline using any AutoModelForObjectDetection. This pipeline predicts bounding boxes of objects and their classes.

Example:

>>> from transformers import pipeline

>>> detector = pipeline(model="facebook/detr-resnet-50")
>>> detector("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'score': 0.997, 'label': 'bird', 'box': {'xmin': 69, 'ymin': 171, 'xmax': 396, 'ymax': 507}}, {'score': 0.999, 'label': 'bird', 'box': {'xmin': 398, 'ymin': 105, 'xmax': 767, 'ymax': 507}}]

>>> # x, y  are expressed relative to the top left hand corner.

Learn more about the basics of using a pipeline in the pipeline tutorial

This object detection pipeline can currently be loaded from pipeline() using the following task identifier: "object-detection".

See the list of available models on huggingface.co/models.

__call__

( *args **kwargs )

Parameters

  • images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:

    • A string containing an HTTP(S) link pointing to an image
    • A string containing a local path to an image
    • An image loaded in PIL directly

    The pipeline accepts either a single image or a batch of images. Images in a batch must all be in the same format: all as HTTP(S) links, all as local paths, or all as PIL images.

  • threshold (float, optional, defaults to 0.9) — The probability necessary to make a prediction.
  • timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.

Detect objects (bounding boxes & classes) in the image(s) passed as inputs.
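
A hedged sketch of cropping a detection out of the original image with PIL, using the box format shown in the example above:

>>> import requests
>>> from PIL import Image
>>> from transformers import pipeline

>>> url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> detector = pipeline(model="facebook/detr-resnet-50")
>>> detections = detector(image, threshold=0.9)
>>> # Crop the first detection using its bounding box (x/y relative to the top left corner).
>>> box = detections[0]["box"]
>>> crop = image.crop((box["xmin"], box["ymin"], box["xmax"], box["ymax"]))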

VideoClassificationPipeline

class transformers.VideoClassificationPipeline

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • image_processor (BaseImageProcessor) — The image processor that will be used by the pipeline to encode data for the model. This object inherits from BaseImageProcessor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a serialized format (i.e., pickle) or as the raw output data, e.g. text.

Video classification pipeline using any AutoModelForVideoClassification. This pipeline predicts the class of a video.
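
Example:

This class has no inline example in this version of the docs; the following is a hedged sketch, assuming the MCG-NJU/videomae-base-finetuned-kinetics checkpoint and a video decoding backend are available, and that "archery.mp4" stands in for any local video file or public video URL:

>>> from transformers import pipeline

>>> video_classifier = pipeline(task="video-classification", model="MCG-NJU/videomae-base-finetuned-kinetics")
>>> predictions = video_classifier("archery.mp4", top_k=3)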

This video classification pipeline can currently be loaded from pipeline() using the following task identifier: "video-classification".

See the list of available models on huggingface.co/models.

__call__

( videos: Union **kwargs )

Parameters

  • videos (str, List[str]) — The pipeline handles the following types of videos:

    • A string containing a http link pointing to a video
    • A string containing a local path to a video

    The pipeline accepts either a single video or a batch of videos, which must then be passed as a string. Videos in a batch must all be in the same format: all as http links or all as local paths.

  • top_k (int, optional, defaults to 5) — The number of top labels that will be returned by the pipeline. If the provided number is higher than the number of labels available in the model configuration, it will default to the number of labels.
  • num_frames (int, optional, defaults to self.model.config.num_frames) — The number of frames sampled from the video to run the classification on. If not provided, will default to the number of frames specified in the model configuration.
  • frame_sampling_rate (int, optional, defaults to 1) — The sampling rate used to select frames from the video. If not provided, will default to 1, i.e. every frame will be used.

Assign labels to the video(s) passed as inputs.

ZeroShotImageClassificationPipeline

class transformers.ZeroShotImageClassificationPipeline

( **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • image_processor (BaseImageProcessor) — The image processor that will be used by the pipeline to encode data for the model. This object inherits from BaseImageProcessor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference this is not always beneficial, please read Batching with pipelines.
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, a positive integer will run the model on the associated CUDA device id. You can also pass a native torch.device or a str.
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto").
  • binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as raw output data (e.g., text).

Zero shot image classification pipeline using CLIPModel. This pipeline predicts the class of an image when you provide an image and a set of candidate_labels.

Example:

>>> from transformers import pipeline

>>> classifier = pipeline(model="google/siglip-so400m-patch14-384")
>>> classifier(
...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
...     candidate_labels=["animals", "humans", "landscape"],
... )
[{'score': 0.965, 'label': 'animals'}, {'score': 0.03, 'label': 'humans'}, {'score': 0.005, 'label': 'landscape'}]

>>> classifier(
...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
...     candidate_labels=["black and white", "photorealist", "painting"],
... )
[{'score': 0.996, 'label': 'black and white'}, {'score': 0.003, 'label': 'photorealist'}, {'score': 0.0, 'label': 'painting'}]

Learn more about the basics of using a pipeline in the pipeline tutorial

This image classification pipeline can currently be loaded from pipeline() using the following task identifier: "zero-shot-image-classification".

See the list of available models on huggingface.co/models.

__call__

< >

( images: Union **kwargs )

Parameters

  • images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:

    • A string containing a http link pointing to an image
    • A string containing a local path to an image
    • An image loaded in PIL directly
  • candidate_labels (List[str]) — The candidate labels for this image
  • hypothesis_template (str, optional, defaults to "This is a photo of {}") — The sentence used in conjunction with candidate_labels to attempt the image classification by replacing the placeholder with the candidate_labels. The likelihood is then estimated by using logits_per_image.
  • timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.

Assign labels to the image(s) passed as inputs.
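
As a short sketch of the hypothesis_template argument described above, reusing the checkpoint and image URL from the example; the custom template is only an illustration:

from transformers import pipeline

classifier = pipeline(model="google/siglip-so400m-patch14-384")
# The placeholder {} is replaced by each candidate label before scoring.
classifier(
    "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
    hypothesis_template="A photograph containing {}.",
)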

ZeroShotObjectDetectionPipeline

class transformers.ZeroShotObjectDetectionPipeline

< >

( **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • image_processor (BaseImageProcessor) — The image processor that will be used by the pipeline to encode data for the model. This object inherits from BaseImageProcessor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference this is not always beneficial, please read Batching with pipelines.
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, a positive integer will run the model on the associated CUDA device id. You can also pass a native torch.device or a str.
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto").
  • binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as raw output data (e.g., text).

Zero shot object detection pipeline using OwlViTForObjectDetection. This pipeline predicts bounding boxes of objects when you provide an image and a set of candidate_labels.

Example:

>>> from transformers import pipeline

>>> detector = pipeline(model="google/owlvit-base-patch32", task="zero-shot-object-detection")
>>> detector(
...     "http://images.cocodataset.org/val2017/000000039769.jpg",
...     candidate_labels=["cat", "couch"],
... )
[{'score': 0.287, 'label': 'cat', 'box': {'xmin': 324, 'ymin': 20, 'xmax': 640, 'ymax': 373}}, {'score': 0.254, 'label': 'cat', 'box': {'xmin': 1, 'ymin': 55, 'xmax': 315, 'ymax': 472}}, {'score': 0.121, 'label': 'couch', 'box': {'xmin': 4, 'ymin': 0, 'xmax': 642, 'ymax': 476}}]

>>> detector(
...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
...     candidate_labels=["head", "bird"],
... )
[{'score': 0.119, 'label': 'bird', 'box': {'xmin': 71, 'ymin': 170, 'xmax': 410, 'ymax': 508}}]

Learn more about the basics of using a pipeline in the pipeline tutorial

This object detection pipeline can currently be loaded from pipeline() using the following task identifier: "zero-shot-object-detection".

See the list of available models on huggingface.co/models.

__call__

< >

( image: Union candidate_labels: Union = None **kwargs )

Parameters

  • image (str, PIL.Image or List[Dict[str, Any]]) — The pipeline handles three types of images:

    • A string containing an http url pointing to an image
    • A string containing a local path to an image
    • An image loaded in PIL directly

    You can use this parameter to send directly a list of images, or a dataset or a generator like so:
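
    A hedged sketch of that batched form, reusing the checkpoint and image URL from the example above; each dictionary pairs one image with its own candidate labels:

    from transformers import pipeline

    detector = pipeline(model="google/owlvit-base-patch32", task="zero-shot-object-detection")
    # Each input dictionary is processed independently and yields its own list of detections.
    detector(
        [
            {
                "image": "http://images.cocodataset.org/val2017/000000039769.jpg",
                "candidate_labels": ["cat", "couch"],
            },
            {
                "image": "http://images.cocodataset.org/val2017/000000039769.jpg",
                "candidate_labels": ["remote control"],
            },
        ]
    )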

Detect objects (bounding boxes & classes) in the image(s) passed as inputs.

Natural Language Processing

The pipelines available for natural language processing tasks include the following.

ConversationalPipeline

class transformers.Conversation

< >

( messages: Union = None conversation_id: UUID = None **deprecated_kwargs )

Parameters

  • messages (Union[str, List[Dict[str, str]]], optional) — The initial messages to start the conversation, either a string, or a list of dicts containing “role” and “content” keys. If a string is passed, it is interpreted as a single message with the “user” role.
  • conversation_id (uuid.UUID, optional) — Unique identifier for the conversation. If not provided, a random UUID4 id will be assigned to the conversation.

Utility class containing a conversation and its history. This class is meant to be used as an input to the ConversationalPipeline. The conversation contains several utility functions to manage the addition of new user inputs and generated model responses.

Usage:

conversation = Conversation("Going to the movies tonight - any suggestions?")
conversation.add_message({"role": "assistant", "content": "The Big lebowski."})
conversation.add_message({"role": "user", "content": "Is it good?"})

add_user_input

< >

( text: str overwrite: bool = False )

Add a user input to the conversation for the next round. This is a legacy method that assumes that inputs must alternate user/assistant/user/assistant, and so will not add multiple user messages in succession. We recommend just using add_message with role “user” instead.

append_response

< >

( response: str )

This is a legacy method. We recommend just using add_message with an appropriate role instead.

mark_processed

< >

( )

This is a legacy method, as the Conversation no longer distinguishes between processed and unprocessed user input. We set a counter here to keep behaviour mostly backward-compatible, but in general you should just read the messages directly when writing new code.

class transformers.ConversationalPipeline

< >

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference this is not always beneficial, please read Batching with pipelines.
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, a positive integer will run the model on the associated CUDA device id. You can also pass a native torch.device or a str.
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto").
  • binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as raw output data (e.g., text).
  • min_length_for_response (int, optional, defaults to 32) — The minimum length (in number of tokens) for a response.

Multi-turn conversational pipeline.

Example:

>>> from transformers import pipeline, Conversation
# Any model with a chat template can be used in a ConversationalPipeline.

>>> chatbot = pipeline(model="facebook/blenderbot-400M-distill")
>>> # Conversation objects initialized with a string will treat it as a user message
>>> conversation = Conversation("I'm looking for a movie - what's your favourite one?")
>>> conversation = chatbot(conversation)
>>> conversation.messages[-1]["content"]
"I don't really have a favorite movie, but I do like action movies. What about you?"

>>> conversation.add_message({"role": "user", "content": "That's interesting, why do you like action movies?"})
>>> conversation = chatbot(conversation)
>>> conversation.messages[-1]["content"]
" I think it's just because they're so fast-paced and action-fantastic."

Learn more about the basics of using a pipeline in the pipeline tutorial

This conversational pipeline can currently be loaded from pipeline() using the following task identifier: "conversational".

This pipeline can be used with any model that has a chat template set.

__call__

< >

( conversations: Union num_workers = 0 **kwargs ) Conversation or a list of Conversation

Parameters

  • conversations (a Conversation or a list of Conversation) — Conversation to generate responses for. Inputs can also be passed as a list of dictionaries with role and content keys - in this case, they will be converted to Conversation objects automatically. Multiple conversations in either format may be passed as a list.
  • clean_up_tokenization_spaces (bool, optional, defaults to True) — Whether or not to clean up the potential extra spaces in the text output.
  • generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).

Returns

Conversation or a list of Conversation

Conversation(s) with updated generated responses for those containing a new user input.

Generate responses for the conversation(s) given as inputs.
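
As noted above, the chat can also be supplied directly as a list of dictionaries with role and content keys. A brief sketch, reusing the blenderbot checkpoint from the example; the question text is only a placeholder:

from transformers import pipeline

chatbot = pipeline("conversational", model="facebook/blenderbot-400M-distill")
# A plain list of messages is converted to a Conversation object automatically.
result = chatbot([{"role": "user", "content": "What is a good book to read this weekend?"}])
print(result.messages[-1]["content"])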

FillMaskPipeline

class transformers.FillMaskPipeline

< >

( model: Union tokenizer: Optional = None feature_extractor: Optional = None image_processor: Optional = None modelcard: Optional = None framework: Optional = None task: str = '' args_parser: ArgumentHandler = None device: Union = None torch_dtype: Union = None binary_output: bool = False **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference this is not always beneficial, please read Batching with pipelines.
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, a positive integer will run the model on the associated CUDA device id. You can also pass a native torch.device or a str.
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto").
  • binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as raw output data (e.g., text).
  • top_k (int, defaults to 5) — The number of predictions to return.
  • targets (str or List[str], optional) — When passed, the model will limit the scores to the passed targets instead of looking up in the whole vocab. If the provided targets are not in the model vocab, they will be tokenized and the first resulting token will be used (with a warning, and that might be slower).
  • tokenizer_kwargs (dict, optional) — Additional dictionary of keyword arguments passed along to the tokenizer.

Masked language modeling prediction pipeline using any ModelWithLMHead. See the masked language modeling examples for more information.

Example:

>>> from transformers import pipeline

>>> fill_masker = pipeline(model="google-bert/bert-base-uncased")
>>> fill_masker("This is a simple [MASK].")
[{'score': 0.042, 'token': 3291, 'token_str': 'problem', 'sequence': 'this is a simple problem.'}, {'score': 0.031, 'token': 3160, 'token_str': 'question', 'sequence': 'this is a simple question.'}, {'score': 0.03, 'token': 8522, 'token_str': 'equation', 'sequence': 'this is a simple equation.'}, {'score': 0.027, 'token': 2028, 'token_str': 'one', 'sequence': 'this is a simple one.'}, {'score': 0.024, 'token': 3627, 'token_str': 'rule', 'sequence': 'this is a simple rule.'}]

Learn more about the basics of using a pipeline in the pipeline tutorial

This mask filling pipeline can currently be loaded from pipeline() using the following task identifier: "fill-mask".

The models that this pipeline can use are models that have been trained with a masked language modeling objective, which includes the bi-directional models in the library. See the up-to-date list of available models on huggingface.co/models.

This pipeline only works for inputs with exactly one token masked. Experimental: We added support for multiple masks. The returned values are raw model output, and correspond to disjoint probabilities where one might expect joint probabilities (See discussion).

This pipeline now supports tokenizer_kwargs. For example try:

>>> from transformers import pipeline

>>> fill_masker = pipeline(model="google-bert/bert-base-uncased")
>>> tokenizer_kwargs = {"truncation": True}
>>> fill_masker(
...     "This is a simple [MASK]. " + "...with a large amount of repeated text appended. " * 100,
...     tokenizer_kwargs=tokenizer_kwargs,
... )

__call__

< >

( inputs *args **kwargs ) A list or a list of list of dict

Parameters

  • args (str or List[str]) — One or several texts (or one list of prompts) with masked tokens.
  • targets (str or List[str], optional) — When passed, the model will limit the scores to the passed targets instead of looking up in the whole vocab. If the provided targets are not in the model vocab, they will be tokenized and the first resulting token will be used (with a warning, and that might be slower).
  • top_k (int, optional) — When passed, overrides the number of predictions to return.

Returns

A list or a list of list of dict

Each result comes as list of dictionaries with the following keys:

  • sequence (str) — The corresponding input with the mask token prediction.
  • score (float) — The corresponding probability.
  • token (int) — The predicted token id (to replace the masked one).
  • token_str (str) — The predicted token (to replace the masked one).

Fill the masked token in the text(s) given as inputs.
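
A short sketch of the targets and top_k arguments described above, reusing the checkpoint from the example; the sentence and target words are placeholders:

from transformers import pipeline

fill_masker = pipeline(model="google-bert/bert-base-uncased")
# Score only the two candidate words instead of the whole vocabulary.
fill_masker("The capital of France is [MASK].", targets=["paris", "london"], top_k=2)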

NerPipeline

class transformers.TokenClassificationPipeline

< >

( args_parser = <transformers.pipelines.token_classification.TokenClassificationArgumentHandler object at 0x7f0d46e5d270> *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference this is not always beneficial, please read Batching with pipelines.
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, a positive integer will run the model on the associated CUDA device id. You can also pass a native torch.device or a str.
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto").
  • binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as raw output data (e.g., text).
  • ignore_labels (List[str], defaults to ["O"]) — A list of labels to ignore.
  • grouped_entities (bool, optional, defaults to False) — DEPRECATED, use aggregation_strategy instead. Whether or not to group the tokens corresponding to the same entity together in the predictions or not.
  • stride (int, optional) — If stride is provided, the pipeline is applied on all the text. The text is split into chunks of size model_max_length. Works only with fast tokenizers and aggregation_strategy different from NONE. The value of this argument defines the number of overlapping tokens between chunks. In other words, the model will shift forward by tokenizer.model_max_length - stride tokens each step.
  • aggregation_strategy (str, optional, defaults to "none") — The strategy to fuse (or not) tokens based on the model prediction.

    • "none": Will not do any aggregation and simply return the raw results from the model.
    • "simple": Will attempt to group entities following the default schema. (A, B-TAG), (B, I-TAG), (C, I-TAG), (D, B-TAG2) (E, B-TAG2) will end up being [{"word": ABC, "entity": "TAG"}, {"word": "D", "entity": "TAG2"}, {"word": "E", "entity": "TAG2"}]. Notice that two consecutive B tags will end up as different entities. On word-based languages, we might end up splitting words undesirably: imagine Microsoft being tagged as [{"word": "Micro", "entity": "ENTERPRISE"}, {"word": "soft", "entity": "NAME"}]. Look at FIRST, MAX, AVERAGE for ways to mitigate that and disambiguate words (on languages that support that meaning, which is basically tokens separated by a space). These mitigations will only work on real words; "New york" might still be tagged with two different entities.
    • "first": (works only on word-based models) Will use the SIMPLE strategy except that words cannot end up with different tags. Words will simply use the tag of the first token of the word when there is ambiguity.
    • "average": (works only on word-based models) Will use the SIMPLE strategy except that words cannot end up with different tags. Scores will be averaged first across tokens, and then the maximum label is applied.
    • "max": (works only on word-based models) Will use the SIMPLE strategy except that words cannot end up with different tags. The word entity will simply be the token with the maximum score.

Named Entity Recognition pipeline using any ModelForTokenClassification. See the named entity recognition examples for more information.

Example:

>>> from transformers import pipeline

>>> token_classifier = pipeline(model="Jean-Baptiste/camembert-ner", aggregation_strategy="simple")
>>> sentence = "Je m'appelle jean-baptiste et je vis à montréal"
>>> tokens = token_classifier(sentence)
>>> tokens
[{'entity_group': 'PER', 'score': 0.9931, 'word': 'jean-baptiste', 'start': 12, 'end': 26}, {'entity_group': 'LOC', 'score': 0.998, 'word': 'montréal', 'start': 38, 'end': 47}]

>>> token = tokens[0]
>>> # Start and end provide an easy way to highlight words in the original text.
>>> sentence[token["start"] : token["end"]]
' jean-baptiste'

>>> # Some models use the same idea to do part of speech.
>>> syntaxer = pipeline(model="vblagoje/bert-english-uncased-finetuned-pos", aggregation_strategy="simple")
>>> syntaxer("My name is Sarah and I live in London")
[{'entity_group': 'PRON', 'score': 0.999, 'word': 'my', 'start': 0, 'end': 2}, {'entity_group': 'NOUN', 'score': 0.997, 'word': 'name', 'start': 3, 'end': 7}, {'entity_group': 'AUX', 'score': 0.994, 'word': 'is', 'start': 8, 'end': 10}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'sarah', 'start': 11, 'end': 16}, {'entity_group': 'CCONJ', 'score': 0.999, 'word': 'and', 'start': 17, 'end': 20}, {'entity_group': 'PRON', 'score': 0.999, 'word': 'i', 'start': 21, 'end': 22}, {'entity_group': 'VERB', 'score': 0.998, 'word': 'live', 'start': 23, 'end': 27}, {'entity_group': 'ADP', 'score': 0.999, 'word': 'in', 'start': 28, 'end': 30}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'london', 'start': 31, 'end': 37}]

Learn more about the basics of using a pipeline in the pipeline tutorial

This token recognition pipeline can currently be loaded from pipeline() using the following task identifier: "ner" (for predicting the classes of tokens in a sequence: person, organisation, location or miscellaneous).

The models that this pipeline can use are models that have been fine-tuned on a token classification task. See the up-to-date list of available models on huggingface.co/models.

aggregate_words

< >

( entities: List aggregation_strategy: AggregationStrategy )

Override tokens from a given word that disagree to force agreement on word boundaries.

Example: micro|soft| com|pany| B-ENT I-NAME I-ENT I-ENT will be rewritten with first strategy as microsoft| company| B-ENT I-ENT

gather_pre_entities

< >

( sentence: str input_ids: ndarray scores: ndarray offset_mapping: Optional special_tokens_mask: ndarray aggregation_strategy: AggregationStrategy )

Fuse various numpy arrays into dicts with all the information needed for aggregation

group_entities

< >

( entities: List )

Parameters

  • entities (dict) — The entities predicted by the pipeline.

Find and group together the adjacent tokens with the same entity predicted.

group_sub_entities

< >

( entities: List )

Parameters

  • entities (dict) — The entities predicted by the pipeline.

Group together the adjacent tokens with the same entity predicted.

See TokenClassificationPipeline for all details.

QuestionAnsweringPipeline

class transformers.QuestionAnsweringPipeline

< >

( model: Union tokenizer: PreTrainedTokenizer modelcard: Optional = None framework: Optional = None task: str = '' **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference this is not always beneficial, please read Batching with pipelines.
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, a positive integer will run the model on the associated CUDA device id. You can also pass a native torch.device or a str.
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto").
  • binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as raw output data (e.g., text).

Question Answering pipeline using any ModelForQuestionAnswering. See the question answering examples for more information.

Example:

>>> from transformers import pipeline

>>> oracle = pipeline(model="deepset/roberta-base-squad2")
>>> oracle(question="Where do I live?", context="My name is Wolfgang and I live in Berlin")
{'score': 0.9191, 'start': 34, 'end': 40, 'answer': 'Berlin'}

Learn more about the basics of using a pipeline in the pipeline tutorial

This question answering pipeline can currently be loaded from pipeline() using the following task identifier: "question-answering".

The models that this pipeline can use are models that have been fine-tuned on a question answering task. See the up-to-date list of available models on huggingface.co/models.

__call__

< >

( *args **kwargs ) A dict or a list of dict

Parameters

  • args (SquadExample or a list of SquadExample) — One or several SquadExample containing the question and context.
  • X (SquadExample or a list of SquadExample, optional) — One or several SquadExample containing the question and context (will be treated the same way as if passed as the first positional argument).
  • data (SquadExample or a list of SquadExample, optional) — One or several SquadExample containing the question and context (will be treated the same way as if passed as the first positional argument).
  • question (str or List[str]) — One or several question(s) (must be used in conjunction with the context argument).
  • context (str or List[str]) — One or several context(s) associated with the question(s) (must be used in conjunction with the question argument).
  • topk (int, optional, defaults to 1) — The number of answers to return (will be chosen by order of likelihood). Note that we return less than topk answers if there are not enough options available within the context.
  • doc_stride (int, optional, defaults to 128) — If the context is too long to fit with the question for the model, it will be split in several chunks with some overlap. This argument controls the size of that overlap.
  • max_answer_len (int, optional, defaults to 15) — The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
  • max_seq_len (int, optional, defaults to 384) — The maximum length of the total sentence (context + question) in tokens of each chunk passed to the model. The context will be split in several chunks (using doc_stride as overlap) if needed.
  • max_question_len (int, optional, defaults to 64) — The maximum length of the question after tokenization. It will be truncated if needed.
  • handle_impossible_answer (bool, optional, defaults to False) — Whether or not we accept impossible as an answer.
  • align_to_words (bool, optional, defaults to True) — Attempts to align the answer to real words. Improves quality on space-separated languages. Might hurt on non-space-separated languages (like Japanese or Chinese).

Returns

A dict or a list of dict

Each result comes as a dictionary with the following keys:

  • score (float) — The probability associated to the answer.
  • start (int) — The character start index of the answer (in the tokenized version of the input).
  • end (int) — The character end index of the answer (in the tokenized version of the input).
  • answer (str) — The answer to the question.

Answer the question(s) given as inputs by using the context(s).
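
A brief sketch of a batched call, reusing the checkpoint from the example above; one answer dictionary is returned per question/context pair:

from transformers import pipeline

oracle = pipeline(model="deepset/roberta-base-squad2")
questions = ["Where do I live?", "What is my name?"]
contexts = ["My name is Wolfgang and I live in Berlin"] * 2
# Each question is matched with the context at the same position.
oracle(question=questions, context=contexts)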

create_sample

< >

( question: Union context: Union ) One or a list of SquadExample

Parameters

  • question (str or List[str]) — The question(s) asked.
  • context (str or List[str]) — The context(s) in which we will look for the answer.

Returns

One or a list of SquadExample

The corresponding SquadExample grouping question and context.

QuestionAnsweringPipeline leverages the SquadExample internally. This helper method encapsulates all the logic for converting question(s) and context(s) to SquadExample.

We currently support extractive question answering.

span_to_answer

< >

( text: str start: int end: int ) Dictionary like `{'answer': str, 'start': int, 'end': int}`

Parameters

  • text (str) — The actual context to extract the answer from.
  • start (int) — The answer starting token index.
  • end (int) — The answer end token index.

Returns

Dictionary like `{'answer': str, 'start': int, 'end': int}`

When decoding from token probabilities, this method maps token indexes to the actual words in the initial context.

SummarizationPipeline

class transformers.SummarizationPipeline

< >

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference this is not always beneficial, please read Batching with pipelines.
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, a positive integer will run the model on the associated CUDA device id. You can also pass a native torch.device or a str.
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto").
  • binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as raw output data (e.g., text).

Summarize news articles and other documents.

This summarizing pipeline can currently be loaded from pipeline() using the following task identifier: "summarization".

The models that this pipeline can use are models that have been fine-tuned on a summarization task, which currently includes 'bart-large-cnn', 'google-t5/t5-small', 'google-t5/t5-base', 'google-t5/t5-large', 'google-t5/t5-3b' and 'google-t5/t5-11b'. See the up-to-date list of available models on huggingface.co/models. For a list of available parameters, see the following documentation.

Usage:

# use bart in pytorch
summarizer = pipeline("summarization")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)

# use t5 in tf
summarizer = pipeline("summarization", model="google-t5/t5-base", tokenizer="google-t5/t5-base", framework="tf")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)

__call__

< >

( *args **kwargs ) A list or a list of list of dict

Parameters

  • documents (str or List[str]) — One or several articles (or one list of articles) to summarize.
  • return_text (bool, optional, defaults to True) — Whether or not to include the decoded texts in the outputs.
  • return_tensors (bool, optional, defaults to False) — Whether or not to include the tensors of predictions (as token indices) in the outputs.
  • clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the potential extra spaces in the text output.
  • generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).

Returns

A list or a list of list of dict

Each result comes as a dictionary with the following keys:

  • summary_text (str, present when return_text=True) — The summary of the corresponding input.
  • summary_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) — The token ids of the summary.

Summarize the text(s) given as inputs.
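
A minimal sketch of summarizing several documents in one call with the default checkpoint; the article strings are placeholders and min_length/max_length are counted in tokens:

from transformers import pipeline

summarizer = pipeline("summarization")
articles = [
    "First placeholder article text ...",
    "Second placeholder article text ...",
]
# One dictionary with a summary_text key is returned per input article.
summarizer(articles, min_length=5, max_length=20)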

TableQuestionAnsweringPipeline

class transformers.TableQuestionAnsweringPipeline

< >

( args_parser = <transformers.pipelines.table_question_answering.TableQuestionAnsweringArgumentHandler object at 0x7f0d46f904f0> *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference this is not always beneficial, please read Batching with pipelines.
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, a positive integer will run the model on the associated CUDA device id. You can also pass a native torch.device or a str.
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto").
  • binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as raw output data (e.g., text).

Table Question Answering pipeline using a ModelForTableQuestionAnswering. This pipeline is only available in PyTorch.

Example:

>>> from transformers import pipeline

>>> oracle = pipeline(model="google/tapas-base-finetuned-wtq")
>>> table = {
...     "Repository": ["Transformers", "Datasets", "Tokenizers"],
...     "Stars": ["36542", "4512", "3934"],
...     "Contributors": ["651", "77", "34"],
...     "Programming language": ["Python", "Python", "Rust, Python and NodeJS"],
... }
>>> oracle(query="How many stars does the transformers repository have?", table=table)
{'answer': 'AVERAGE > 36542', 'coordinates': [(0, 1)], 'cells': ['36542'], 'aggregator': 'AVERAGE'}

Learn more about the basics of using a pipeline in the pipeline tutorial

This tabular question answering pipeline can currently be loaded from pipeline() using the following task identifier: "table-question-answering".

The models that this pipeline can use are models that have been fine-tuned on a tabular question answering task. See the up-to-date list of available models on huggingface.co/models.

__call__

< >

( *args **kwargs ) A dictionary or a list of dictionaries containing results

Parameters

  • table (pd.DataFrame or Dict) — Pandas DataFrame or dictionary that will be converted to a DataFrame containing all the table values. See above for an example of dictionary.
  • query (str or List[str]) — Query or list of queries that will be sent to the model alongside the table.
  • sequential (bool, optional, defaults to False) — Whether to do inference sequentially or as a batch. Batching is faster, but models like SQA require the inference to be done sequentially to extract relations within sequences, given their conversational nature.
  • padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values:

    • True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
    • 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
    • False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
  • truncation (bool, str or TapasTruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values:

    • True or 'drop_rows_to_fit': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate row by row, removing rows from the table.
    • False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).

Returns

A dictionary or a list of dictionaries containing results

Each result is a dictionary with the following keys:

  • answer (str) — The answer of the query given the table. If there is an aggregator, the answer will be preceded by AGGREGATOR >.
  • coordinates (List[Tuple[int, int]]) — Coordinates of the cells of the answers.
  • cells (List[str]) — List of strings made up of the answer cell values.
  • aggregator (str) — If the model has an aggregator, this returns the aggregator.

Answers queries according to a table. The pipeline accepts several types of inputs which are detailed below:

  • pipeline(table, query)
  • pipeline(table, [query])
  • pipeline(table=table, query=query)
  • pipeline(table=table, query=[query])
  • pipeline({"table": table, "query": query})
  • pipeline({"table": table, "query": [query]})
  • pipeline([{"table": table, "query": query}, {"table": table, "query": query}])

The table argument should be a dict or a DataFrame built from that dict, containing the whole table:

Example:

data = {
    "actors": ["brad pitt", "leonardo di caprio", "george clooney"],
    "age": ["56", "45", "59"],
    "number of movies": ["87", "53", "69"],
    "date of birth": ["7 february 1967", "10 june 1996", "28 november 1967"],
}

This dictionary can be passed in as such, or can be converted to a pandas DataFrame:

Example:

import pandas as pd

table = pd.DataFrame.from_dict(data)
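
Putting the pieces above together, a short sketch that queries the DataFrame with several questions at once, using the checkpoint from the example; the queries themselves are only placeholders:

from transformers import pipeline
import pandas as pd

data = {
    "actors": ["brad pitt", "leonardo di caprio", "george clooney"],
    "age": ["56", "45", "59"],
    "number of movies": ["87", "53", "69"],
}
table = pd.DataFrame.from_dict(data)

table_qa = pipeline(model="google/tapas-base-finetuned-wtq")
# One result dictionary is returned per query.
table_qa(table=table, query=["how many movies does brad pitt have?", "who is the oldest actor?"])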

TextClassificationPipeline

class transformers.TextClassificationPipeline

< >

( **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference this is not always beneficial, please read Batching with pipelines.
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, a positive integer will run the model on the associated CUDA device id. You can also pass a native torch.device or a str.
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto").
  • binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as raw output data (e.g., text).
  • return_all_scores (bool, optional, defaults to False) — Whether to return all prediction scores or just the one of the predicted class.
  • function_to_apply (str, optional, defaults to "default") — The function to apply to the model outputs in order to retrieve the scores. Accepts four different values:

    • "default": if the model has a single label, will apply the sigmoid function on the output. If the model has several labels, will apply the softmax function on the output.
    • "sigmoid": Applies the sigmoid function on the output.
    • "softmax": Applies the softmax function on the output.
    • "none": Does not apply any function on the output.

Text classification pipeline using any ModelForSequenceClassification. See the sequence classification examples for more information.

Example:

>>> from transformers import pipeline

>>> classifier = pipeline(model="distilbert/distilbert-base-uncased-finetuned-sst-2-english")
>>> classifier("This movie is disgustingly good !")
[{'label': 'POSITIVE', 'score': 1.0}]

>>> classifier("Director tried too much.")
[{'label': 'NEGATIVE', 'score': 0.996}]

Learn more about the basics of using a pipeline in the pipeline tutorial

This text classification pipeline can currently be loaded from pipeline() using the following task identifier: "sentiment-analysis" (for classifying sequences according to positive or negative sentiments).

If multiple classification labels are available (model.config.num_labels >= 2), the pipeline will run a softmax over the results. If there is a single label, the pipeline will run a sigmoid over the result.

The models that this pipeline can use are models that have been fine-tuned on a sequence classification task. See the up-to-date list of available models on huggingface.co/models.

__call__

< >

( inputs **kwargs ) A list or a list of list of dict

Parameters

  • inputs (str or List[str] or Dict[str], or List[Dict[str]]) — One or several texts to classify. In order to use text pairs for your classification, you can send a dictionary containing {"text", "text_pair"} keys, or a list of those.
  • top_k (int, optional, defaults to 1) — How many results to return.
  • function_to_apply (str, optional, defaults to "default") — The function to apply to the model outputs in order to retrieve the scores. Accepts four different values:

    If this argument is not specified, then it will apply the following functions according to the number of labels:

    • If the model has a single label, will apply the sigmoid function on the output.
    • If the model has several labels, will apply the softmax function on the output.

    Possible values are:

    • "sigmoid": Applies the sigmoid function on the output.
    • "softmax": Applies the softmax function on the output.
    • "none": Does not apply any function on the output.

Returns

A list or a list of list of dict

Each result comes as list of dictionaries with the following keys:

  • label (str) — The label predicted.
  • score (float) — The corresponding probability.

If top_k is used, one such dictionary is returned per label.

Classify the text(s) given as inputs.
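
A hedged sketch of the text-pair input format mentioned above, passing a dictionary with text and text_pair keys; the NLI checkpoint and sentences are only illustrations, and the model needs to have been trained on sentence pairs for the scores to be meaningful:

from transformers import pipeline

pair_classifier = pipeline(model="FacebookAI/roberta-large-mnli")
# A dict with "text" and "text_pair" keys is treated as a single sentence pair.
pair_classifier({"text": "A soccer game with multiple males playing.", "text_pair": "Some men are playing a sport."})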

TextGenerationPipeline

class transformers.TextGenerationPipeline

< >

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference this is not always beneficial, please read Batching with pipelines.
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, a positive integer will run the model on the associated CUDA device id. You can also pass a native torch.device or a str.
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto").
  • binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a serialized format (i.e., pickle) or as raw output data (e.g., text).

Language generation pipeline using any ModelWithLMHead. This pipeline predicts the words that will follow a specified text prompt. It can also accept one or more chats. Each chat takes the form of a list of dicts, where each dict contains “role” and “content” keys.

Example:

>>> from transformers import pipeline

>>> generator = pipeline(model="openai-community/gpt2")
>>> generator("I can't believe you did such a ", do_sample=False)
[{'generated_text': "I can't believe you did such a icky thing to me. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I"}]

>>> # These parameters will return several suggestions, and only the newly created text, making it easier to use for prompting.
>>> outputs = generator("My tart needs some", num_return_sequences=4, return_full_text=False)

Learn more about the basics of using a pipeline in the pipeline tutorial. You can pass text generation parameters to this pipeline to control stopping criteria, decoding strategy, and more. Learn more about text generation parameters in Text generation strategies and Text generation.

This language generation pipeline can currently be loaded from pipeline() using the following task identifier: "text-generation".

The models that this pipeline can use are models that have been trained with an autoregressive language modeling objective, which includes the uni-directional models in the library (e.g. openai-community/gpt2). See the list of available models on huggingface.co/models.
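
Because the pipeline also accepts chats, a brief sketch of a chat-style call is shown below; HuggingFaceH4/zephyr-7b-beta is only an example of a checkpoint with a chat template and can be swapped for any other such model:

from transformers import pipeline

chat_generator = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Explain what a pipeline is in one sentence."},
]
# The tokenizer's chat template is applied to the messages before generation.
chat_generator(messages, max_new_tokens=40, do_sample=False)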

__call__

< >

( text_inputs **kwargs ) A list or a list of list of dict

Parameters

  • text_inputs (str or List[str]) — One or several prompts (or one list of prompts) to complete.
  • return_tensors (bool, optional, defaults to False) — Whether or not to return the tensors of predictions (as token indices) in the outputs. If set to True, the decoded text is not returned.
  • return_text (bool, optional, defaults to True) — Whether or not to return the decoded texts in the outputs.
  • return_full_text (bool, optional, defaults to True) — If set to False only added text is returned, otherwise the full text is returned. Only meaningful if return_text is set to True.
  • clean_up_tokenization_spaces (bool, optional, defaults to True) — Whether or not to clean up the potential extra spaces in the text output.
  • prefix (str, optional) — Prefix added to prompt.
  • handle_long_generation (str, optional) — By default, this pipeline does not handle long generation (ones that exceed the model maximum length in one form or another). There is no perfect way to address this (more info: https://github.com/huggingface/transformers/issues/14033#issuecomment-948385227). This provides common strategies to work around that problem depending on your use case.

    • None: default strategy where nothing in particular happens
    • "hole": Truncates left of input, and leaves a gap wide enough to let generation happen (might truncate a lot of the prompt and not suitable when generation exceeds the model capacity)
  • generate_kwargs (dict, optional) — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).

Returns

A list or a list of list of dict

Returns one of the following dictionaries (cannot return a combination of both generated_text and generated_token_ids):

  • generated_text (str, present when return_text=True) — The generated text.
  • generated_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) — The token ids of the generated text.

Complete the prompt(s) given as inputs.

Text2TextGenerationPipeline

class transformers.Text2TextGenerationPipeline

< >

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a serialized format (i.e., pickle) or as the raw output data e.g. text.

Pipeline for text to text generation using seq2seq models.

Example:

>>> from transformers import pipeline

>>> generator = pipeline(model="mrm8488/t5-base-finetuned-question-generation-ap")
>>> generator(
...     "answer: Manuel context: Manuel has created RuPERTa-base with the support of HF-Transformers and Google"
... )
[{'generated_text': 'question: Who created the RuPERTa-base?'}]

Learn more about the basics of using a pipeline in the pipeline tutorial. You can pass text generation parameters to this pipeline to control stopping criteria, decoding strategy, and more. Learn more about text generation parameters in Text generation strategies and Text generation.

This Text2TextGenerationPipeline pipeline can currently be loaded from pipeline() using the following task identifier: "text2text-generation".

The models that this pipeline can use are models that have been fine-tuned on a sequence-to-sequence (text-to-text) task. See the up-to-date list of available models on huggingface.co/models. For a list of available parameters, see the following documentation.

Usage:

text2text_generator = pipeline("text2text-generation")
text2text_generator("question: What is 42 ? context: 42 is the answer to life, the universe and everything")

__call__

< >

( *args **kwargs ) A list or a list of list of dict

Parameters

  • args (str or List[str]) — Input text for the encoder.
  • return_tensors (bool, optional, defaults to False) — Whether or not to include the tensors of predictions (as token indices) in the outputs.
  • return_text (bool, optional, defaults to True) — Whether or not to include the decoded texts in the outputs.
  • clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the potential extra spaces in the text output.
  • truncation (TruncationStrategy, optional, defaults to TruncationStrategy.DO_NOT_TRUNCATE) — The truncation strategy for the tokenization within the pipeline. TruncationStrategy.DO_NOT_TRUNCATE (default) will never truncate, but it is sometimes desirable to truncate the input to fit the model’s max_length instead of throwing an error down the line.
  • generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).

Returns

A list or a list of list of dict

Each result comes as a dictionary with the following keys:

  • generated_text (str, present when return_text=True) — The generated text.
  • generated_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) — The token ids of the generated text.

Generate the output text(s) using text(s) given as inputs.
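For illustration, here is a minimal sketch of these call parameters, assuming the google-t5/t5-small checkpoint as a seq2seq example; the input and generation settings are placeholders.

from transformers import pipeline
from transformers.tokenization_utils_base import TruncationStrategy

text2text = pipeline("text2text-generation", model="google-t5/t5-small")

# truncation controls tokenization inside the pipeline; max_new_tokens and num_beams
# are forwarded to the model's `generate` method.
outputs = text2text(
    "translate English to German: The table is round.",
    truncation=TruncationStrategy.ONLY_FIRST,
    max_new_tokens=40,
    num_beams=4,
)
print(outputs[0]["generated_text"])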

check_inputs

< >

( input_length: int min_length: int max_length: int )

Checks whether there might be something wrong with given input with regard to the model.

TokenClassificationPipeline

class transformers.TokenClassificationPipeline

< >

( args_parser = <transformers.pipelines.token_classification.TokenClassificationArgumentHandler object> *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a serialized format (i.e., pickle) or as the raw output data e.g. text.
  • ignore_labels (List[str], defaults to ["O"]) — A list of labels to ignore.
  • grouped_entities (bool, optional, defaults to False) — DEPRECATED, use aggregation_strategy instead. Whether or not to group the tokens corresponding to the same entity together in the predictions or not.
  • stride (int, optional) — If stride is provided, the pipeline is applied on all the text. The text is split into chunks of size model_max_length. Works only with fast tokenizers and aggregation_strategy different from NONE. The value of this argument defines the number of overlapping tokens between chunks. In other words, the model will shift forward by tokenizer.model_max_length - stride tokens each step.
  • aggregation_strategy (str, optional, defaults to "none") — The strategy to fuse (or not) tokens based on the model prediction.

    • “none” : Will not do any aggregation and will simply return the raw results from the model
    • “simple” : Will attempt to group entities following the default schema. (A, B-TAG), (B, I-TAG), (C, I-TAG), (D, B-TAG2) (E, B-TAG2) will end up being [{“word”: ABC, “entity”: “TAG”}, {“word”: “D”, “entity”: “TAG2”}, {“word”: “E”, “entity”: “TAG2”}] Notice that two consecutive B tags will end up as different entities. On word-based languages, we might end up splitting words undesirably: imagine Microsoft being tagged as [{“word”: “Micro”, “entity”: “ENTERPRISE”}, {“word”: “soft”, “entity”: “NAME”}]. Look for FIRST, MAX, AVERAGE for ways to mitigate that and disambiguate words (on languages that support that meaning, which is basically tokens separated by a space). These mitigations will only work on real words; “New york” might still be tagged with two different entities.
    • “first” : (works only on word-based models) Will use the SIMPLE strategy except that words cannot end up with different tags. Words will simply use the tag of the first token of the word when there is ambiguity.
    • “average” : (works only on word-based models) Will use the SIMPLE strategy except that words cannot end up with different tags. Scores will be averaged first across tokens, and then the maximum label is applied.
    • “max” : (works only on word-based models) Will use the SIMPLE strategy except that words cannot end up with different tags. The word entity will simply be the token with the maximum score.

Named Entity Recognition pipeline using any ModelForTokenClassification. See the named entity recognition examples for more information.

Example:

>>> from transformers import pipeline

>>> token_classifier = pipeline(model="Jean-Baptiste/camembert-ner", aggregation_strategy="simple")
>>> sentence = "Je m'appelle jean-baptiste et je vis à montréal"
>>> tokens = token_classifier(sentence)
>>> tokens
[{'entity_group': 'PER', 'score': 0.9931, 'word': 'jean-baptiste', 'start': 12, 'end': 26}, {'entity_group': 'LOC', 'score': 0.998, 'word': 'montréal', 'start': 38, 'end': 47}]

>>> token = tokens[0]
>>> # Start and end provide an easy way to highlight words in the original text.
>>> sentence[token["start"] : token["end"]]
' jean-baptiste'

>>> # Some models use the same idea to do part of speech.
>>> syntaxer = pipeline(model="vblagoje/bert-english-uncased-finetuned-pos", aggregation_strategy="simple")
>>> syntaxer("My name is Sarah and I live in London")
[{'entity_group': 'PRON', 'score': 0.999, 'word': 'my', 'start': 0, 'end': 2}, {'entity_group': 'NOUN', 'score': 0.997, 'word': 'name', 'start': 3, 'end': 7}, {'entity_group': 'AUX', 'score': 0.994, 'word': 'is', 'start': 8, 'end': 10}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'sarah', 'start': 11, 'end': 16}, {'entity_group': 'CCONJ', 'score': 0.999, 'word': 'and', 'start': 17, 'end': 20}, {'entity_group': 'PRON', 'score': 0.999, 'word': 'i', 'start': 21, 'end': 22}, {'entity_group': 'VERB', 'score': 0.998, 'word': 'live', 'start': 23, 'end': 27}, {'entity_group': 'ADP', 'score': 0.999, 'word': 'in', 'start': 28, 'end': 30}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'london', 'start': 31, 'end': 37}]

Learn more about the basics of using a pipeline in the pipeline tutorial

This token recognition pipeline can currently be loaded from pipeline() using the following task identifier: "ner" (for predicting the classes of tokens in a sequence: person, organisation, location or miscellaneous).

The models that this pipeline can use are models that have been fine-tuned on a token classification task. See the up-to-date list of available models on huggingface.co/models.

__call__

< >

( inputs: Union **kwargs ) A list or a list of list of dict

Parameters

  • inputs (str or List[str]) — One or several texts (or one list of texts) for token classification.

Returns

A list or a list of list of dict

Each result comes as a list of dictionaries (one for each token in the corresponding input, or each entity if this pipeline was instantiated with an aggregation_strategy) with the following keys:

  • word (str) — The token/word classified. This is obtained by decoding the selected tokens. If you want to have the exact string in the original sentence, use start and end.
  • score (float) — The corresponding probability for entity.
  • entity (str) — The entity predicted for that token/word (it is named entity_group when aggregation_strategy is not "none").
  • index (int, only present when aggregation_strategy="none") — The index of the corresponding token in the sentence.
  • start (int, optional) — The index of the start of the corresponding entity in the sentence. Only exists if the offsets are available within the tokenizer
  • end (int, optional) — The index of the end of the corresponding entity in the sentence. Only exists if the offsets are available within the tokenizer

Classify each token of the text(s) given as inputs.
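As a complement to the example above, here is a minimal sketch showing an aggregation strategy together with the start/end offsets from the return keys; the checkpoint (dslim/bert-base-NER) and sentence are illustrative.

from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="first")

# With an aggregation strategy other than "none", results use the "entity_group" key,
# and start/end can be used to slice the original sentence.
sentence = "Hugging Face is based in New York City"
for entity in ner(sentence):
    print(entity["entity_group"], sentence[entity["start"] : entity["end"]], round(float(entity["score"]), 3))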

aggregate_words

< >

( entities: List aggregation_strategy: AggregationStrategy )

Override tokens from a given word that disagree to force agreement on word boundaries.

Example: micro|soft| com|pany| B-ENT I-NAME I-ENT I-ENT will be rewritten with first strategy as microsoft| company| B-ENT I-ENT

gather_pre_entities

< >

( sentence: str input_ids: ndarray scores: ndarray offset_mapping: Optional special_tokens_mask: ndarray aggregation_strategy: AggregationStrategy )

Fuse various numpy arrays into dicts with all the information needed for aggregation

group_entities

< >

( entities: List )

Parameters

  • entities (dict) — The entities predicted by the pipeline.

Find and group together the adjacent tokens with the same entity predicted.

group_sub_entities

< >

( entities: List )

Parameters

  • entities (dict) — The entities predicted by the pipeline.

Group together the adjacent tokens with the same entity predicted.

TranslationPipeline

class transformers.TranslationPipeline

< >

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a serialized format (i.e., pickle) or as the raw output data e.g. text.

Translates from one language to another.

This translation pipeline can currently be loaded from pipeline() using the following task identifier: "translation_xx_to_yy".

The models that this pipeline can use are models that have been fine-tuned on a translation task. See the up-to-date list of available models on huggingface.co/models. For a list of available parameters, see the following documentation.

Usage:

en_fr_translator = pipeline("translation_en_to_fr")
en_fr_translator("How old are you?")

__call__

< >

( *args **kwargs ) A list or a list of list of dict

Parameters

  • args (str or List[str]) — Texts to be translated.
  • return_tensors (bool, optional, defaults to False) — Whether or not to include the tensors of predictions (as token indices) in the outputs.
  • return_text (bool, optional, defaults to True) — Whether or not to include the decoded texts in the outputs.
  • clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the potential extra spaces in the text output.
  • src_lang (str, optional) — The language of the input. Might be required for multilingual models. Will not have any effect for single pair translation models.
  • tgt_lang (str, optional) — The language of the desired output. Might be required for multilingual models. Will not have any effect for single pair translation models.
  • generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).

Returns

A list or a list of list of dict

Each result comes as a dictionary with the following keys:

  • translation_text (str, present when return_text=True) — The translation.
  • translation_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) — The token ids of the translation.

Translate the text(s) given as inputs.
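For illustration, here is a minimal sketch of src_lang/tgt_lang with a multilingual checkpoint; the model (facebook/nllb-200-distilled-600M) and language codes are illustrative, and single-pair models ignore these arguments.

from transformers import pipeline

translator = pipeline("translation", model="facebook/nllb-200-distilled-600M")
outputs = translator("How old are you?", src_lang="eng_Latn", tgt_lang="fra_Latn")
print(outputs[0]["translation_text"])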

ZeroShotClassificationPipeline

class transformers.ZeroShotClassificationPipeline

< >

( args_parser = <transformers.pipelines.zero_shot_classification.ZeroShotClassificationArgumentHandler object> *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a serialized format (i.e., pickle) or as the raw output data e.g. text.

NLI-based zero-shot classification pipeline using a ModelForSequenceClassification trained on NLI (natural language inference) tasks. Equivalent of text-classification pipelines, but these models don’t require a hardcoded number of potential classes; they can be chosen at runtime. This usually means it is slower, but much more flexible.

Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis pair and passed to the pretrained model. Then, the logit for entailment is taken as the logit for the candidate label being valid. Any NLI model can be used, but the id of the entailment label must be included in the model config’s label2id (PretrainedConfig.label2id).

Example:

>>> from transformers import pipeline

>>> oracle = pipeline(model="facebook/bart-large-mnli")
>>> oracle(
...     "I have a problem with my iphone that needs to be resolved asap!!",
...     candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]}

>>> oracle(
...     "I have a problem with my iphone that needs to be resolved asap!!",
...     candidate_labels=["english", "german"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['english', 'german'], 'scores': [0.814, 0.186]}

Learn more about the basics of using a pipeline in the pipeline tutorial

This NLI pipeline can currently be loaded from pipeline() using the following task identifier: "zero-shot-classification".

The models that this pipeline can use are models that have been fine-tuned on an NLI task. See the up-to-date list of available models on huggingface.co/models.

__call__

< >

( sequences: Union *args **kwargs ) A dict or a list of dict

Parameters

  • sequences (str or List[str]) — The sequence(s) to classify, will be truncated if the model input is too large.
  • candidate_labels (str or List[str]) — The set of possible class labels to classify each sequence into. Can be a single label, a string of comma-separated labels, or a list of labels.
  • hypothesis_template (str, optional, defaults to "This example is {}.") — The template used to turn each label into an NLI-style hypothesis. This template must include a {} or similar syntax for the candidate label to be inserted into the template. For example, the default template is "This example is {}." With the candidate label "sports", this would be fed into the model like "<cls> sequence to classify <sep> This example is sports . <sep>". The default template works well in many cases, but it may be worthwhile to experiment with different templates depending on the task setting.
  • multi_label (bool, optional, defaults to False) — Whether or not multiple candidate labels can be true. If False, the scores are normalized such that the sum of the label likelihoods for each sequence is 1. If True, the labels are considered independent and probabilities are normalized for each candidate by doing a softmax of the entailment score vs. the contradiction score.

Returns

A dict or a list of dict

Each result comes as a dictionary with the following keys:

  • sequence (str) — The sequence for which this is the output.
  • labels (List[str]) — The labels sorted by order of likelihood.
  • scores (List[float]) — The probabilities for each of the labels.

Classify the sequence(s) given as inputs. See the ZeroShotClassificationPipeline documentation for more information.
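Building on the example above, here is a minimal sketch of hypothesis_template and multi_label; the labels and template are placeholders.

from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# hypothesis_template wraps each candidate label; multi_label=True scores labels independently.
result = classifier(
    "I have a problem with my iphone that needs to be resolved asap!!",
    candidate_labels=["urgent", "phone", "billing"],
    hypothesis_template="This text is about {}.",
    multi_label=True,
)
print(list(zip(result["labels"], result["scores"])))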

Multimodal

Pipelines available for multimodal tasks include the following.

DocumentQuestionAnsweringPipeline

class transformers.DocumentQuestionAnsweringPipeline

< >

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • image_processor (BaseImageProcessor) — The image processor that will be used by the pipeline to encode data for the model. This object inherits from BaseImageProcessor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a serialized format (i.e., pickle) or as the raw output data e.g. text.

Document Question Answering pipeline using any AutoModelForDocumentQuestionAnswering. The inputs/outputs are similar to the (extractive) question answering pipeline; however, the pipeline takes an image (and optional OCR’d words/boxes) as input instead of text context.

Example:

>>> from transformers import pipeline

>>> document_qa = pipeline(model="impira/layoutlm-document-qa")
>>> document_qa(
...     image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
...     question="What is the invoice number?",
... )
[{'score': 0.425, 'answer': 'us-001', 'start': 16, 'end': 16}]

Learn more about the basics of using a pipeline in the pipeline tutorial

This document question answering pipeline can currently be loaded from pipeline() using the following task identifier: "document-question-answering".

The models that this pipeline can use are models that have been fine-tuned on a document question answering task. See the up-to-date list of available models on huggingface.co/models.

__call__

< >

( image: Union question: Optional = None word_boxes: Tuple = None **kwargs ) A dict or a list of dict

Parameters

  • image (str or PIL.Image) — The pipeline handles three types of images:

    • A string containing a http link pointing to an image
    • A string containing a local path to an image
    • An image loaded in PIL directly

    The pipeline accepts either a single image or a batch of images. If given a single image, it can be broadcasted to multiple questions.

  • question (str) — A question to ask of the document.
  • word_boxes (List[str, Tuple[float, float, float, float]], optional) — A list of words and bounding boxes (normalized 0->1000). If you provide this optional input, then the pipeline will use these words and boxes instead of running OCR on the image to derive them for models that need them (e.g. LayoutLM). This allows you to reuse OCR’d results across many invocations of the pipeline without having to re-run it each time.
  • top_k (int, optional, defaults to 1) — The number of answers to return (will be chosen by order of likelihood). Note that we return less than top_k answers if there are not enough options available within the context.
  • doc_stride (int, optional, defaults to 128) — If the words in the document are too long to fit with the question for the model, it will be split in several chunks with some overlap. This argument controls the size of that overlap.
  • max_answer_len (int, optional, defaults to 15) — The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
  • max_seq_len (int, optional, defaults to 384) — The maximum length of the total sentence (context + question) in tokens of each chunk passed to the model. The context will be split in several chunks (using doc_stride as overlap) if needed.
  • max_question_len (int, optional, defaults to 64) — The maximum length of the question after tokenization. It will be truncated if needed.
  • handle_impossible_answer (bool, optional, defaults to False) — Whether or not we accept impossible as an answer.
  • lang (str, optional) — Language to use while running OCR. Defaults to english.
  • tesseract_config (str, optional) — Additional flags to pass to tesseract while running OCR.
  • timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.

Returns

A dict or a list of dict

Each result comes as a dictionary with the following keys:

  • score (float) — The probability associated to the answer.
  • start (int) — The start word index of the answer (in the OCR’d version of the input or provided word_boxes).
  • end (int) — The end word index of the answer (in the OCR’d version of the input or provided word_boxes).
  • answer (str) — The answer to the question.
  • words (list[int]) — The index of each word/box pair that is in the answer

Answer the question(s) given as inputs by using the document(s). A document is defined as an image and an optional list of (word, box) tuples which represent the text in the document. If the word_boxes are not provided, it will use the Tesseract OCR engine (if available) to extract the words and boxes automatically for LayoutLM-like models which require them as input. For Donut, no OCR is run.

You can invoke the pipeline several ways:

  • pipeline(image=image, question=question)
  • pipeline(image=image, question=question, word_boxes=word_boxes)
  • pipeline([{"image": image, "question": question}])
  • pipeline([{"image": image, "question": question, "word_boxes": word_boxes}])

FeatureExtractionPipeline

class transformers.FeatureExtractionPipeline

< >

( model: Union tokenizer: Optional = None feature_extractor: Optional = None image_processor: Optional = None modelcard: Optional = None framework: Optional = None task: str = '' args_parser: ArgumentHandler = None device: Union = None torch_dtype: Union = None binary_output: bool = False **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • tokenize_kwargs (dict, optional) — Additional dictionary of keyword arguments passed along to the tokenizer.
  • return_tensors (bool, optional) — If True, returns a tensor according to the specified framework, otherwise returns a list.

Feature extraction pipeline uses no model head. This pipeline extracts the hidden states from the base transformer, which can be used as features in downstream tasks.

Example:

>>> from transformers import pipeline

>>> extractor = pipeline(model="google-bert/bert-base-uncased", task="feature-extraction")
>>> result = extractor("This is a simple test.", return_tensors=True)
>>> result.shape  # This is a tensor of shape [1, sequence_length, hidden_dimension] representing the input string.
torch.Size([1, 8, 768])

Learn more about the basics of using a pipeline in the pipeline tutorial

This feature extraction pipeline can currently be loaded from pipeline() using the task identifier: "feature-extraction".

All models may be used for this pipeline. See a list of all models, including community-contributed models on huggingface.co/models.

__call__

< >

( *args **kwargs ) A nested list of float

Parameters

  • args (str or List[str]) — One or several texts (or one list of texts) to get the features of.

Returns

A nested list of float

The features computed by the model.

Extract the features of the input(s).
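As a follow-up to the example above, here is a minimal sketch that mean-pools the token features into a single sentence-level vector; the checkpoint and pooling choice are illustrative.

from transformers import pipeline

extractor = pipeline("feature-extraction", model="google-bert/bert-base-uncased", framework="pt")

# return_tensors=True yields a tensor of shape [1, sequence_length, hidden_dimension];
# mean-pooling over the sequence gives one vector per input.
features = extractor("This is a simple test.", return_tensors=True)
sentence_embedding = features[0].mean(dim=0)
print(sentence_embedding.shape)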

ImageFeatureExtractionPipeline

class transformers.ImageFeatureExtractionPipeline

< >

( model: Union tokenizer: Optional = None feature_extractor: Optional = None image_processor: Optional = None modelcard: Optional = None framework: Optional = None task: str = '' args_parser: ArgumentHandler = None device: Union = None torch_dtype: Union = None binary_output: bool = False **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • image_processor (BaseImageProcessor) — The image processor that will be used by the pipeline to encode data for the model. This object inherits from BaseImageProcessor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a serialized format (i.e., pickle) or as the raw output data e.g. text.
  • image_processor_kwargs (dict, optional) — Additional dictionary of keyword arguments passed along to the image processor e.g. {"size": {"height": 100, "width": 100}}
  • pool (bool, optional, defaults to False) — Whether or not to return the pooled output. If False, the model will return the raw hidden states.

Image feature extraction pipeline uses no model head. This pipeline extracts the hidden states from the base transformer, which can be used as features in downstream tasks.

Example:

>>> from transformers import pipeline

>>> extractor = pipeline(model="google/vit-base-patch16-224", task="image-feature-extraction")
>>> result = extractor("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png", return_tensors=True)
>>> result.shape  # This is a tensor of shape [1, sequence_length, hidden_dimension] representing the input image.
torch.Size([1, 197, 768])

Learn more about the basics of using a pipeline in the pipeline tutorial

This image feature extraction pipeline can currently be loaded from pipeline() using the task identifier: "image-feature-extraction".

All vision models may be used for this pipeline. See a list of all models, including community-contributed models on huggingface.co/models.

__call__

< >

( *args **kwargs ) A nested list of float

Parameters

  • images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:

    • A string containing a http link pointing to an image
    • A string containing a local path to an image
    • An image loaded in PIL directly

    The pipeline accepts either a single image or a batch of images, which must then be passed as a string. Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL images.

  • timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is used and the call may block forever.

Returns

A nested list of float

The features computed by the model.

Extract the features of the input(s).
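Complementing the example above, here is a minimal sketch of the pool option documented for this pipeline; the checkpoint is illustrative and the expected output shape is an assumption for this model.

from transformers import pipeline

extractor = pipeline("image-feature-extraction", model="google/vit-base-patch16-224", framework="pt")

# pool=True returns the model's pooled output instead of the per-patch hidden states.
pooled = extractor(
    "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
    return_tensors=True,
    pool=True,
)
print(pooled.shape)  # expected to be [1, hidden_dimension] for this model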

ImageToTextPipeline

class transformers.ImageToTextPipeline

< >

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • image_processor (BaseImageProcessor) — The image processor that will be used by the pipeline to encode data for the model. This object inherits from BaseImageProcessor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a serialized format (i.e., pickle) or as the raw output data e.g. text.

Image To Text pipeline using a AutoModelForVision2Seq. This pipeline predicts a caption for a given image.

Example:

>>> from transformers import pipeline

>>> captioner = pipeline(model="ydshieh/vit-gpt2-coco-en")
>>> captioner("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'generated_text': 'two birds are standing next to each other '}]

Learn more about the basics of using a pipeline in the pipeline tutorial

This image to text pipeline can currently be loaded from pipeline() using the following task identifier: "image-to-text".

See the list of available models on huggingface.co/models.

__call__

< >

( images: Union **kwargs ) A list or a list of list of dict

Parameters

  • images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:

    • A string containing a HTTP(s) link pointing to an image
    • A string containing a local path to an image
    • An image loaded in PIL directly

    The pipeline accepts either a single image or a batch of images.

  • max_new_tokens (int, optional) — The maximum number of tokens to generate. By default, the value set in generate is used.
  • generate_kwargs (Dict, optional) — Additional keyword arguments passed directly to generate, allowing full control over the generation step.
  • timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.

Returns

A list or a list of list of dict

Each result comes as a dictionary with the following key:

  • generated_text (str) — The generated text.

Generate a text caption for the image(s) passed as inputs.
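Building on the example above, here is a minimal sketch of controlling generation length and decoding; the settings are placeholders.

from transformers import pipeline

captioner = pipeline("image-to-text", model="ydshieh/vit-gpt2-coco-en")

# max_new_tokens limits the caption length; generate_kwargs is forwarded to `generate`.
outputs = captioner(
    "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
    max_new_tokens=20,
    generate_kwargs={"do_sample": False},
)
print(outputs[0]["generated_text"])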

MaskGenerationPipeline

class transformers.MaskGenerationPipeline

< >

( **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • image_processor (BaseImageProcessor) — The image processor that will be used by the pipeline to encode data for the model. This object inherits from BaseImageProcessor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a serialized format (i.e., pickle) or as the raw output data e.g. text.
  • points_per_batch (optional, int, default to 64) — Sets the number of points run simultaneously by the model. Higher numbers may be faster but use more GPU memory.
  • output_bboxes_mask (bool, optional, default to False) — Whether or not to output the bounding box predictions.
  • output_rle_masks (bool, optional, default to False) — Whether or not to output the masks in RLE format

Automatic mask generation for images using SamForMaskGeneration. This pipeline predicts binary masks for a given image. It is a ChunkPipeline because you can separate the points into mini-batches in order to avoid OOM issues. Use the points_per_batch argument to control the number of points that will be processed at the same time. Default is 64.

The pipeline works in 3 steps:

  1. preprocess: A grid of 1024 points evenly separated is generated along with bounding boxes and point labels. For more details on how the points and bounding boxes are created, check the _generate_crop_boxes function. The image is also preprocessed using the image_processor. This function yields a minibatch of points_per_batch.

  2. forward: feeds the outputs of preprocess to the model. The image embedding is computed only once. Calls self.model.get_image_embeddings and makes sure that gradients are not computed, and that the tensors and the model are on the same device.

  3. postprocess: The most important part of the automatic mask generation happens here. Three steps are induced:

    • image_processor.postprocess_masks (run on each minibatch loop): takes in the raw output masks, resizes them according to the image size, and transforms them into binary masks.
    • image_processor.filter_masks (on each minibatch loop): uses both pred_iou_thresh and stability_scores. Also applies a variety of filters based on non-maximum suppression to remove bad masks.
    • image_processor.postprocess_masks_for_amg applies NMS (non-maximum suppression) on the masks to keep only the relevant ones.

Example:

>>> from transformers import pipeline

>>> generator = pipeline(model="facebook/sam-vit-base", task="mask-generation")
>>> outputs = generator(
...     "http://images.cocodataset.org/val2017/000000039769.jpg",
... )

>>> outputs = generator(
...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png", points_per_batch=128
... )

Learn more about the basics of using a pipeline in the pipeline tutorial

This segmentation pipeline can currently be loaded from pipeline() using the following task identifier: "mask-generation".

See the list of available models on huggingface.co/models.

__call__

< >

( image *args num_workers = None batch_size = None **kwargs ) Dict

Parameters

  • inputs (np.ndarray or bytes or str or dict) — Image or list of images.
  • mask_threshold (float, optional, defaults to 0.0) — Threshold to use when turning the predicted masks into binary values.
  • pred_iou_thresh (float, optional, defaults to 0.88) — A filtering threshold in [0,1] applied on the model’s predicted mask quality.
  • stability_score_thresh (float, optional, defaults to 0.95) — A filtering threshold in [0,1], using the stability of the mask under changes to the cutoff used to binarize the model’s mask predictions.
  • stability_score_offset (int, optional, defaults to 1) — The amount to shift the cutoff when calculating the stability score.
  • crops_nms_thresh (float, optional, defaults to 0.7) — The box IoU cutoff used by non-maximal suppression to filter duplicate masks.
  • crops_n_layers (int, optional, defaults to 0) — If crops_n_layers>0, mask prediction will be run again on crops of the image. Sets the number of layers to run, where each layer has 2**i_layer number of image crops.
  • crop_overlap_ratio (float, optional, defaults to 512 / 1500) — Sets the degree to which crops overlap. In the first crop layer, crops will overlap by this fraction of the image length. Later layers with more crops scale down this overlap.
  • crop_n_points_downscale_factor (int, optional, defaults to 1) — The number of points-per-side sampled in layer n is scaled down by crop_n_points_downscale_factor**n.
  • timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.

Returns

Dict

A dictionary with the following keys:

  • mask (PIL.Image) — A binary mask of the detected object as a PIL Image of shape (width, height) of the original image. Returns a mask filled with zeros if no object is found.
  • score (optional float) — Optionally, when the model is capable of estimating a confidence of the “object” described by the label and the mask.

Generates binary segmentation masks.
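As a complement to the example above, here is a minimal sketch of tuning the filtering thresholds documented for this call; inspecting the returned dictionary's keys before further processing is a safe first step, since the exact structure depends on the options used.

from transformers import pipeline

generator = pipeline("mask-generation", model="facebook/sam-vit-base", points_per_batch=64)

# Tighter thresholds keep only the highest-quality masks.
outputs = generator(
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    pred_iou_thresh=0.9,
    stability_score_thresh=0.95,
)
print(list(outputs.keys()))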

VisualQuestionAnsweringPipeline

class transformers.VisualQuestionAnsweringPipeline

< >

( *args **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • image_processor (BaseImageProcessor) — The image processor that will be used by the pipeline to encode data for the model. This object inherits from BaseImageProcessor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a serialized format (i.e., pickle) or as the raw output data e.g. text.

Visual Question Answering pipeline using a AutoModelForVisualQuestionAnswering. This pipeline is currently only available in PyTorch.

Example:

>>> from transformers import pipeline

>>> oracle = pipeline(model="dandelin/vilt-b32-finetuned-vqa")
>>> image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/lena.png"
>>> oracle(question="What is she wearing ?", image=image_url)
[{'score': 0.948, 'answer': 'hat'}, {'score': 0.009, 'answer': 'fedora'}, {'score': 0.003, 'answer': 'clothes'}, {'score': 0.003, 'answer': 'sun hat'}, {'score': 0.002, 'answer': 'nothing'}]

>>> oracle(question="What is she wearing ?", image=image_url, top_k=1)
[{'score': 0.948, 'answer': 'hat'}]

>>> oracle(question="Is this a person ?", image=image_url, top_k=1)
[{'score': 0.993, 'answer': 'yes'}]

>>> oracle(question="Is this a man ?", image=image_url, top_k=1)
[{'score': 0.996, 'answer': 'no'}]

Learn more about the basics of using a pipeline in the pipeline tutorial

This visual question answering pipeline can currently be loaded from pipeline() using the following task identifiers: "visual-question-answering", "vqa".

The models that this pipeline can use are models that have been fine-tuned on a visual question answering task. See the up-to-date list of available models on huggingface.co/models.

__call__

< >

( image: Union question: str = None **kwargs ) A dictionary or a list of dictionaries containing the result. The dictionaries contain the following keys

Parameters

  • image (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:

    • A string containing a http link pointing to an image
    • A string containing a local path to an image
    • An image loaded in PIL directly

    The pipeline accepts either a single image or a batch of images. If given a single image, it can be broadcasted to multiple questions.

  • question (str, List[str]) — The question(s) asked. If given a single question, it can be broadcasted to multiple images.
  • top_k (int, optional, defaults to 5) — The number of top labels that will be returned by the pipeline. If the provided number is higher than the number of labels available in the model configuration, it will default to the number of labels.
  • timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.

Returns

A dictionary or a list of dictionaries containing the result. The dictionaries contain the following keys

  • label (str) — The label identified by the model.
  • score (float) — The score attributed by the model to that label.

Answers open-ended questions about images. The pipeline accepts several types of inputs which are detailed below:

  • pipeline(image=image, question=question)
  • pipeline({"image": image, "question": question})
  • pipeline([{"image": image, "question": question}])
  • pipeline([{"image": image, "question": question}, {"image": image, "question": question}])

Parent class: Pipeline

class transformers.Pipeline

< >

( model: Union tokenizer: Optional = None feature_extractor: Optional = None image_processor: Optional = None modelcard: Optional = None framework: Optional = None task: str = '' args_parser: ArgumentHandler = None device: Union = None torch_dtype: Union = None binary_output: bool = False **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • feature_extractor (SequenceFeatureExtractor) — The feature extractor that will be used by the pipeline to encode data for the model. This object inherits from SequenceFeatureExtractor.
  • image_processor (BaseImageProcessor) — The image processor that will be used by the pipeline to encode data for the model. This object inherits from BaseImageProcessor.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task-identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
  • batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
  • args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too
  • torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto")
  • binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a serialized format (i.e., pickle) or as the raw output data e.g. text.

The Pipeline class is the class from which all pipelines inherit. Refer to this class for methods shared across different pipelines.

Base class implementing pipelined operations. Pipeline workflow is defined as a sequence of the following operations:

Input -> Tokenization -> Model Inference -> Post-Processing (task dependent) -> Output

Pipeline supports running on CPU or GPU through the device argument (see below).

Some pipelines, such as FeatureExtractionPipeline ('feature-extraction'), output large tensor objects as nested lists. In order to avoid dumping such large structures as textual data, we provide the binary_output constructor argument. If set to True, the output will be stored in the pickle format.
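To illustrate the Input -> Tokenization -> Model Inference -> Post-Processing workflow described above, here is a hedged sketch of a custom subclass wiring preprocess, _forward and postprocess together; the class, its task, and the assumption of a PyTorch sequence-classification model with a tokenizer are illustrative, not an existing pipeline in the library.

from transformers import Pipeline

class PairClassificationPipeline(Pipeline):
    # Hypothetical pipeline assuming a PyTorch sequence-classification model.

    def _sanitize_parameters(self, **kwargs):
        # Split call-time kwargs into preprocess, forward and postprocess parameters.
        return {}, {}, {}

    def preprocess(self, inputs):
        # Turn the raw input into model-ready tensors.
        return self.tokenizer(inputs, return_tensors=self.framework)

    def _forward(self, model_inputs):
        # Run the model; device placement is handled by the base class.
        return self.model(**model_inputs)

    def postprocess(self, model_outputs):
        # Reformat raw logits into a small, serializable result.
        best_class = model_outputs.logits.softmax(-1)[0].argmax().item()
        return {"label": self.model.config.id2label[best_class]}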

check_model_type

< >

( supported_models: Union )

Parameters

  • supported_models (List[str] or dict) — The list of models supported by the pipeline, or a dictionary with model class values.

Check if the model class is supported by the pipeline.

device_placement

< >

( )

Context manager allowing tensor allocation on the user-specified device in a framework-agnostic way.

Examples:

# Explicitly ask for tensor allocation on CUDA device :0
pipe = pipeline(..., device=0)
with pipe.device_placement():
    # Every framework specific tensor allocation will be done on the request device
    output = pipe(...)

ensure_tensor_on_device

< >

( **inputs ) Dict[str, torch.Tensor]

Parameters

  • inputs (keyword arguments that should be torch.Tensor, the rest is ignored) — The tensors to place on self.device. Recursive on lists only.

Returns

Dict[str, torch.Tensor]

The same as inputs but on the proper device.

Ensure PyTorch tensors are on the specified device.

postprocess

< >

( model_outputs: ModelOutput **postprocess_parameters: Dict )

Postprocess will receive the raw outputs of the _forward method, generally tensors, and reformat them into something more friendly. Generally it will output a list or a dict of results (containing just strings and numbers).

predict

< >

( X )

Scikit / Keras interface to transformers’ pipelines. This method will forward to __call__().

preprocess

< >

( input_: Any **preprocess_parameters: Dict )

Preprocess will take the input_ of a specific pipeline and return a dictionary of everything necessary for _forward to run properly. It should contain at least one tensor, but might have arbitrary other items.

save_pretrained

< >

( save_directory: str safe_serialization: bool = True )

Parameters

  • save_directory (str) — A path to the directory where the pipeline will be saved. It will be created if it doesn’t exist.
  • safe_serialization (bool) — Whether to save the model using safetensors or the traditional way for PyTorch or TensorFlow.

Save the pipeline’s model and tokenizer.
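For illustration, here is a minimal sketch of saving a pipeline and reloading it from the resulting local directory; the directory name is a placeholder.

from transformers import pipeline

pipe = pipeline("text-classification")
pipe.save_pretrained("my-text-classifier", safe_serialization=True)

# The saved directory can later be passed back as the model argument.
reloaded = pipeline("text-classification", model="my-text-classifier")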

transform

< >

( X )

Scikit / Keras interface to transformers’ pipelines. This method will forward to __call__().