Pipelines¶
The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the task summary for examples of use.
There are two categories of pipeline abstractions to be aware of:

- The pipeline() factory, which is the most powerful object encapsulating all the other pipelines.
- The other task-specific pipelines, documented below.
The pipeline abstraction¶
The pipeline abstraction is a wrapper around all the other available pipelines. It is instantiated as any other pipeline but requires an additional argument which is the task.
transformers.pipeline(task: str, model: Optional = None, config: Optional[Union[str, transformers.configuration_utils.PretrainedConfig]] = None, tokenizer: Optional[Union[str, transformers.tokenization_utils.PreTrainedTokenizer]] = None, feature_extractor: Optional[Union[str, SequenceFeatureExtractor]] = None, framework: Optional[str] = None, revision: Optional[str] = None, use_fast: bool = True, use_auth_token: Optional[Union[bool, str]] = None, model_kwargs: Dict[str, Any] = {}, **kwargs) → transformers.pipelines.base.Pipeline[source]¶

Utility factory method to build a Pipeline. Pipelines are made of:

- A tokenizer in charge of mapping raw textual input to tokens.
- A model to make predictions from the inputs.
- Some (optional) post-processing to enhance the model's output.
- Parameters
  task (str) – The task defining which pipeline will be returned. Currently accepted tasks are:
  - "feature-extraction": will return a FeatureExtractionPipeline.
  - "text-classification": will return a TextClassificationPipeline.
  - "sentiment-analysis": (alias of "text-classification") will return a TextClassificationPipeline.
  - "token-classification": will return a TokenClassificationPipeline.
  - "ner": (alias of "token-classification") will return a TokenClassificationPipeline.
  - "question-answering": will return a QuestionAnsweringPipeline.
  - "fill-mask": will return a FillMaskPipeline.
  - "summarization": will return a SummarizationPipeline.
  - "translation_xx_to_yy": will return a TranslationPipeline.
  - "text2text-generation": will return a Text2TextGenerationPipeline.
  - "text-generation": will return a TextGenerationPipeline.
  - "zero-shot-classification": will return a ZeroShotClassificationPipeline.
  - "conversational": will return a ConversationalPipeline.
  model (str or PreTrainedModel or TFPreTrainedModel, optional) – The model that will be used by the pipeline to make predictions. This can be a model identifier or an actual instance of a pretrained model inheriting from PreTrainedModel (for PyTorch) or TFPreTrainedModel (for TensorFlow). If not provided, the default model for the task will be loaded.

  config (str or PretrainedConfig, optional) – The configuration that will be used by the pipeline to instantiate the model. This can be a model identifier or an actual pretrained model configuration inheriting from PretrainedConfig. If not provided, the default configuration file for the requested model will be used. That means that if model is given, its default configuration will be used. However, if model is not supplied, this task's default model's config is used instead.

  tokenizer (str or PreTrainedTokenizer, optional) – The tokenizer that will be used by the pipeline to encode data for the model. This can be a model identifier or an actual pretrained tokenizer inheriting from PreTrainedTokenizer. If not provided, the default tokenizer for the given model will be loaded (if it is a string). If model is not specified or not a string, then the default tokenizer for config is loaded (if it is a string). However, if config is also not given or not a string, then the default tokenizer for the given task will be loaded.

  feature_extractor (str or PreTrainedFeatureExtractor, optional) – The feature extractor that will be used by the pipeline to encode data for the model. This can be a model identifier or an actual pretrained feature extractor inheriting from PreTrainedFeatureExtractor. Feature extractors are used for non-NLP models, such as speech or vision models, as well as multi-modal models. Multi-modal models also require a tokenizer to be passed. If not provided, the default feature extractor for the given model will be loaded (if it is a string). If model is not specified or not a string, then the default feature extractor for config is loaded (if it is a string). However, if config is also not given or not a string, then the default feature extractor for the given task will be loaded.

  framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  revision (str, optional, defaults to "main") – When passing a task name or a string model identifier: the specific model version to use. It can be a branch name, a tag name, or a commit id. Since we use a git-based system for storing models and other artifacts on huggingface.co, revision can be any identifier allowed by git.

  use_fast (bool, optional, defaults to True) – Whether or not to use a fast tokenizer if possible (a PreTrainedTokenizerFast).

  use_auth_token (str or bool, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface).

  model_kwargs – Additional dictionary of keyword arguments passed along to the model's from_pretrained(..., **model_kwargs) function.

  kwargs – Additional keyword arguments passed along to the specific pipeline init (see the documentation for the corresponding pipeline class for possible values).
- Returns
  A suitable pipeline for the task.
- Return type
  Pipeline
Examples:
>>> from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer

>>> # Sentiment analysis pipeline
>>> pipeline('sentiment-analysis')

>>> # Question answering pipeline, specifying the checkpoint identifier
>>> pipeline('question-answering', model='distilbert-base-cased-distilled-squad', tokenizer='bert-base-cased')

>>> # Named entity recognition pipeline, passing in a specific model and tokenizer
>>> model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> pipeline('ner', model=model, tokenizer=tokenizer)
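The object returned by pipeline() is directly callable on inputs. A minimal end-to-end sketch (the default checkpoint for the task is chosen by the library and may change between versions; the printed output is illustrative):

from transformers import pipeline

classifier = pipeline('sentiment-analysis')  # downloads the default model on first use
result = classifier('We are very happy to show you this library.')
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.9998}]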
The task specific pipelines¶
AutomaticSpeechRecognitionPipeline¶
class transformers.AutomaticSpeechRecognitionPipeline(feature_extractor: SequenceFeatureExtractor, *args, **kwargs)[source]¶

Pipeline that aims at extracting spoken text contained within some audio.

The input can be either a raw waveform or an audio file. In the case of an audio file, ffmpeg should be installed to support multiple audio formats.
__call__(inputs: Union[numpy.ndarray, bytes, str], **kwargs)[source]¶

Classify the sequence(s) given as inputs. See the AutomaticSpeechRecognitionPipeline documentation for more information.

- Parameters
  inputs (np.ndarray or bytes or str) – The input is either a raw waveform (np.ndarray of shape (n,), of type np.float32 or np.float64) at the correct sampling rate (no further check will be done), or a str that is the filename of an audio file; the file will be read at the correct sampling rate to get the waveform using ffmpeg. This requires ffmpeg to be installed on the system. If inputs is bytes, it is supposed to be the content of an audio file and is interpreted by ffmpeg in the same way.
- Returns
  A dict with the following key:
  - text (str) – The recognized text.
- Return type
  dict
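For example, a minimal sketch of transcribing a local file, assuming ffmpeg is installed and using the "automatic-speech-recognition" task identifier with the library's default checkpoint ("sample.flac" is a hypothetical path):

from transformers import pipeline

asr = pipeline('automatic-speech-recognition')
output = asr('sample.flac')  # hypothetical local audio file, decoded via ffmpeg
print(output['text'])        # the recognized text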
ConversationalPipeline¶
class transformers.Conversation(text: str = None, conversation_id: uuid.UUID = None, past_user_inputs=None, generated_responses=None)[source]¶

Utility class containing a conversation and its history. This class is meant to be used as an input to the ConversationalPipeline. The conversation contains a number of utility functions to manage the addition of new user inputs and generated model responses. A conversation needs to contain an unprocessed user input before being passed to the ConversationalPipeline. This user input is either created when the class is instantiated, or by calling conversation.add_user_input("input") after a conversation turn.

- Parameters
  text (str, optional) – The initial user input to start the conversation. If not provided, a user input needs to be provided manually using the add_user_input() method before the conversation can begin.

  conversation_id (uuid.UUID, optional) – Unique identifier for the conversation. If not provided, a random UUID4 id will be assigned to the conversation.

  past_user_inputs (List[str], optional) – Eventual past history of the conversation from the user. You don't need to pass it manually if you use the pipeline interactively, but if you want to recreate history you need to set both past_user_inputs and generated_responses with lists of strings of equal length.

  generated_responses (List[str], optional) – Eventual past history of the conversation from the model. You don't need to pass it manually if you use the pipeline interactively, but if you want to recreate history you need to set both past_user_inputs and generated_responses with lists of strings of equal length.
Usage:
conversation = Conversation("Going to the movies tonight - any suggestions?")

# Steps usually performed by the model when generating a response:
# 1. Mark the user input as processed (moved to the history)
conversation.mark_processed()
# 2. Append a model response
conversation.append_response("The Big Lebowski.")

conversation.add_user_input("Is it good?")
class transformers.ConversationalPipeline(min_length_for_response=32, *args, **kwargs)[source]¶

Multi-turn conversational pipeline.

This conversational pipeline can currently be loaded from pipeline() using the following task identifier: "conversational".

The models that this pipeline can use are models that have been fine-tuned on a multi-turn conversational task, currently: 'microsoft/DialoGPT-small', 'microsoft/DialoGPT-medium', 'microsoft/DialoGPT-large'. See the up-to-date list of available models on huggingface.co/models.
Usage:
conversational_pipeline = pipeline("conversational")

conversation_1 = Conversation("Going to the movies tonight - any suggestions?")
conversation_2 = Conversation("What's the last book you have read?")

conversational_pipeline([conversation_1, conversation_2])

conversation_1.add_user_input("Is it an action movie?")
conversation_2.add_user_input("What is the genre of this book?")

conversational_pipeline([conversation_1, conversation_2])
- Parameters
  model (PreTrainedModel or TFPreTrainedModel) – The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.

  tokenizer (PreTrainedTokenizer) – The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.

  modelcard (str or ModelCard, optional) – Model card attributed to the model for this pipeline.

  framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  task (str, defaults to "") – A task identifier for the pipeline.

  args_parser (ArgumentHandler, optional) – Reference to the object in charge of parsing supplied pipeline parameters.

  device (int, optional, defaults to -1) – Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU; a positive integer will run the model on the associated CUDA device id.

  binary_output (bool, optional, defaults to False) – Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.

  min_length_for_response (int, optional, defaults to 32) – The minimum length (in number of tokens) for a response.
__call__(conversations: Union[transformers.pipelines.conversational.Conversation, List[transformers.pipelines.conversational.Conversation]], clean_up_tokenization_spaces=True, **generate_kwargs)[source]¶

Generate responses for the conversation(s) given as inputs.

- Parameters
  conversations (a Conversation or a list of Conversation) – Conversations to generate responses for.

  clean_up_tokenization_spaces (bool, optional, defaults to True) – Whether or not to clean up the potential extra spaces in the text output.

  generate_kwargs – Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework).
- Returns
  Conversation(s) with updated generated responses for those containing a new user input.
- Return type
  Conversation or a list of Conversation
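Since conversations are updated in place, the model's reply can be read back from the generated_responses attribute documented above; a brief sketch:

from transformers import Conversation, pipeline

conversational_pipeline = pipeline('conversational')
conversation = Conversation('Going to the movies tonight - any suggestions?')
conversation = conversational_pipeline(conversation)
print(conversation.generated_responses[-1])  # the model's latest reply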
FeatureExtractionPipeline¶
class transformers.FeatureExtractionPipeline(model: Union[PreTrainedModel, TFPreTrainedModel], tokenizer: transformers.tokenization_utils.PreTrainedTokenizer, modelcard: Optional[transformers.modelcard.ModelCard] = None, framework: Optional[str] = None, args_parser: transformers.pipelines.base.ArgumentHandler = None, device: int = -1, task: str = '')[source]¶

Feature extraction pipeline using no model head. This pipeline extracts the hidden states from the base transformer, which can be used as features in downstream tasks.

This feature extraction pipeline can currently be loaded from pipeline() using the task identifier: "feature-extraction".

All models may be used for this pipeline. See a list of all models, including community-contributed models, on huggingface.co/models.
- Parameters
  model (PreTrainedModel or TFPreTrainedModel) – The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.

  tokenizer (PreTrainedTokenizer) – The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.

  modelcard (str or ModelCard, optional) – Model card attributed to the model for this pipeline.

  framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  task (str, defaults to "") – A task identifier for the pipeline.

  args_parser (ArgumentHandler, optional) – Reference to the object in charge of parsing supplied pipeline parameters.

  device (int, optional, defaults to -1) – Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU; a positive integer will run the model on the associated CUDA device id.
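Because there is no model head, the pipeline returns the raw hidden states as nested Python lists; a sketch, assuming the library's default checkpoint for the task (the exact shape depends on the tokenizer and model):

import numpy as np

from transformers import pipeline

extractor = pipeline('feature-extraction')
features = extractor('Hello world')      # nested lists of floats
print(np.array(features).shape)          # (batch, tokens, hidden_size), e.g. (1, 4, 768)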
FillMaskPipeline¶
class transformers.FillMaskPipeline(model: Union[PreTrainedModel, TFPreTrainedModel], tokenizer: transformers.tokenization_utils.PreTrainedTokenizer, modelcard: Optional[transformers.modelcard.ModelCard] = None, framework: Optional[str] = None, args_parser: transformers.pipelines.base.ArgumentHandler = None, device: int = -1, top_k=5, task: str = '')[source]¶

Masked language modeling prediction pipeline using any ModelWithLMHead. See the masked language modeling examples for more information.

This mask filling pipeline can currently be loaded from pipeline() using the following task identifier: "fill-mask".

The models that this pipeline can use are models that have been trained with a masked language modeling objective, which includes the bi-directional models in the library. See the up-to-date list of available models on huggingface.co/models.

Note

This pipeline only works for inputs with exactly one token masked.
- Parameters
  model (PreTrainedModel or TFPreTrainedModel) – The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.

  tokenizer (PreTrainedTokenizer) – The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.

  modelcard (str or ModelCard, optional) – Model card attributed to the model for this pipeline.

  framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  task (str, defaults to "") – A task identifier for the pipeline.

  args_parser (ArgumentHandler, optional) – Reference to the object in charge of parsing supplied pipeline parameters.

  device (int, optional, defaults to -1) – Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU; a positive integer will run the model on the associated CUDA device id.

  binary_output (bool, optional, defaults to False) – Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.

  top_k (int, defaults to 5) – The number of predictions to return.
__call__(*args, targets=None, top_k: Optional[int] = None, **kwargs)[source]¶

Fill the masked token in the text(s) given as inputs.

- Parameters
  args (str or List[str]) – One or several texts (or one list of prompts) with masked tokens.

  targets (str or List[str], optional) – When passed, the model will return the scores for the passed token or tokens rather than the top k predictions in the entire vocabulary. If the provided targets are not in the model vocab, they will be tokenized and the first resulting token will be used (with a warning).

  top_k (int, optional) – When passed, overrides the number of predictions to return.
- Returns
  Each result comes as a list of dictionaries with the following keys:
  - sequence (str) – The corresponding input with the mask token prediction.
  - score (float) – The corresponding probability.
  - token (int) – The predicted token id (to replace the masked one).
  - token_str (str) – The predicted token (to replace the masked one).
- Return type
  A list or a list of lists of dict
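The mask token differs between checkpoints, so it is safest to read it from the pipeline's tokenizer; a sketch using the library's default fill-mask checkpoint (output keys as listed above):

from transformers import pipeline

unmasker = pipeline('fill-mask')
mask = unmasker.tokenizer.mask_token  # e.g. '[MASK]' or '<mask>', model-dependent
for prediction in unmasker(f'Paris is the {mask} of France.'):
    print(prediction['token_str'], prediction['score'])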
ImageClassificationPipeline¶
class transformers.ImageClassificationPipeline(model: Union[PreTrainedModel, TFPreTrainedModel], feature_extractor: SequenceFeatureExtractor, framework: Optional[str] = None, **kwargs)[source]¶

Image classification pipeline using any AutoModelForImageClassification. This pipeline predicts the class of an image.

This image classification pipeline can currently be loaded from pipeline() using the following task identifier: "image-classification".

See the list of available models on huggingface.co/models.
- Parameters
  model (PreTrainedModel or TFPreTrainedModel) – The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.

  tokenizer (PreTrainedTokenizer) – The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.

  modelcard (str or ModelCard, optional) – Model card attributed to the model for this pipeline.

  framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  task (str, defaults to "") – A task identifier for the pipeline.

  args_parser (ArgumentHandler, optional) – Reference to the object in charge of parsing supplied pipeline parameters.

  device (int, optional, defaults to -1) – Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU; a positive integer will run the model on the associated CUDA device id.

  binary_output (bool, optional, defaults to False) – Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
__call__(images: Union[str, List[str], Image, List[Image]], top_k=5)[source]¶

Assign labels to the image(s) passed as inputs.

- Parameters
  images (str, List[str], PIL.Image or List[PIL.Image]) – The pipeline handles three types of images:
  - A string containing an HTTP link pointing to an image
  - A string containing a local path to an image
  - An image loaded in PIL directly

  The pipeline accepts either a single image or a batch of images. Images in a batch must all be in the same format: all as HTTP links, all as local paths, or all as PIL images.

  top_k (int, optional, defaults to 5) – The number of top labels that will be returned by the pipeline. If the provided number is higher than the number of labels available in the model configuration, it will default to the number of labels.
- Returns
  A dictionary or a list of dictionaries containing the result. If the input is a single image, will return a dictionary; if the input is a list of several images, will return a list of dictionaries corresponding to the images.

  The dictionaries contain the following keys:
  - label (str) – The label identified by the model.
  - score (float) – The score attributed by the model to that label.
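A sketch of classifying a single image, here through a hypothetical URL (a local path or a PIL.Image would work the same way; PIL must be installed and the default image-classification checkpoint is chosen by the library):

from transformers import pipeline

classifier = pipeline('image-classification')
predictions = classifier('https://example.com/cat.jpg', top_k=3)  # hypothetical URL
for prediction in predictions:
    print(prediction['label'], prediction['score'])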
NerPipeline¶
transformers.NerPipeline¶

Alias of transformers.pipelines.token_classification.TokenClassificationPipeline. See TokenClassificationPipeline for all details.
QuestionAnsweringPipeline¶
class transformers.QuestionAnsweringPipeline(model: Union[PreTrainedModel, TFPreTrainedModel], tokenizer: transformers.tokenization_utils.PreTrainedTokenizer, modelcard: Optional[transformers.modelcard.ModelCard] = None, framework: Optional[str] = None, device: int = -1, task: str = '', **kwargs)[source]¶

Question answering pipeline using any ModelForQuestionAnswering. See the question answering examples for more information.

This question answering pipeline can currently be loaded from pipeline() using the following task identifier: "question-answering".

The models that this pipeline can use are models that have been fine-tuned on a question answering task. See the up-to-date list of available models on huggingface.co/models.
- Parameters
  model (PreTrainedModel or TFPreTrainedModel) – The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.

  tokenizer (PreTrainedTokenizer) – The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.

  modelcard (str or ModelCard, optional) – Model card attributed to the model for this pipeline.

  framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  task (str, defaults to "") – A task identifier for the pipeline.

  args_parser (ArgumentHandler, optional) – Reference to the object in charge of parsing supplied pipeline parameters.

  device (int, optional, defaults to -1) – Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU; a positive integer will run the model on the associated CUDA device id.

  binary_output (bool, optional, defaults to False) – Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
__call__(*args, **kwargs)[source]¶

Answer the question(s) given as inputs by using the context(s).

- Parameters
  args (SquadExample or a list of SquadExample) – One or several SquadExample containing the question and context.

  X (SquadExample or a list of SquadExample, optional) – One or several SquadExample containing the question and context (will be treated the same way as if passed as the first positional argument).

  data (SquadExample or a list of SquadExample, optional) – One or several SquadExample containing the question and context (will be treated the same way as if passed as the first positional argument).

  question (str or List[str]) – One or several question(s) (must be used in conjunction with the context argument).

  context (str or List[str]) – One or several context(s) associated with the question(s) (must be used in conjunction with the question argument).

  topk (int, optional, defaults to 1) – The number of answers to return (will be chosen by order of likelihood). Note that we return fewer than topk answers if there are not enough options available within the context.

  doc_stride (int, optional, defaults to 128) – If the context is too long to fit with the question for the model, it will be split into several chunks with some overlap. This argument controls the size of that overlap.

  max_answer_len (int, optional, defaults to 15) – The maximum length of predicted answers (e.g., only answers with a shorter length are considered).

  max_seq_len (int, optional, defaults to 384) – The maximum length of the total sentence (context + question) after tokenization. The context will be split into several chunks (using doc_stride) if needed.

  max_question_len (int, optional, defaults to 64) – The maximum length of the question after tokenization. It will be truncated if needed.

  handle_impossible_answer (bool, optional, defaults to False) – Whether or not we accept impossible as an answer.
- Returns
  Each result comes as a dictionary with the following keys:
  - score (float) – The probability associated to the answer.
  - start (int) – The character start index of the answer (in the tokenized version of the input).
  - end (int) – The character end index of the answer (in the tokenized version of the input).
  - answer (str) – The answer to the question.
- Return type
  A dict or a list of dict
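A sketch of the keyword-argument calling convention described above, using the library's default question-answering checkpoint:

from transformers import pipeline

qa = pipeline('question-answering')
result = qa(
    question='Where do I live?',
    context='My name is Wolfgang and I live in Berlin.',
)
print(result['answer'], result['score'], result['start'], result['end'])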
static create_sample(question: Union[str, List[str]], context: Union[str, List[str]]) → Union[transformers.data.processors.squad.SquadExample, List[transformers.data.processors.squad.SquadExample]][source]¶

QuestionAnsweringPipeline leverages the SquadExample internally. This helper method encapsulates all the logic for converting question(s) and context(s) to SquadExample.

We currently support extractive question answering.

- Parameters
  question (str or List[str]) – The question(s) asked.
  context (str or List[str]) – The context(s) in which we will look for the answer.
- Returns
  The corresponding SquadExample grouping question and context.
- Return type
  One or a list of SquadExample
decode(start: numpy.ndarray, end: numpy.ndarray, topk: int, max_answer_len: int, undesired_tokens: numpy.ndarray) → Tuple[source]¶

Takes the output of any ModelForQuestionAnswering and generates probabilities for each span to be the actual answer.

In addition, it filters out some unwanted/impossible cases, like an answer length greater than max_answer_len or an answer end position before the start position. The method supports outputting the k best answers through the topk argument.

- Parameters
  start (np.ndarray) – Individual start probabilities for each token.
  end (np.ndarray) – Individual end probabilities for each token.
  topk (int) – Indicates how many possible answer span(s) to extract from the model output.
  max_answer_len (int) – Maximum size of the answer to extract from the model's output.
  undesired_tokens (np.ndarray) – Mask determining tokens that can be part of the answer.
span_to_answer(text: str, start: int, end: int) → Dict[str, Union[str, int]][source]¶

When decoding from token probabilities, this method maps token indexes to actual words in the initial context.

- Parameters
  text (str) – The actual context to extract the answer from.
  start (int) – The answer starting token index.
  end (int) – The answer end token index.
- Returns
  Dictionary like {'answer': str, 'start': int, 'end': int}
SummarizationPipeline¶
class transformers.SummarizationPipeline(*args, **kwargs)[source]¶

Summarize news articles and other documents.

This summarizing pipeline can currently be loaded from pipeline() using the following task identifier: "summarization".

The models that this pipeline can use are models that have been fine-tuned on a summarization task, currently 'bart-large-cnn', 't5-small', 't5-base', 't5-large', 't5-3b', 't5-11b'. See the up-to-date list of available models on huggingface.co/models.
Usage:
# use bart in pytorch
summarizer = pipeline("summarization")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)

# use t5 in tf
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
- Parameters
  model (PreTrainedModel or TFPreTrainedModel) – The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.

  tokenizer (PreTrainedTokenizer) – The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.

  modelcard (str or ModelCard, optional) – Model card attributed to the model for this pipeline.

  framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  task (str, defaults to "") – A task identifier for the pipeline.

  args_parser (ArgumentHandler, optional) – Reference to the object in charge of parsing supplied pipeline parameters.

  device (int, optional, defaults to -1) – Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU; a positive integer will run the model on the associated CUDA device id.

  binary_output (bool, optional, defaults to False) – Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
__call__(*args, **kwargs)[source]¶

Summarize the text(s) given as inputs.

- Parameters
  documents (str or List[str]) – One or several articles (or one list of articles) to summarize.

  return_text (bool, optional, defaults to True) – Whether or not to include the decoded texts in the outputs.

  return_tensors (bool, optional, defaults to False) – Whether or not to include the tensors of predictions (as token indices) in the outputs.

  clean_up_tokenization_spaces (bool, optional, defaults to False) – Whether or not to clean up the potential extra spaces in the text output.

  generate_kwargs – Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework).
- Returns
  Each result comes as a dictionary with the following keys:
  - summary_text (str, present when return_text=True) – The summary of the corresponding input.
  - summary_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) – The token ids of the summary.
- Return type
  A list or a list of lists of dict
TableQuestionAnsweringPipeline¶
class transformers.TableQuestionAnsweringPipeline(args_parser=<transformers.pipelines.table_question_answering.TableQuestionAnsweringArgumentHandler object>, *args, **kwargs)[source]¶

Table question answering pipeline using a ModelForTableQuestionAnswering. This pipeline is only available in PyTorch.

This tabular question answering pipeline can currently be loaded from pipeline() using the following task identifier: "table-question-answering".

The models that this pipeline can use are models that have been fine-tuned on a tabular question answering task. See the up-to-date list of available models on huggingface.co/models.
- Parameters
  model (PreTrainedModel or TFPreTrainedModel) – The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.

  tokenizer (PreTrainedTokenizer) – The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.

  modelcard (str or ModelCard, optional) – Model card attributed to the model for this pipeline.

  framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  task (str, defaults to "") – A task identifier for the pipeline.

  args_parser (ArgumentHandler, optional) – Reference to the object in charge of parsing supplied pipeline parameters.

  device (int, optional, defaults to -1) – Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU; a positive integer will run the model on the associated CUDA device id.

  binary_output (bool, optional, defaults to False) – Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
__call__(*args, **kwargs)[source]¶

Answers queries according to a table. The pipeline accepts several types of inputs, detailed below:

- pipeline(table, query)
- pipeline(table, [query])
- pipeline(table=table, query=query)
- pipeline(table=table, query=[query])
- pipeline({"table": table, "query": query})
- pipeline({"table": table, "query": [query]})
- pipeline([{"table": table, "query": query}, {"table": table, "query": query}])

The table argument should be a dict or a DataFrame built from that dict, containing the whole table:

Example:

data = {
    "actors": ["brad pitt", "leonardo di caprio", "george clooney"],
    "age": ["56", "45", "59"],
    "number of movies": ["87", "53", "69"],
    "date of birth": ["7 february 1967", "10 june 1996", "28 november 1967"],
}

This dictionary can be passed in as such, or can be converted to a pandas DataFrame:

Example:

import pandas as pd

table = pd.DataFrame.from_dict(data)
- Parameters
  table (pd.DataFrame or Dict) – Pandas DataFrame or dictionary that will be converted to a DataFrame containing all the table values. See above for an example of a dictionary.

  query (str or List[str]) – Query or list of queries that will be sent to the model alongside the table.

  sequential (bool, optional, defaults to False) – Whether to do inference sequentially or as a batch. Batching is faster, but models like SQA require the inference to be done sequentially to extract relations within sequences, given their conversational nature.

  padding (bool, str or PaddingStrategy, optional, defaults to False) – Activates and controls padding. Accepts the following values:
  - True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - 'max_length': Pad to a maximum length specified with the argument max_length, or to the maximum acceptable input length for the model if that argument is not provided.
  - False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).

  truncation (bool, str or TapasTruncationStrategy, optional, defaults to False) – Activates and controls truncation. Accepts the following values:
  - True or 'drop_rows_to_fit': Truncate to a maximum length specified with the argument max_length, or to the maximum acceptable input length for the model if that argument is not provided. This will truncate row by row, removing rows from the table.
  - False or 'do_not_truncate' (default): No truncation (i.e., can output a batch with sequence lengths greater than the model's maximum admissible input size).
- Returns
  Each result is a dictionary with the following keys:
  - answer (str) – The answer to the query given the table. If there is an aggregator, the answer will be preceded by AGGREGATOR >.
  - coordinates (List[Tuple[int, int]]) – Coordinates of the cells of the answers.
  - cells (List[str]) – List of strings made up of the answer cell values.
  - aggregator (str) – If the model has an aggregator, this returns the aggregator.
- Return type
  A dictionary or a list of dictionaries containing results
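A sketch reusing the data dictionary from the example above (the default table-question-answering checkpoint is chosen by the library; remember this pipeline is PyTorch-only):

from transformers import pipeline

table_qa = pipeline('table-question-answering')
data = {
    'actors': ['brad pitt', 'leonardo di caprio', 'george clooney'],
    'age': ['56', '45', '59'],
    'number of movies': ['87', '53', '69'],
}
result = table_qa(table=data, query='how many movies does brad pitt have?')
print(result['answer'])  # e.g. '87', possibly preceded by an aggregator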
TextClassificationPipeline¶
class transformers.TextClassificationPipeline(return_all_scores: bool = False, **kwargs)[source]¶

Text classification pipeline using any ModelForSequenceClassification. See the sequence classification examples for more information.

This text classification pipeline can currently be loaded from pipeline() using the following task identifier: "sentiment-analysis" (for classifying sequences according to positive or negative sentiments).

If multiple classification labels are available (model.config.num_labels >= 2), the pipeline will run a softmax over the results. If there is a single label, the pipeline will run a sigmoid over the result.

The models that this pipeline can use are models that have been fine-tuned on a sequence classification task. See the up-to-date list of available models on huggingface.co/models.
- Parameters
  model (PreTrainedModel or TFPreTrainedModel) – The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.

  tokenizer (PreTrainedTokenizer) – The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.

  modelcard (str or ModelCard, optional) – Model card attributed to the model for this pipeline.

  framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  task (str, defaults to "") – A task identifier for the pipeline.

  args_parser (ArgumentHandler, optional) – Reference to the object in charge of parsing supplied pipeline parameters.

  device (int, optional, defaults to -1) – Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU; a positive integer will run the model on the associated CUDA device id.

  binary_output (bool, optional, defaults to False) – Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.

  return_all_scores (bool, optional, defaults to False) – Whether to return all prediction scores or just the one of the predicted class.
__call__(*args, **kwargs)[source]¶

Classify the text(s) given as inputs.

- Parameters
  args (str or List[str]) – One or several texts (or one list of prompts) to classify.
- Returns
  Each result comes as a list of dictionaries with the following keys:
  - label (str) – The label predicted.
  - score (float) – The corresponding probability.

  If self.return_all_scores=True, one such dictionary is returned per label.
- Return type
  A list or a list of lists of dict
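A sketch showing the effect of return_all_scores (label names depend on the checkpoint; the library's default sentiment model uses POSITIVE/NEGATIVE labels):

from transformers import pipeline

classifier = pipeline('sentiment-analysis', return_all_scores=True)
results = classifier('This movie was surprisingly good!')
for scores in results:       # one inner list per input text
    for entry in scores:     # one dict per label
        print(entry['label'], entry['score'])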
TextGenerationPipeline¶
class transformers.TextGenerationPipeline(*args, return_full_text=True, **kwargs)[source]¶

Language generation pipeline using any ModelWithLMHead. This pipeline predicts the words that will follow a specified text prompt.

This language generation pipeline can currently be loaded from pipeline() using the following task identifier: "text-generation".

The models that this pipeline can use are models that have been trained with an autoregressive language modeling objective, which includes the uni-directional models in the library (e.g. gpt2). See the list of available models on huggingface.co/models.
- Parameters
  model (PreTrainedModel or TFPreTrainedModel) – The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.

  tokenizer (PreTrainedTokenizer) – The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.

  modelcard (str or ModelCard, optional) – Model card attributed to the model for this pipeline.

  framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  task (str, defaults to "") – A task identifier for the pipeline.

  args_parser (ArgumentHandler, optional) – Reference to the object in charge of parsing supplied pipeline parameters.

  device (int, optional, defaults to -1) – Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU; a positive integer will run the model on the associated CUDA device id.

  binary_output (bool, optional, defaults to False) – Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
__call__(text_inputs, return_tensors=False, return_text=True, return_full_text=None, clean_up_tokenization_spaces=False, prefix=None, **generate_kwargs)[source]¶

Complete the prompt(s) given as inputs.

- Parameters
  args (str or List[str]) – One or several prompts (or one list of prompts) to complete.

  return_tensors (bool, optional, defaults to False) – Whether or not to include the tensors of predictions (as token indices) in the outputs.

  return_text (bool, optional, defaults to True) – Whether or not to include the decoded texts in the outputs.

  return_full_text (bool, optional, defaults to True) – If set to False, only the added text is returned; otherwise the full text is returned. Only meaningful if return_text is set to True.

  clean_up_tokenization_spaces (bool, optional, defaults to False) – Whether or not to clean up the potential extra spaces in the text output.

  prefix (str, optional) – Prefix added to the prompt.

  generate_kwargs – Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework).
- Returns
  Each result comes as a dictionary with the following keys:
  - generated_text (str, present when return_text=True) – The generated text.
  - generated_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) – The token ids of the generated text.
- Return type
  A list or a list of lists of dict
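A sketch combining the arguments above with generate keyword arguments (at the time of writing the default "text-generation" checkpoint is a GPT-2 model; sampled outputs are non-deterministic):

from transformers import pipeline

generator = pipeline('text-generation')
outputs = generator(
    'Once upon a time,',
    max_length=30,           # forwarded to the model's generate method
    do_sample=True,          # sampling is required for several distinct sequences
    num_return_sequences=2,  # also forwarded to generate
    return_full_text=False,  # only return the newly generated text
)
for out in outputs:
    print(out['generated_text'])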
Text2TextGenerationPipeline¶
class transformers.Text2TextGenerationPipeline(*args, **kwargs)[source]¶

Pipeline for text-to-text generation using seq2seq models.

This Text2TextGenerationPipeline can currently be loaded from pipeline() using the following task identifier: "text2text-generation".

The models that this pipeline can use are models that have been fine-tuned on a translation task. See the up-to-date list of available models on huggingface.co/models.
Usage:
text2text_generator = pipeline("text2text-generation")
text2text_generator("question: What is 42 ? context: 42 is the answer to life, the universe and everything")
- Parameters
  model (PreTrainedModel or TFPreTrainedModel) – The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.

  tokenizer (PreTrainedTokenizer) – The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.

  modelcard (str or ModelCard, optional) – Model card attributed to the model for this pipeline.

  framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  task (str, defaults to "") – A task identifier for the pipeline.

  args_parser (ArgumentHandler, optional) – Reference to the object in charge of parsing supplied pipeline parameters.

  device (int, optional, defaults to -1) – Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU; a positive integer will run the model on the associated CUDA device id.

  binary_output (bool, optional, defaults to False) – Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
__call__(*args, return_tensors=False, return_text=True, clean_up_tokenization_spaces=False, truncation=<TruncationStrategy.DO_NOT_TRUNCATE: 'do_not_truncate'>, **generate_kwargs)[source]¶

Generate the output text(s) using the text(s) given as inputs.

- Parameters
  args (str or List[str]) – Input text for the encoder.

  return_tensors (bool, optional, defaults to False) – Whether or not to include the tensors of predictions (as token indices) in the outputs.

  return_text (bool, optional, defaults to True) – Whether or not to include the decoded texts in the outputs.

  clean_up_tokenization_spaces (bool, optional, defaults to False) – Whether or not to clean up the potential extra spaces in the text output.

  truncation (TruncationStrategy, optional, defaults to TruncationStrategy.DO_NOT_TRUNCATE) – The truncation strategy for the tokenization within the pipeline. TruncationStrategy.DO_NOT_TRUNCATE (default) will never truncate, but it is sometimes desirable to truncate the input to fit the model's max_length instead of throwing an error down the line.

  generate_kwargs – Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework).
- Returns
  Each result comes as a dictionary with the following keys:
  - generated_text (str, present when return_text=True) – The generated text.
  - generated_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) – The token ids of the generated text.
- Return type
  A list or a list of lists of dict
TokenClassificationPipeline¶
class transformers.TokenClassificationPipeline(model: Union[PreTrainedModel, TFPreTrainedModel], tokenizer: transformers.tokenization_utils.PreTrainedTokenizer, modelcard: Optional[transformers.modelcard.ModelCard] = None, framework: Optional[str] = None, args_parser: transformers.pipelines.base.ArgumentHandler = <transformers.pipelines.token_classification.TokenClassificationArgumentHandler object>, device: int = -1, binary_output: bool = False, ignore_labels=['O'], task: str = '', grouped_entities: Optional[bool] = None, ignore_subwords: Optional[bool] = None, aggregation_strategy: Optional[transformers.pipelines.token_classification.AggregationStrategy] = None)[source]¶

Named Entity Recognition pipeline using any ModelForTokenClassification. See the named entity recognition examples for more information.

This token recognition pipeline can currently be loaded from pipeline() using the following task identifier: "ner" (for predicting the classes of tokens in a sequence: person, organisation, location or miscellaneous).

The models that this pipeline can use are models that have been fine-tuned on a token classification task. See the up-to-date list of available models on huggingface.co/models.
- Parameters
  model (PreTrainedModel or TFPreTrainedModel) – The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.

  tokenizer (PreTrainedTokenizer) – The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.

  modelcard (str or ModelCard, optional) – Model card attributed to the model for this pipeline.

  framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  task (str, defaults to "") – A task identifier for the pipeline.

  args_parser (ArgumentHandler, optional) – Reference to the object in charge of parsing supplied pipeline parameters.

  device (int, optional, defaults to -1) – Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU; a positive integer will run the model on the associated CUDA device id.

  binary_output (bool, optional, defaults to False) – Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.

  ignore_labels (List[str], defaults to ["O"]) – A list of labels to ignore.

  grouped_entities (bool, optional, defaults to False) – DEPRECATED, use aggregation_strategy instead. Whether or not to group the tokens corresponding to the same entity together in the predictions.

  aggregation_strategy (str, optional, defaults to "none") – The strategy to fuse (or not) tokens based on the model prediction.
  - "none": Will not do any aggregation and simply return the raw results from the model.
  - "simple": Will attempt to group entities following the default schema: (A, B-TAG), (B, I-TAG), (C, I-TAG), (D, B-TAG2), (E, B-TAG2) will end up being [{"word": "ABC", "entity": "TAG"}, {"word": "D", "entity": "TAG2"}, {"word": "E", "entity": "TAG2"}]. Notice that two consecutive B tags will end up as different entities. On word-based languages, we might end up splitting words undesirably: imagine "Microsoft" being tagged as [{"word": "Micro", "entity": "ENTERPRISE"}, {"word": "soft", "entity": "NAME"}]. Look at the "first", "max" and "average" strategies for ways to mitigate this and disambiguate words (on languages that support that meaning, which is basically tokens separated by a space). These mitigations will only work on real words; "New york" might still be tagged with two different entities.
  - "first": (works only on word-based models) Will use the "simple" strategy, except that words cannot end up with different tags. Words will simply use the tag of the first token of the word when there is ambiguity.
  - "average": (works only on word-based models) Will use the "simple" strategy, except that words cannot end up with different tags. Scores are averaged first across tokens, and then the label with the maximum score is applied.
  - "max": (works only on word-based models) Will use the "simple" strategy, except that words cannot end up with different tags. The word entity will simply be the token with the maximum score.
__call__(inputs: Union[str, List[str]], **kwargs)[source]¶

Classify each token of the text(s) given as inputs.

- Parameters
  inputs (str or List[str]) – One or several texts (or one list of texts) for token classification.
- Returns
  Each result comes as a list of dictionaries (one for each token in the corresponding input, or each entity if this pipeline was instantiated with an aggregation_strategy) with the following keys:
  - word (str) – The token/word classified.
  - score (float) – The corresponding probability for entity.
  - entity (str) – The entity predicted for that token/word (it is named entity_group when aggregation_strategy is not "none").
  - index (int, only present when aggregation_strategy="none") – The index of the corresponding token in the sentence.
  - start (int, optional) – The index of the start of the corresponding entity in the sentence. Only exists if the offsets are available within the tokenizer.
  - end (int, optional) – The index of the end of the corresponding entity in the sentence. Only exists if the offsets are available within the tokenizer.
- Return type
  A list or a list of lists of dict
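A sketch using the aggregation_strategy parameter described above (note that with aggregation, the key is entity_group rather than entity; the default "ner" checkpoint is chosen by the library):

from transformers import pipeline

ner = pipeline('ner', aggregation_strategy='simple')
for entity in ner('My name is Wolfgang and I live in Berlin.'):
    print(entity['word'], entity['entity_group'], entity['score'])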
aggregate_words(entities: List[dict], aggregation_strategy: transformers.pipelines.token_classification.AggregationStrategy) → List[dict][source]¶

Override tokens from a given word that disagree, to force agreement on word boundaries.

Example: "micro|soft| com|pany|" tagged as B-ENT I-NAME I-ENT I-ENT will be rewritten with the "first" strategy as "microsoft| company|" tagged B-ENT I-ENT.
gather_pre_entities(sentence: str, input_ids: numpy.ndarray, scores: numpy.ndarray, offset_mapping: Optional[List[Tuple[int, int]]], special_tokens_mask: numpy.ndarray) → List[dict][source]¶

Fuse various numpy arrays into dicts with all the information needed for aggregation.
TranslationPipeline¶
class transformers.TranslationPipeline(*args, src_lang=None, tgt_lang=None, **kwargs)[source]¶

Translates from one language to another.

This translation pipeline can currently be loaded from pipeline() using the following task identifier: "translation_xx_to_yy".

The models that this pipeline can use are models that have been fine-tuned on a translation task. See the up-to-date list of available models on huggingface.co/models.
Usage:

en_fr_translator = pipeline("translation_en_to_fr")
en_fr_translator("How old are you?")
- Parameters
  model (PreTrainedModel or TFPreTrainedModel) – The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.

  tokenizer (PreTrainedTokenizer) – The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.

  modelcard (str or ModelCard, optional) – Model card attributed to the model for this pipeline.

  framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  task (str, defaults to "") – A task identifier for the pipeline.

  args_parser (ArgumentHandler, optional) – Reference to the object in charge of parsing supplied pipeline parameters.

  device (int, optional, defaults to -1) – Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU; a positive integer will run the model on the associated CUDA device id.

  binary_output (bool, optional, defaults to False) – Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
__call__(*args, return_tensors=False, return_text=True, clean_up_tokenization_spaces=False, truncation=<TruncationStrategy.DO_NOT_TRUNCATE: 'do_not_truncate'>, src_lang=None, tgt_lang=None, **generate_kwargs)[source]¶

Translate the text(s) given as inputs.

- Parameters
  args (str or List[str]) – Texts to be translated.

  return_tensors (bool, optional, defaults to False) – Whether or not to include the tensors of predictions (as token indices) in the outputs.

  return_text (bool, optional, defaults to True) – Whether or not to include the decoded texts in the outputs.

  clean_up_tokenization_spaces (bool, optional, defaults to False) – Whether or not to clean up the potential extra spaces in the text output.

  src_lang (str, optional) – The language of the input. Might be required for multilingual models. Will not have any effect for single-pair translation models.

  tgt_lang (str, optional) – The language of the desired output. Might be required for multilingual models. Will not have any effect for single-pair translation models.

  generate_kwargs – Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework).
- Returns
  Each result comes as a dictionary with the following keys:
  - translation_text (str, present when return_text=True) – The translation.
  - translation_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) – The token ids of the translation.
- Return type
  A list or a list of lists of dict
ZeroShotClassificationPipeline¶
class transformers.ZeroShotClassificationPipeline(args_parser=<transformers.pipelines.zero_shot_classification.ZeroShotClassificationArgumentHandler object>, *args, **kwargs)[source]¶

NLI-based zero-shot classification pipeline using a ModelForSequenceClassification trained on NLI (natural language inference) tasks.

Any combination of sequences and labels can be passed, and each combination will be posed as a premise/hypothesis pair and passed to the pretrained model. Then, the logit for entailment is taken as the logit for the candidate label being valid. Any NLI model can be used, but the id of the entailment label must be included in the model config's label2id.

This NLI pipeline can currently be loaded from pipeline() using the following task identifier: "zero-shot-classification".

The models that this pipeline can use are models that have been fine-tuned on an NLI task. See the up-to-date list of available models on huggingface.co/models.
- Parameters
model (PreTrainedModel or TFPreTrainedModel) – The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) – The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) – Model card attributed to the model for this pipeline.
framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") – A task identifier for the pipeline.
args_parser (ArgumentHandler, optional) – Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) – Device ordinal for CPU/GPU support. Setting this to -1 will run on CPU; a non-negative value will run the model on the associated CUDA device id.
binary_output (bool, optional, defaults to False) – Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
-
__call__(sequences: Union[str, List[str]], candidate_labels, hypothesis_template='This example is {}.', multi_label=False, **kwargs)[source]¶
Classify the sequence(s) given as inputs. See the ZeroShotClassificationPipeline documentation for more information (a usage sketch follows the return description below).
- Parameters
sequences (str or List[str]) – The sequence(s) to classify; they will be truncated if the model input is too large.
candidate_labels (str or List[str]) – The set of possible class labels to classify each sequence into. Can be a single label, a string of comma-separated labels, or a list of labels.
hypothesis_template (str, optional, defaults to "This example is {}.") – The template used to turn each label into an NLI-style hypothesis. This template must include a {} or similar syntax for the candidate label to be inserted into the template. For example, with the default template "This example is {}." and the candidate label "sports", the input fed to the model looks like "<cls> sequence to classify <sep> This example is sports. <sep>". The default template works well in many cases, but it may be worthwhile to experiment with different templates depending on the task setting.
multi_label (bool, optional, defaults to False) – Whether or not multiple candidate labels can be true. If False, the scores are normalized such that the sum of the label likelihoods for each sequence is 1. If True, the labels are considered independent and probabilities are normalized for each candidate by doing a softmax of the entailment score vs. the contradiction score.
- Returns
Each result comes as a dictionary with the following keys:
sequence (str) – The sequence for which this is the output.
labels (List[str]) – The labels sorted by order of likelihood.
scores (List[float]) – The probabilities for each of the labels.
- Return type
A dict or a list of dict
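A minimal usage sketch (the candidate labels and input sentence are illustrative; with no explicit model, the task's default NLI checkpoint is downloaded):

from transformers import pipeline

classifier = pipeline("zero-shot-classification")

result = classifier(
    "one day I will see the world",
    candidate_labels=["travel", "cooking", "dancing"],
)
# result["labels"] is sorted by likelihood, and result["scores"] sums to 1
# because multi_label defaults to False.

# With multi_label=True, each label is scored independently
# (entailment vs. contradiction), so the scores no longer need to sum to 1.
result = classifier(
    "one day I will see the world",
    candidate_labels=["travel", "cooking", "exploration"],
    multi_label=True,
)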
Parent class: Pipeline¶
-
class transformers.Pipeline(model: Union[PreTrainedModel, TFPreTrainedModel], tokenizer: Optional[transformers.tokenization_utils.PreTrainedTokenizer] = None, feature_extractor: Optional[SequenceFeatureExtractor] = None, modelcard: Optional[transformers.modelcard.ModelCard] = None, framework: Optional[str] = None, task: str = '', args_parser: transformers.pipelines.base.ArgumentHandler = None, device: int = -1, binary_output: bool = False)[source]¶
The Pipeline class is the class from which all pipelines inherit. Refer to this class for methods shared across different pipelines.
Base class implementing pipelined operations. The pipeline workflow is defined as a sequence of the following operations:
Input -> Tokenization -> Model Inference -> Post-Processing (task dependent) -> Output
The pipeline supports running on CPU or GPU through the device argument (see below).
Some pipelines, such as FeatureExtractionPipeline ('feature-extraction'), output large tensor objects as nested lists. To avoid dumping such large structures as textual data, the binary_output constructor argument is provided. If set to True, the output will be stored in the pickle format.
- Parameters
model (PreTrainedModel or TFPreTrainedModel) – The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) – The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) – Model card attributed to the model for this pipeline.
framework (str, optional) – The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") – A task identifier for the pipeline.
args_parser (ArgumentHandler, optional) – Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) – Device ordinal for CPU/GPU support. Setting this to -1 will run on CPU; a non-negative value will run the model on the associated CUDA device id (see the sketch after this parameter list).
binary_output (bool, optional, defaults to False) – Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
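A minimal sketch of the device argument described above ("sentiment-analysis" is just an example task):

from transformers import pipeline

pipe = pipeline("sentiment-analysis", device=-1)   # -1 = CPU (the default)
# pipe = pipeline("sentiment-analysis", device=0)  # 0 = first CUDA device, if available
print(pipe("Pipelines make inference straightforward."))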
-
check_model_type(supported_models: Union[List[str], dict])[source]¶
Check if the model class is supported by the pipeline.
- Parameters
supported_models (List[str] or dict) – The list of models supported by the pipeline, or a dictionary with model class values.
-
device_placement()[source]¶
Context manager allowing tensor allocation on the user-specified device in a framework-agnostic way.
- Returns
Context manager
Examples:
# Explicitly ask for tensor allocation on CUDA device :0
pipe = pipeline(..., device=0)
with pipe.device_placement():
    # Every framework-specific tensor allocation will be done on the requested device
    output = pipe(...)
-
ensure_tensor_on_device(**inputs)[source]¶
Ensure PyTorch tensors are on the specified device (a short sketch follows the return description below).
- Parameters
inputs (keyword arguments that should be torch.Tensor) – The tensors to place on self.device.
- Returns
The same as inputs but on the proper device.
- Return type
Dict[str, torch.Tensor]
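A sketch of manually moving encoded inputs onto the pipeline's device (the task and input text are illustrative; assumes a PyTorch-backed pipeline):

from transformers import pipeline

pipe = pipeline("sentiment-analysis")  # any PyTorch pipeline exposes this helper
encoded = pipe.tokenizer("Pipelines are easy to use.", return_tensors="pt")
encoded = pipe.ensure_tensor_on_device(**encoded)  # tensors now live on pipe.device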
-
predict(X)[source]¶
Scikit-learn / Keras interface to transformers' pipelines. This method will forward to __call__().