Optimum documentation


You are viewing the v1.8.6 documentation. A newer version, v1.19.0, is available.

IPU Pipelines

A number of 🤗 pipelines have been adapted for use with IPUs. The available IPU pipelines are listed in the API reference below.
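As a hedged sketch (assuming `optimum-graphcore` and the Poplar SDK are installed and IPU hardware is available; the checkpoint name is an illustrative choice, not a requirement of the API), these pipelines are typically constructed through the `pipeline` function exported by `optimum.graphcore`, mirroring the 🤗 Transformers factory function:

```python
# Hedged sketch: building an IPU-backed pipeline. Assumes optimum-graphcore
# is installed and an IPU environment is available; "bert-base-uncased" is
# an illustrative checkpoint.
from optimum.graphcore import pipeline

fill_masker = pipeline("fill-mask", model="bert-base-uncased")
predictions = fill_masker("The capital of France is [MASK].")
# As with 🤗 Transformers pipelines, each prediction is a dict with
# keys such as "token_str" and "score".
```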

API reference

IPUFillMaskPipeline

Based on the 🤗 FillMaskPipeline pipeline.

class optimum.graphcore.IPUFillMaskPipeline

( model: typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel')], tokenizer: typing.Optional[transformers.tokenization_utils.PreTrainedTokenizer] = None, feature_extractor: typing.Optional[ForwardRef('SequenceFeatureExtractor')] = None, modelcard: typing.Optional[transformers.modelcard.ModelCard] = None, framework: typing.Optional[str] = None, task: str = '', args_parser: ArgumentHandler = None, device: typing.Union[int, str, ForwardRef('torch.device')] = -1, binary_output: bool = False, **kwargs )
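The signature above mirrors the 🤗 Transformers `Pipeline` constructor. A hedged sketch of direct instantiation (assuming an IPU-ready environment; the checkpoint name is illustrative):

```python
# Hedged sketch: instantiating IPUFillMaskPipeline directly with a model
# and tokenizer, following the constructor signature above. Assumes
# optimum-graphcore is installed and IPU hardware is available.
from transformers import AutoModelForMaskedLM, AutoTokenizer
from optimum.graphcore import IPUFillMaskPipeline

# Illustrative masked-LM checkpoint.
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

pipe = IPUFillMaskPipeline(model=model, tokenizer=tokenizer)
results = pipe("Paris is the [MASK] of France.")
```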

IPUText2TextGenerationPipeline

Based on the 🤗 Text2TextGenerationPipeline pipeline.

class optimum.graphcore.pipelines.IPUText2TextGenerationPipeline

( *args, **kwargs )

IPUSummarizationPipeline

Based on the 🤗 SummarizationPipeline pipeline.

class optimum.graphcore.pipelines.IPUSummarizationPipeline

( *args, **kwargs )
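A hedged usage sketch for the summarization pipeline (assuming an IPU environment; the checkpoint and generation arguments are illustrative):

```python
# Hedged sketch: summarization on IPU via the pipeline factory function.
# Assumes optimum-graphcore is installed; "t5-small" is an illustrative
# sequence-to-sequence checkpoint.
from optimum.graphcore import pipeline

summarizer = pipeline("summarization", model="t5-small")
summary = summarizer(
    "The IPU is a processor designed for machine intelligence workloads. "
    "It offers fine-grained parallelism and high on-chip memory bandwidth.",
    max_length=30,
)
# As in 🤗 Transformers, the result is a list of dicts with a
# "summary_text" key.
```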

IPUTranslationPipeline

Based on the 🤗 TranslationPipeline pipeline.

class optimum.graphcore.pipelines.IPUTranslationPipeline

( *args, **kwargs )

IPUTokenClassificationPipeline

Based on the 🤗 TokenClassificationPipeline pipeline.

class optimum.graphcore.IPUTokenClassificationPipeline

( args_parser = &lt;TokenClassificationArgumentHandler instance&gt;, *args, **kwargs )

IPUZeroShotClassificationPipeline

Based on the 🤗 ZeroShotClassificationPipeline pipeline.

class optimum.graphcore.pipelines.IPUZeroShotClassificationPipeline

( args_parser = &lt;ZeroShotClassificationArgumentHandler instance&gt;, *args, **kwargs )

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
  • tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
  • modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
  • framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.

    If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.

  • task (str, defaults to "") — A task identifier for the pipeline.
  • num_workers (int, optional, defaults to 8) — When the pipeline uses a DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to use.
  • batch_size (int, optional, defaults to 1) — When the pipeline uses a DataLoader (when passing a dataset, on GPU for a PyTorch model), the batch size to use. For inference, batching is not always beneficial; please read Batching with pipelines.
  • args_parser (~pipelines.ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
  • device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device ID. You can also pass a native torch.device or a str.
  • binary_output (bool, optional, defaults to False) — Flag indicating whether the pipeline's output should be in a binary format (i.e., pickle) or as raw text.
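A hedged usage sketch for the zero-shot classification pipeline, passing `candidate_labels` at call time as in 🤗 Transformers (assuming an IPU environment; the input text and labels are illustrative):

```python
# Hedged sketch: zero-shot classification on IPU via the pipeline factory
# function. Assumes optimum-graphcore is installed; the default checkpoint
# for the task is used.
from optimum.graphcore import pipeline

classifier = pipeline("zero-shot-classification")
out = classifier(
    "The battery life on this phone is excellent",
    candidate_labels=["technology", "cooking", "sports"],
)
# As in 🤗 Transformers, out["labels"] is ordered by descending
# out["scores"].
```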