Transformers documentation

SpeechT5

SpeechT5

Overview

The SpeechT5 model was proposed in SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.

The abstract from the paper is the following:

Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.

This model was contributed by Matthijs. The original code can be found here.

SpeechT5Config

class transformers.SpeechT5Config

( vocab_size = 81 hidden_size = 768 encoder_layers = 12 encoder_attention_heads = 12 encoder_ffn_dim = 3072 encoder_layerdrop = 0.1 decoder_layers = 6 decoder_ffn_dim = 3072 decoder_attention_heads = 12 decoder_layerdrop = 0.1 hidden_act = 'gelu' positional_dropout = 0.1 hidden_dropout = 0.1 attention_dropout = 0.1 activation_dropout = 0.1 initializer_range = 0.02 layer_norm_eps = 1e-05 scale_embedding = False feat_extract_norm = 'group' feat_proj_dropout = 0.0 feat_extract_activation = 'gelu' conv_dim = (512, 512, 512, 512, 512, 512, 512) conv_stride = (5, 2, 2, 2, 2, 2, 2) conv_kernel = (10, 3, 3, 3, 3, 2, 2) conv_bias = False num_conv_pos_embeddings = 128 num_conv_pos_embedding_groups = 16 apply_spec_augment = True mask_time_prob = 0.05 mask_time_length = 10 mask_time_min_masks = 2 mask_feature_prob = 0.0 mask_feature_length = 10 mask_feature_min_masks = 0 pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 decoder_start_token_id = 2 num_mel_bins = 80 speech_decoder_prenet_layers = 2 speech_decoder_prenet_units = 256 speech_decoder_prenet_dropout = 0.5 speaker_embedding_dim = 512 speech_decoder_postnet_layers = 5 speech_decoder_postnet_units = 256 speech_decoder_postnet_kernel = 5 speech_decoder_postnet_dropout = 0.5 reduction_factor = 2 max_speech_positions = 4000 max_text_positions = 450 encoder_max_relative_position = 160 use_cache = True is_encoder_decoder = True **kwargs )

Parameters

  • vocab_size (int, optional, defaults to 81) — Vocabulary size of the SpeechT5 model. Defines the number of different tokens that can be represented by the input_ids passed to the forward method of SpeechT5Model.
  • hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
  • encoder_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
  • encoder_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
  • encoder_ffn_dim (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
  • encoder_layerdrop (float, optional, defaults to 0.1) — The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
  • decoder_layers (int, optional, defaults to 6) — Number of hidden layers in the Transformer decoder.
  • decoder_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer decoder.
  • decoder_ffn_dim (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer decoder.
  • decoder_layerdrop (float, optional, defaults to 0.1) — The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
  • hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
  • positional_dropout (float, optional, defaults to 0.1) — The dropout probability for the text position encoding layers.
  • hidden_dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
  • attention_dropout (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
  • activation_dropout (float, optional, defaults to 0.1) — The dropout ratio for activations inside the fully connected layer.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers.
  • scale_embedding (bool, optional, defaults to False) — Scale embeddings by dividing by sqrt(d_model).
  • feat_extract_norm (str, optional, defaults to "group") — The norm to be applied to 1D convolutional layers in the speech encoder pre-net. One of "group" for group normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D convolutional layers.
  • feat_proj_dropout (float, optional, defaults to 0.0) — The dropout probability for output of the speech encoder pre-net.
  • feat_extract_activation (str, optional, defaults to "gelu") — The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
  • conv_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 512, 512, 512)) — A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the speech encoder pre-net. The length of conv_dim defines the number of 1D convolutional layers.
  • conv_stride (Tuple[int] or List[int], optional, defaults to (5, 2, 2, 2, 2, 2, 2)) — A tuple of integers defining the stride of each 1D convolutional layer in the speech encoder pre-net. The length of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.
  • conv_kernel (Tuple[int] or List[int], optional, defaults to (10, 3, 3, 3, 3, 2, 2)) — A tuple of integers defining the kernel size of each 1D convolutional layer in the speech encoder pre-net. The length of conv_kernel defines the number of convolutional layers and has to match the length of conv_dim.
  • conv_bias (bool, optional, defaults to False) — Whether the 1D convolutional layers have a bias.
  • num_conv_pos_embeddings (int, optional, defaults to 128) — Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer.
  • num_conv_pos_embedding_groups (int, optional, defaults to 16) — Number of groups of 1D convolutional positional embeddings layer.
  • apply_spec_augment (bool, optional, defaults to True) — Whether to apply SpecAugment data augmentation to the outputs of the speech encoder pre-net. For reference see SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition.
  • mask_time_prob (float, optional, defaults to 0.05) — Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates mask_time_prob*len(time_axis)/mask_time_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_time_prob should be prob_vector_start*mask_time_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
  • mask_time_length (int, optional, defaults to 10) — Length of vector span along the time axis.
  • mask_time_min_masks (int, optional, defaults to 2) — The minimum number of masks of length mask_time_length generated along the time axis, each time step, irrespective of mask_time_prob. Only relevant if mask_time_prob*len(time_axis)/mask_time_length < mask_time_min_masks.
  • mask_feature_prob (float, optional, defaults to 0.0) — Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates mask_feature_prob*len(feature_axis)/mask_feature_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_feature_prob should be prob_vector_start*mask_feature_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
  • mask_feature_length (int, optional, defaults to 10) — Length of vector span along the feature axis.
  • mask_feature_min_masks (int, optional, defaults to 0) — The minimum number of masks of length mask_feature_length generated along the feature axis, each time step, irrespective of mask_feature_prob. Only relevant if mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks.
  • num_mel_bins (int, optional, defaults to 80) — Number of mel features used per input frame. Used by the speech decoder pre-net. Should correspond to the value used in the SpeechT5Processor class.
  • speech_decoder_prenet_layers (int, optional, defaults to 2) — Number of layers in the speech decoder pre-net.
  • speech_decoder_prenet_units (int, optional, defaults to 256) — Dimensionality of the layers in the speech decoder pre-net.
  • speech_decoder_prenet_dropout (float, optional, defaults to 0.5) — The dropout probability for the speech decoder pre-net layers.
  • speaker_embedding_dim (int, optional, defaults to 512) — Dimensionality of the XVector embedding vectors.
  • speech_decoder_postnet_layers (int, optional, defaults to 5) — Number of layers in the speech decoder post-net.
  • speech_decoder_postnet_units (int, optional, defaults to 256) — Dimensionality of the layers in the speech decoder post-net.
  • speech_decoder_postnet_kernel (int, optional, defaults to 5) — Kernel size of the 1D convolutional layers in the speech decoder post-net.
  • speech_decoder_postnet_dropout (float, optional, defaults to 0.5) — The dropout probability for the speech decoder post-net layers.
  • reduction_factor (int, optional, defaults to 2) — Spectrogram length reduction factor for the speech decoder post-net.
  • max_speech_positions (int, optional, defaults to 4000) — The maximum sequence length of speech features that this model might ever be used with.
  • max_text_positions (int, optional, defaults to 450) — The maximum sequence length of text features that this model might ever be used with.
  • encoder_max_relative_position (int, optional, defaults to 160) — Maximum distance for relative position embedding in the encoder.
  • use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models).

This is the configuration class to store the configuration of a SpeechT5Model. It is used to instantiate a SpeechT5 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SpeechT5 microsoft/speecht5_asr architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

>>> from transformers import SpeechT5Model, SpeechT5Config

>>> # Initializing a "microsoft/speecht5_asr" style configuration
>>> configuration = SpeechT5Config()

>>> # Initializing a model (with random weights) from the "microsoft/speecht5_asr" style configuration
>>> model = SpeechT5Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

SpeechT5HifiGanConfig

class transformers.SpeechT5HifiGanConfig

( model_in_dim = 80 sampling_rate = 16000 upsample_initial_channel = 512 upsample_rates = [4, 4, 4, 4] upsample_kernel_sizes = [8, 8, 8, 8] resblock_kernel_sizes = [3, 7, 11] resblock_dilation_sizes = [[1, 3, 5], [1, 3, 5], [1, 3, 5]] initializer_range = 0.01 leaky_relu_slope = 0.1 normalize_before = True **kwargs )

Parameters

  • model_in_dim (int, optional, defaults to 80) — The number of frequency bins in the input log-mel spectrogram.
  • sampling_rate (int, optional, defaults to 16000) — The sampling rate at which the output audio will be generated, expressed in hertz (Hz).
  • upsample_initial_channel (int, optional, defaults to 512) — The number of input channels into the upsampling network.
  • upsample_rates (Tuple[int] or List[int], optional, defaults to [4, 4, 4, 4]) — A tuple of integers defining the stride of each 1D convolutional layer in the upsampling network. The length of upsample_rates defines the number of convolutional layers and has to match the length of upsample_kernel_sizes.
  • upsample_kernel_sizes (Tuple[int] or List[int], optional, defaults to [8, 8, 8, 8]) — A tuple of integers defining the kernel size of each 1D convolutional layer in the upsampling network. The length of upsample_kernel_sizes defines the number of convolutional layers and has to match the length of upsample_rates.
  • resblock_kernel_sizes (Tuple[int] or List[int], optional, defaults to [3, 7, 11]) — A tuple of integers defining the kernel sizes of the 1D convolutional layers in the multi-receptive field fusion (MRF) module.
  • resblock_dilation_sizes (Tuple[Tuple[int]] or List[List[int]], optional, defaults to [[1, 3, 5], [1, 3, 5], [1, 3, 5]]) — A nested tuple of integers defining the dilation rates of the dilated 1D convolutional layers in the multi-receptive field fusion (MRF) module.
  • initializer_range (float, optional, defaults to 0.01) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • leaky_relu_slope (float, optional, defaults to 0.1) — The slope of the negative portion of the leaky ReLU activation.
  • normalize_before (bool, optional, defaults to True) — Whether or not to normalize the spectrogram before vocoding using the vocoder’s learned mean and variance.

This is the configuration class to store the configuration of a SpeechT5HifiGanModel. It is used to instantiate a SpeechT5 HiFi-GAN vocoder model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SpeechT5 microsoft/speecht5_hifigan architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

>>> from transformers import SpeechT5HifiGan, SpeechT5HifiGanConfig

>>> # Initializing a "microsoft/speecht5_hifigan" style configuration
>>> configuration = SpeechT5HifiGanConfig()

>>> # Initializing a model (with random weights) from the "microsoft/speecht5_hifigan" style configuration
>>> model = SpeechT5HifiGan(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
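
A brief sketch of how a vocoder built from this configuration can be used (assuming the microsoft/speecht5_hifigan checkpoint; the random spectrogram below is a placeholder for a real 80-bin log-mel spectrogram, which would normally come from the SpeechT5 speech decoder):

>>> import torch
>>> from transformers import SpeechT5HifiGan

>>> vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

>>> # dummy spectrogram of 100 frames with model_in_dim (80) mel bins each
>>> spectrogram = torch.randn(100, 80)
>>> waveform = vocoder(spectrogram)  # 1D tensor of audio samples at sampling_rate Hz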

SpeechT5Tokenizer

class transformers.SpeechT5Tokenizer

( vocab_file bos_token = '<s>' eos_token = '</s>' unk_token = '<unk>' pad_token = '<pad>' sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None **kwargs )

Parameters

  • vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer.
  • eos_token (str, optional, defaults to "</s>") — The end of sequence token.
  • bos_token (str, optional, defaults to "<s>") — The beginning of sequence token.
  • unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
  • pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
  • sp_model_kwargs (dict, optional) — Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set:

    • enable_sampling: Enable subword regularization.

    • nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.

      • nbest_size = {0,1}: No sampling is performed.
      • nbest_size > 1: samples from the nbest_size results.
      • nbest_size < 0: assuming that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm.
    • alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.

  • sp_model (SentencePieceProcessor) — The SentencePiece processor that is used for every conversion (string, tokens and IDs).

Construct a SpeechT5 tokenizer. Based on SentencePiece.

This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
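
A minimal usage sketch, assuming the tokenizer from the microsoft/speecht5_asr checkpoint (the SentencePiece vocabulary file is downloaded automatically):

>>> from transformers import SpeechT5Tokenizer

>>> tokenizer = SpeechT5Tokenizer.from_pretrained("microsoft/speecht5_asr")

>>> encoding = tokenizer("the quick brown fox")
>>> tokens = tokenizer.convert_ids_to_tokens(encoding.input_ids)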

__call__

( text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None text_pair: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None text_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None text_pair_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 is_split_into_words: bool = False pad_to_multiple_of: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None return_token_type_ids: typing.Optional[bool] = None return_attention_mask: typing.Optional[bool] = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True **kwargs ) BatchEncoding

Parameters

  • text (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
  • text_pair (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
  • text_target (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
  • text_pair_target (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
  • add_special_tokens (bool, optional, defaults to True) — Whether or not to encode the sequences with the special tokens relative to their model.
  • padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values:

    • True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
    • 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
    • False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
  • truncation (bool, str or TruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values:

    • True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
    • 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
    • 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
    • False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
  • max_length (int, optional) — Controls the maximum length to use by one of the truncation/padding parameters.

    If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.

  • stride (int, optional, defaults to 0) — If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
  • is_split_into_words (bool, optional, defaults to False) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
  • pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
  • return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:

    • 'tf': Return TensorFlow tf.constant objects.
    • 'pt': Return PyTorch torch.Tensor objects.
    • 'np': Return Numpy np.ndarray objects.
  • return_token_type_ids (bool, optional) — Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.

    What are token type IDs?

  • return_attention_mask (bool, optional) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.

    What are attention masks?

  • return_overflowing_tokens (bool, optional, defaults to False) — Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead of returning overflowing tokens.
  • return_special_tokens_mask (bool, optional, defaults to False) — Whether or not to return special tokens mask information.
  • return_offsets_mapping (bool, optional, defaults to False) — Whether or not to return (char_start, char_end) for each token.

    This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast, if using Python’s tokenizer, this method will raise NotImplementedError.

  • return_length (bool, optional, defaults to False) — Whether or not to return the lengths of the encoded inputs.
  • verbose (bool, optional, defaults to True) — Whether or not to print more information and warnings. **kwargs — passed to the self.tokenize() method

Returns

BatchEncoding

A BatchEncoding with the following fields:

  • input_ids — List of token ids to be fed to a model.

    What are input IDs?

  • token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).

    What are token type IDs?

  • attention_mask — List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).

    What are attention masks?

  • overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and return_overflowing_tokens=True).

  • num_truncated_tokens — Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).

  • special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).

  • length — The length of the inputs (when return_length=True)

Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.
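
For example, a batch of sentences can be tokenized, padded to the same length, and returned as PyTorch tensors (a sketch, assuming the microsoft/speecht5_asr tokenizer):

>>> from transformers import SpeechT5Tokenizer

>>> tokenizer = SpeechT5Tokenizer.from_pretrained("microsoft/speecht5_asr")

>>> batch = tokenizer(
...     ["the quick brown fox", "jumps over the lazy dog"],
...     padding=True,
...     return_tensors="pt",
... )
>>> batch.input_ids.shape, batch.attention_mask.shape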

save_vocabulary

( save_directory: str filename_prefix: typing.Optional[str] = None )

decode

( token_ids: typing.Union[int, typing.List[int], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')] skip_special_tokens: bool = False clean_up_tokenization_spaces: bool = True **kwargs ) str

Parameters

  • token_ids (Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]) — List of tokenized input ids. Can be obtained using the __call__ method.
  • skip_special_tokens (bool, optional, defaults to False) — Whether or not to remove special tokens in the decoding.
  • clean_up_tokenization_spaces (bool, optional, defaults to True) — Whether or not to clean up the tokenization spaces.
  • kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.

Returns

str

The decoded sentence.

Converts a sequence of ids into a string, using the tokenizer and vocabulary with options to remove special tokens and clean up tokenization spaces.

Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).

batch_decode

( sequences: typing.Union[typing.List[int], typing.List[typing.List[int]], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')] skip_special_tokens: bool = False clean_up_tokenization_spaces: bool = True **kwargs ) List[str]

Parameters

  • sequences (Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]) — List of tokenized input ids. Can be obtained using the __call__ method.
  • skip_special_tokens (bool, optional, defaults to False) — Whether or not to remove special tokens in the decoding.
  • clean_up_tokenization_spaces (bool, optional, defaults to True) — Whether or not to clean up the tokenization spaces.
  • kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.

Returns

List[str]

The list of decoded sentences.

Convert a list of lists of token ids into a list of strings by calling decode.
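
A short round-trip sketch (again assuming the microsoft/speecht5_asr tokenizer): encode a batch of sentences, then decode the resulting ids back to text.

>>> from transformers import SpeechT5Tokenizer

>>> tokenizer = SpeechT5Tokenizer.from_pretrained("microsoft/speecht5_asr")

>>> encoded = tokenizer(["hello world", "good morning"], padding=True)
>>> tokenizer.batch_decode(encoded.input_ids, skip_special_tokens=True)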

SpeechT5FeatureExtractor

class transformers.SpeechT5FeatureExtractor

( feature_size: int = 1 sampling_rate: int = 16000 padding_value: float = 0.0 do_normalize: bool = False num_mel_bins: int = 80 hop_length: int = 16 win_length: int = 64 win_function: str = 'hann_window' frame_signal_scale: float = 1.0 fmin: float = 80 fmax: float = 7600 mel_floor: float = 1e-10 reduction_factor: int = 2 return_attention_mask: bool = True **kwargs )

Parameters

  • feature_size (int, optional, defaults to 1) — The feature dimension of the extracted features.
  • sampling_rate (int, optional, defaults to 16000) — The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
  • padding_value (float, optional, defaults to 0.0) — The value that is used to fill the padding values.
  • do_normalize (bool, optional, defaults to False) — Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly improve the performance for some models.
  • num_mel_bins (int, optional, defaults to 80) — The number of mel-frequency bins in the extracted spectrogram features.
  • hop_length (int, optional, defaults to 16) — Number of ms between windows. Otherwise referred to as “shift” in many papers.
  • win_length (int, optional, defaults to 64) — Number of ms per window.
  • win_function (str, optional, defaults to "hann_window") — Name for the window function used for windowing, must be accessible via torch.{win_function}
  • frame_signal_scale (float, optional, defaults to 1.0) — Constant multiplied in creating the frames before applying DFT.
  • fmin (float, optional, defaults to 80) — Minimum mel frequency in Hz.
  • fmax (float, optional, defaults to 7600) — Maximum mel frequency in Hz.
  • mel_floor (float, optional, defaults to 1e-10) — Minimum value of mel frequency banks.
  • reduction_factor (int, optional, defaults to 2) — Spectrogram length reduction factor.
  • return_attention_mask (bool, optional, defaults to True) — Whether or not call() should return attention_mask.

Constructs a SpeechT5 feature extractor.

This class can pre-process a raw speech signal by (optionally) normalizing to zero-mean unit-variance, for use by the SpeechT5 speech encoder prenet.

This class can also extract log-mel filter bank features from raw speech, for use by the SpeechT5 speech decoder prenet.

This feature extractor inherits from SequenceFeatureExtractor which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

__call__

( audio: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]], NoneType] = None audio_target: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]], NoneType] = None padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False max_length: typing.Optional[int] = None truncation: bool = False pad_to_multiple_of: typing.Optional[int] = None return_attention_mask: typing.Optional[bool] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None sampling_rate: typing.Optional[int] = None **kwargs )

Parameters

  • audio (np.ndarray, List[float], List[np.ndarray], List[List[float]], optional) — The sequence or batch of sequences to be processed. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of list of float values. This outputs waveform features.
  • audio_target (np.ndarray, List[float], List[np.ndarray], List[List[float]], optional) — The sequence or batch of sequences to be processed as targets. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of list of float values. This outputs log-mel spectrogram features.
  • padding (bool, str or PaddingStrategy, optional, defaults to False) — Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among:

    • True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
    • 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
    • False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
  • max_length (int, optional) — Maximum length of the returned list and optionally padding length (see above).
  • truncation (bool) — Activates truncation to cut input sequences longer than max_length to max_length.
  • pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value.

    This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.

  • return_attention_mask (bool, optional) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific feature_extractor’s default.

    What are attention masks?

  • return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:

    • 'tf': Return TensorFlow tf.constant objects.
    • 'pt': Return PyTorch torch.Tensor objects.
    • 'np': Return Numpy np.ndarray objects.
  • sampling_rate (int, optional) — The sampling rate at which the audio or audio_target input was sampled. It is strongly recommended to pass sampling_rate at the forward call to prevent silent errors.

Main method to featurize and prepare for the model one or several sequence(s).

Pass in a value for audio to extract waveform features. Pass in a value for audio_target to extract log-mel spectrogram features.
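
A minimal sketch of both modes, using a silent dummy waveform as a stand-in for real 16 kHz audio:

>>> import numpy as np
>>> from transformers import SpeechT5FeatureExtractor

>>> feature_extractor = SpeechT5FeatureExtractor()

>>> waveform = np.zeros(16000, dtype=np.float32)  # one second of dummy 16 kHz audio

>>> # waveform features for the speech encoder pre-net
>>> inputs = feature_extractor(audio=waveform, sampling_rate=16000, return_tensors="pt")

>>> # log-mel spectrogram features for the speech decoder pre-net
>>> targets = feature_extractor(audio_target=waveform, sampling_rate=16000, return_tensors="pt")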

SpeechT5Processor

class transformers.SpeechT5Processor

( feature_extractor tokenizer )

Parameters

  • feature_extractor (SpeechT5FeatureExtractor) — An instance of SpeechT5FeatureExtractor. The feature extractor is a required input.
  • tokenizer (SpeechT5Tokenizer) — An instance of SpeechT5Tokenizer. The tokenizer is a required input.

Constructs a SpeechT5 processor which wraps a feature extractor and a tokenizer into a single processor.

SpeechT5Processor offers all the functionalities of SpeechT5FeatureExtractor and SpeechT5Tokenizer. See the docstring of call() and decode() for more information.

__call__

( *args **kwargs )

Processes audio and text input, as well as audio and text targets.

You can process audio by using the argument audio, or process audio targets by using the argument audio_target. This forwards the arguments to SpeechT5FeatureExtractor’s call().

You can process text by using the argument text, or process text labels by using the argument text_target. This forwards the arguments to SpeechT5Tokenizer’s call().

Valid input combinations are:

  • text only
  • audio only
  • text_target only
  • audio_target only
  • text and audio_target
  • audio and audio_target
  • text and text_target
  • audio and text_target

Please refer to the docstring of the above two methods for more information.
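
A sketch of two common combinations, assuming the microsoft/speecht5_asr checkpoint for the processor and using a dummy waveform in place of real 16 kHz audio:

>>> import numpy as np
>>> from transformers import SpeechT5Processor

>>> processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_asr")
>>> waveform = np.zeros(16000, dtype=np.float32)  # placeholder for real 16 kHz audio

>>> # speech input with a text label, e.g. for speech-to-text training
>>> asr_inputs = processor(audio=waveform, text_target="a transcription", sampling_rate=16000, return_tensors="pt")

>>> # text input with a speech target, e.g. for text-to-speech training
>>> tts_inputs = processor(text="hello world", audio_target=waveform, sampling_rate=16000, return_tensors="pt")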

pad

( *args **kwargs )

This method forwards all its arguments to SpeechT5FeatureExtractor’s pad() and returns its output.

You can process your labels by using the argument text (either in the same call as your audio inputs, or in a separate call). This forwards its arguments to SpeechT5Tokenizer’s pad().

Please refer to the docstring of the above two methods for more information.

from_pretrained

( pretrained_model_name_or_path **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — This can be either:

    • a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
    • a path to a directory containing a feature extractor file saved using the save_pretrained() method, e.g., ./my_model_directory/.
    • a path or url to a saved feature extractor JSON file, e.g., ./my_model_directory/preprocessor_config.json. **kwargs — Additional keyword arguments passed along to both from_pretrained() and ~tokenization_utils_base.PreTrainedTokenizer.from_pretrained.

Instantiate a processor associated with a pretrained model.

This class method simply calls the feature extractor's from_pretrained(), the image processor's ImageProcessingMixin.from_pretrained() and the tokenizer's ~tokenization_utils_base.PreTrainedTokenizer.from_pretrained() methods. Please refer to the docstrings of the methods above for more information.

save_pretrained

( save_directory push_to_hub: bool = False **kwargs )

Parameters

  • save_directory (str or os.PathLike) — Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will be created if it does not exist).
  • push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace). kwargs — Additional keyword arguments passed along to the push_to_hub() method.

Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it can be reloaded using the from_pretrained() method.

This class method simply calls the feature extractor's save_pretrained() and the tokenizer's save_pretrained() methods. Please refer to the docstrings of the methods above for more information.
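
For example, a processor can be saved locally and reloaded later (a minimal sketch; the directory name is arbitrary):

>>> from transformers import SpeechT5Processor

>>> processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_asr")
>>> processor.save_pretrained("./speecht5_processor")

>>> reloaded_processor = SpeechT5Processor.from_pretrained("./speecht5_processor")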

batch_decode

( *args **kwargs )

This method forwards all its arguments to SpeechT5Tokenizer’s batch_decode(). Please refer to the docstring of this method for more information.

decode

( *args **kwargs )

This method forwards all its arguments to SpeechT5Tokenizer’s decode(). Please refer to the docstring of this method for more information.

SpeechT5Model

class transformers.SpeechT5Model

( config: SpeechT5Config encoder: typing.Optional[torch.nn.modules.module.Module] = None decoder: typing.Optional[torch.nn.modules.module.Module] = None )

Parameters

  • config (SpeechT5Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • encoder (SpeechT5EncoderWithSpeechPrenet or SpeechT5EncoderWithTextPrenet or None) — The Transformer encoder module that applies the appropriate speech or text encoder prenet. If None, SpeechT5EncoderWithoutPrenet will be used and the input_values are assumed to be hidden states.
  • decoder (SpeechT5DecoderWithSpeechPrenet or SpeechT5DecoderWithTextPrenet or None) — The Transformer decoder module that applies the appropriate speech or text decoder prenet. If None, SpeechT5DecoderWithoutPrenet will be used and the decoder_input_values are assumed to be hidden states.

The bare SpeechT5 Encoder-Decoder Model outputting raw hidden-states without any specific pre- or post-nets. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_values: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_values: typing.Optional[torch.Tensor] = None decoder_attention_mask: typing.Optional[torch.BoolTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None decoder_head_mask: typing.Optional[torch.FloatTensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None use_cache: typing.Optional[bool] = None speaker_embeddings: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)

Parameters

  • attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

    attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not.

  • decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_values. Causal mask will also be used by default.

    If you want to change padding behavior, you should read SpeechT5Decoder._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the paper for more information on the default strategy.

  • head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,
    • 0 indicates the head is masked.
  • decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,
    • 0 indicates the head is masked.
  • cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,
    • 0 indicates the head is masked.
  • encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) — Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

    Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

    If past_key_values are used, the user can optionally input only the last decoder_input_values (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_values of shape (batch_size, sequence_length).

  • decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_values you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_values indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Depending on which encoder is being used, the input_values are either: float values of the input raw speech waveform, or indices of input sequence tokens in the vocabulary, or hidden states.
  • decoder_input_values (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Depending on which decoder is being used, the decoder_input_values are either: float values of log-mel filterbank features extracted from the raw speech waveform, or indices of decoder input sequence tokens in the vocabulary, or hidden states.
  • speaker_embeddings (torch.FloatTensor of shape (batch_size, config.speaker_embedding_dim), optional) — Tensor containing the speaker embeddings.

Returns

transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SpeechT5Config) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.

    If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.

  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

    Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.

  • decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

  • cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

  • encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.

  • encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.

  • encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

The SpeechT5Model forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
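
A minimal sketch of calling the bare model with randomly initialized weights. With the default configuration no pre-nets are attached, so both input_values and decoder_input_values are plain hidden states:

>>> import torch
>>> from transformers import SpeechT5Config, SpeechT5Model

>>> config = SpeechT5Config()
>>> model = SpeechT5Model(config)

>>> # without pre-nets, encoder and decoder inputs are hidden states of size hidden_size
>>> input_values = torch.randn(1, 20, config.hidden_size)
>>> decoder_input_values = torch.randn(1, 10, config.hidden_size)

>>> outputs = model(input_values=input_values, decoder_input_values=decoder_input_values)
>>> outputs.last_hidden_state.shape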

SpeechT5ForSpeechToText

class transformers.SpeechT5ForSpeechToText

( config: SpeechT5Config )

Parameters

  • config (SpeechT5Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

SpeechT5 Model with a speech encoder and a text decoder. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
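
A minimal transcription sketch, assuming the microsoft/speecht5_asr checkpoint; the dummy waveform below stands in for real 16 kHz speech (loaded, for instance, with the soundfile library):

>>> import numpy as np
>>> from transformers import SpeechT5Processor, SpeechT5ForSpeechToText

>>> processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_asr")
>>> model = SpeechT5ForSpeechToText.from_pretrained("microsoft/speecht5_asr")

>>> waveform = np.zeros(16000, dtype=np.float32)  # placeholder for real 16 kHz speech
>>> inputs = processor(audio=waveform, sampling_rate=16000, return_tensors="pt")

>>> predicted_ids = model.generate(**inputs, max_length=100)
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)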

forward

( input_values: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.Tensor] = None decoder_attention_mask: typing.Optional[torch.BoolTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None decoder_head_mask: typing.Optional[torch.FloatTensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.Tensor] = None ) transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)

Parameters

  • attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

    attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not.

  • decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_values. Causal mask will also be used by default.

    If you want to change padding behavior, you should read SpeechT5Decoder._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the paper for more information on the default strategy.

  • head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,
    • 0 indicates the head is masked.
  • decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,
    • 0 indicates the head is masked.
  • cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,
    • 0 indicates the head is masked.
  • encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) — Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

    Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

    If past_key_values are used, the user can optionally input only the last decoder_input_values (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_values of shape (batch_size, sequence_length).

  • decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_values you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_values indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the SpeechT5Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See SpeechT5Processor.call() for details.
  • decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary.

    Indices can be obtained using SpeechT5Tokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are decoder input IDs?

    SpeechT5 uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values).

  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].

    Label indices can be obtained using SpeechT5Tokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

Returns

transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SpeechT5Config) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

    Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.

  • decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

  • cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

  • encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.

  • encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.

  • encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

The SpeechT5ForSpeechToText forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

>>> from transformers import SpeechT5Processor, SpeechT5ForSpeechToText
>>> from datasets import load_dataset

>>> dataset = load_dataset(
...     "hf-internal-testing/librispeech_asr_demo", "clean", split="validation"
... )  # doctest: +IGNORE_RESULT
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_asr")
>>> model = SpeechT5ForSpeechToText.from_pretrained("microsoft/speecht5_asr")

>>> # audio file is decoded on the fly
>>> inputs = processor(audio=dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> predicted_ids = model.generate(**inputs, max_length=100)

>>> # transcribe speech
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
>>> transcription[0]
'mister quilter is the apostle of the middle classes and we are glad to welcome his gospel'
>>> inputs["labels"] = processor(text_target=dataset[0]["text"], return_tensors="pt").input_ids

>>> # compute loss
>>> loss = model(**inputs).loss
>>> round(loss.item(), 2)
19.88

SpeechT5ForTextToSpeech

class transformers.SpeechT5ForTextToSpeech

( config: SpeechT5Config )

Parameters

  • config (SpeechT5Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

SpeechT5 Model with a text encoder and a speech decoder. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_values: typing.Optional[torch.Tensor] = None decoder_attention_mask: typing.Optional[torch.BoolTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None decoder_head_mask: typing.Optional[torch.FloatTensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None speaker_embeddings: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None stop_labels: typing.Optional[torch.Tensor] = None ) transformers.modeling_outputs.Seq2SeqSpectrogramOutput or tuple(torch.FloatTensor)

Parameters

  • attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

    attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not.

  • decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_values. Causal mask will also be used by default.

    If you want to change padding behavior, you should read SpeechT5Decoder._prepare_decoder_attention_mask and modify it to your needs. See diagram 1 in the paper for more information on the default strategy.

  • head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,
    • 0 indicates the head is masked.
  • decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,
    • 0 indicates the head is masked.
  • cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,
    • 0 indicates the head is masked.
  • encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) — Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

    Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

    If past_key_values are used, the user can optionally input only the last decoder_input_values (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_values of shape (batch_size, sequence_length).
  • decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_values you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_values indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. The batch_size should be 1 currently.

    Indices can be obtained using SpeechT5Tokenizer. See encode() and __call__() for details.

    What are input IDs?

  • decoder_input_values (torch.FloatTensor of shape (batch_size, sequence_length, config.num_mel_bins)) — Float values of input mel spectrogram.

    SpeechT5 uses an all-zero spectrum as the starting token for decoder_input_values generation. If past_key_values is used, optionally only the last decoder_input_values have to be input (see past_key_values).

  • speaker_embeddings (torch.FloatTensor of shape (batch_size, config.speaker_embedding_dim), optional) — Tensor containing the speaker embeddings.
  • labels (torch.FloatTensor of shape (batch_size, sequence_length, config.num_mel_bins), optional) — Float values of target mel spectrogram. Spectrograms can be obtained using SpeechT5Processor. See SpeechT5Processor.__call__() for details.
  • stop_labels (torch.FloatTensor of shape (batch_size, unreduced_sequence_length), optional) — Labels for computing the stop token loss. Values are 0.0 until the end of the sequence, after which they become 1.0. The sequence length of this tensor is config.reduction_factor times larger than the length of the target mel spectrogram. Labels can be obtained using SpeechT5Processor. See SpeechT5Processor.__call__() for details.

Returns

transformers.modeling_outputs.Seq2SeqSpectrogramOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.Seq2SeqSpectrogramOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SpeechT5Config) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Spectrogram generation loss.

  • spectrogram (torch.FloatTensor of shape (batch_size, sequence_length, num_bins)) — The predicted spectrogram.

  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

    Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.

  • decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

  • cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

  • encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.

  • encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.

  • encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

The SpeechT5ForTextToSpeech forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

>>> from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan, set_seed
>>> import torch

>>> processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
>>> model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
>>> vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

>>> inputs = processor(text="Hello, my dog is cute", return_tensors="pt")
>>> speaker_embeddings = torch.zeros((1, 512))  # or load xvectors from a file

>>> set_seed(555)  # make deterministic

>>> # generate speech
>>> speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
>>> speech.shape
torch.Size([16384])
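
The example above uses an all-zero tensor as the speaker embedding. In practice, speaker_embeddings is usually a pre-computed x-vector for the desired voice. A minimal sketch, assuming a publicly hosted x-vector dataset (the dataset name and row index below are illustrative assumptions, not prescribed by the model; any tensor of shape (batch_size, config.speaker_embedding_dim) works):

>>> from datasets import load_dataset

>>> # hedged sketch: replace the all-zero embedding with a pre-computed x-vector
>>> # "Matthijs/cmu-arctic-xvectors" and row index 7306 are illustrative assumptions
>>> embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
>>> speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)

>>> # generate speech in the selected voice
>>> speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)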

generate_speech

( input_ids: LongTensor speaker_embeddings: typing.Optional[torch.FloatTensor] = None threshold: float = 0.5 minlenratio: float = 0.0 maxlenratio: float = 20.0 vocoder: typing.Optional[torch.nn.modules.module.Module] = None ) torch.FloatTensor

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. The batch_size should be 1 currently.

    Indices can be obtained using SpeechT5Tokenizer. See encode() and __call__() for details.

    What are input IDs?

  • speaker_embeddings (torch.FloatTensor of shape (batch_size, config.speaker_embedding_dim), optional) — Tensor containing the speaker embeddings.
  • threshold (float, optional, defaults to 0.5) — The generated sequence ends when the predicted stop token probability exceeds this value.
  • minlenratio (float, optional, defaults to 0.0) — Used to calculate the minimum required length for the output sequence.
  • maxlenratio (float, optional, defaults to 20.0) — Used to calculate the maximum allowed length for the output sequence.
  • vocoder (nn.Module, optional, defaults to None) — The vocoder that converts the mel spectrogram into a speech waveform. If None, the output is the mel spectrogram.

Returns

torch.FloatTensor

Tensor of shape (output_sequence_length, config.num_mel_bins) containing the predicted mel spectrogram, or a tensor with shape (num_frames,) containing the speech waveform.

Converts a sequence of input tokens into a sequence of mel spectrograms, which are subsequently turned into a speech waveform using a vocoder.
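
If vocoder is omitted, generate_speech returns the mel spectrogram itself, which can be vocoded in a separate step. A minimal sketch continuing the SpeechT5ForTextToSpeech example above (the names inputs, speaker_embeddings, model, and vocoder are taken from that example):

>>> # hedged sketch: generate the mel spectrogram first, then vocode it separately
>>> spectrogram = model.generate_speech(inputs["input_ids"], speaker_embeddings)
>>> # spectrogram has shape (output_sequence_length, config.num_mel_bins), as described above
>>> speech = vocoder(spectrogram)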

SpeechT5ForSpeechToSpeech

class transformers.SpeechT5ForSpeechToSpeech

( config: SpeechT5Config )

Parameters

  • config (SpeechT5Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

SpeechT5 Model with a speech encoder and a speech decoder. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_values: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_values: typing.Optional[torch.Tensor] = None decoder_attention_mask: typing.Optional[torch.BoolTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None decoder_head_mask: typing.Optional[torch.FloatTensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None speaker_embeddings: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None stop_labels: typing.Optional[torch.Tensor] = None ) transformers.modeling_outputs.Seq2SeqSpectrogramOutput or tuple(torch.FloatTensor)

Parameters

  • attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

    attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not.

  • decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_values. Causal mask will also be used by default.

    If you want to change padding behavior, you should read SpeechT5Decoder._prepare_decoder_attention_mask and modify it to your needs. See diagram 1 in the paper for more information on the default strategy.

  • head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,
    • 0 indicates the head is masked.
  • decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,
    • 0 indicates the head is masked.
  • cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,
    • 0 indicates the head is masked.
  • encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) — Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

    Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

    If past_key_values are used, the user can optionally input only the last decoder_input_values (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_values of shape (batch_size, sequence_length).
  • decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_values you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_values indices into associated vectors than the model’s internal embedding lookup matrix.

  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the SpeechT5Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See SpeechT5Processor.__call__() for details.
  • decoder_input_values (torch.FloatTensor of shape (batch_size, sequence_length, config.num_mel_bins)) — Float values of input mel spectrogram.

    SpeechT5 uses an all-zero spectrum as the starting token for decoder_input_values generation. If past_key_values is used, optionally only the last decoder_input_values have to be input (see past_key_values).

  • speaker_embeddings (torch.FloatTensor of shape (batch_size, config.speaker_embedding_dim), optional) — Tensor containing the speaker embeddings.
  • labels (torch.FloatTensor of shape (batch_size, sequence_length, config.num_mel_bins), optional) — Float values of target mel spectrogram. Spectrograms can be obtained using SpeechT5Processor. See SpeechT5Processor.__call__() for details.
  • stop_labels (torch.FloatTensor of shape (batch_size, unreduced_sequence_length), optional) — Labels for computing the stop token loss. Values are 0.0 until the end of the sequence, after which they become 1.0. The sequence length of this tensor is config.reduction_factor times larger than the length of the target mel spectrogram. Labels can be obtained using SpeechT5Processor. See SpeechT5Processor.__call__() for details.

Returns

transformers.modeling_outputs.Seq2SeqSpectrogramOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.Seq2SeqSpectrogramOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SpeechT5Config) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Spectrogram generation loss.

  • spectrogram (torch.FloatTensor of shape (batch_size, sequence_length, num_bins)) — The predicted spectrogram.

  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

    Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.

  • decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

  • cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

  • encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.

  • encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.

  • encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

The SpeechT5ForSpeechToSpeech forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

>>> from transformers import SpeechT5Processor, SpeechT5ForSpeechToSpeech, SpeechT5HifiGan, set_seed
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset(
...     "hf-internal-testing/librispeech_asr_demo", "clean", split="validation"
... )  # doctest: +IGNORE_RESULT
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_vc")
>>> model = SpeechT5ForSpeechToSpeech.from_pretrained("microsoft/speecht5_vc")
>>> vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

>>> # audio file is decoded on the fly
>>> inputs = processor(audio=dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")

>>> speaker_embeddings = torch.zeros((1, 512))  # or load xvectors from a file

>>> set_seed(555)  # make deterministic

>>> # generate speech
>>> speech = model.generate_speech(inputs["input_values"], speaker_embeddings, vocoder=vocoder)
>>> speech.shape
torch.Size([77824])

generate_speech

( input_values: FloatTensor speaker_embeddings: typing.Optional[torch.FloatTensor] = None threshold: float = 0.5 minlenratio: float = 0.0 maxlenratio: float = 20.0 vocoder: typing.Optional[torch.nn.modules.module.Module] = None ) torch.FloatTensor

Parameters

  • input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of input raw speech waveform. The batch_size should be 1 currently.

    Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the SpeechT5Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See SpeechT5Processor.__call__() for details.

  • speaker_embeddings (torch.FloatTensor of shape (batch_size, config.speaker_embedding_dim), optional) — Tensor containing the speaker embeddings.
  • threshold (float, optional, defaults to 0.5) — The generated sequence ends when the predicted stop token probability exceeds this value.
  • minlenratio (float, optional, defaults to 0.0) — Used to calculate the minimum required length for the output sequence.
  • maxlenratio (float, optional, defaults to 20.0) — Used to calculate the maximum allowed length for the output sequence.
  • vocoder (nn.Module, optional, defaults to None) — The vocoder that converts the mel spectrogram into a speech waveform. If None, the output is the mel spectrogram.

Returns

torch.FloatTensor

Tensor of shape (output_sequence_length, config.num_mel_bins) containing the predicted mel spectrogram, or a tensor with shape (num_frames,) containing the speech waveform.

Converts a raw speech waveform into a sequence of mel spectrograms, which are subsequently turned back into a speech waveform using a vocoder.
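
The generation controls behave the same way as for text-to-speech: threshold stops decoding once the predicted stop-token probability exceeds it, while minlenratio and maxlenratio bound the output length relative to the input. A minimal sketch continuing the SpeechT5ForSpeechToSpeech example above (the values shown are illustrative, not recommended settings):

>>> # hedged sketch: voice conversion with explicit generation controls
>>> spectrogram = model.generate_speech(
...     inputs["input_values"],
...     speaker_embeddings,
...     threshold=0.5,  # stop once the predicted stop-token probability exceeds this
...     maxlenratio=10.0,  # cap the output length relative to the input length
... )
>>> speech = vocoder(spectrogram)  # vocode the spectrogram in a separate step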

SpeechT5HifiGan

class transformers.SpeechT5HifiGan

( config: SpeechT5HifiGanConfig )

Parameters

  • config (SpeechT5HifiGanConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

HiFi-GAN vocoder. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( spectrogram ) torch.FloatTensor

Parameters

  • spectrogram (torch.FloatTensor) — Tensor containing the log-mel spectrograms. Can be batched and of shape (batch_size, sequence_length, config.model_in_dim), or un-batched and of shape (sequence_length, config.model_in_dim).

Returns

torch.FloatTensor

Tensor containing the speech waveform. If the input spectrogram is batched, the output will be of shape (batch_size, num_frames). If un-batched, it will be of shape (num_frames,).

Converts a log-mel spectrogram into a speech waveform. Passing a batch of log-mel spectrograms returns a batch of speech waveforms. Passing a single, un-batched log-mel spectrogram returns a single, un-batched speech waveform.
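
The vocoder can also be used on its own. The sketch below feeds it a random placeholder tensor purely to illustrate the batched and un-batched shapes described above; a real log-mel spectrogram would come from SpeechT5ForTextToSpeech or SpeechT5ForSpeechToSpeech.

>>> from transformers import SpeechT5HifiGan
>>> import torch

>>> vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

>>> # hedged sketch: a random stand-in for a real log-mel spectrogram
>>> log_mel = torch.randn(200, vocoder.config.model_in_dim)  # un-batched: (sequence_length, model_in_dim)
>>> waveform = vocoder(log_mel)  # shape (num_frames,)

>>> batched = log_mel.unsqueeze(0)  # batched: (1, sequence_length, model_in_dim)
>>> waveforms = vocoder(batched)  # shape (1, num_frames)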