Hubert

Overview

Hubert was proposed in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.

The abstract from the paper is the following:

Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.

Tips:

  • Hubert is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.

  • The Hubert model was fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using Wav2Vec2CTCTokenizer (see the sketch below).

This model was contributed by patrickvonplaten.
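
A minimal usage sketch (not part of the original documentation) tying these two tips together: the raw waveform is passed as a float array (the pretrained checkpoints expect 16 kHz mono audio), and the CTC output is decoded with the Wav2Vec2 processor/tokenizer. The file path below is purely illustrative.

>>> import torch
>>> import soundfile as sf
>>> from transformers import Wav2Vec2Processor, HubertForCTC

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
>>> model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

>>> # hypothetical local file: a 16 kHz mono recording read as a float array
>>> speech, sampling_rate = sf.read("audio.wav")
>>> input_values = processor(speech, sampling_rate=sampling_rate, return_tensors="pt").input_values

>>> with torch.no_grad():
...     logits = model(input_values).logits

>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.decode(predicted_ids[0])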

HubertConfig

class transformers.HubertConfig(vocab_size=32, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act='gelu', hidden_dropout=0.1, activation_dropout=0.1, attention_dropout=0.1, feat_proj_dropout=0.1, final_dropout=0.1, layerdrop=0.1, initializer_range=0.02, layer_norm_eps=1e-05, feat_extract_norm='group', feat_extract_activation='gelu', conv_dim=(512, 512, 512, 512, 512, 512, 512), conv_stride=(5, 2, 2, 2, 2, 2, 2), conv_kernel=(10, 3, 3, 3, 3, 2, 2), conv_bias=False, num_conv_pos_embeddings=128, num_conv_pos_embedding_groups=16, do_stable_layer_norm=False, apply_spec_augment=True, mask_time_prob=0.05, mask_time_length=10, mask_feature_prob=0.0, mask_feature_length=10, ctc_loss_reduction='sum', ctc_zero_infinity=False, gradient_checkpointing=False, pad_token_id=0, bos_token_id=1, eos_token_id=2, **kwargs)[source]

This is the configuration class to store the configuration of a HubertModel. It is used to instantiate a Hubert model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the Hubert facebook/hubert-base-ls960 architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Parameters
  • vocab_size (int, optional, defaults to 32) – Vocabulary size of the Hubert model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling HubertModel.

  • hidden_size (int, optional, defaults to 768) – Dimensionality of the encoder layers and the pooler layer.

  • num_hidden_layers (int, optional, defaults to 12) – Number of hidden layers in the Transformer encoder.

  • num_attention_heads (int, optional, defaults to 12) – Number of attention heads for each attention layer in the Transformer encoder.

  • intermediate_size (int, optional, defaults to 3072) – Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.

  • hidden_act (str or function, optional, defaults to "gelu") – The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.

  • hidden_dropout (float, optional, defaults to 0.1) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • attention_dropout (float, optional, defaults to 0.1) – The dropout ratio for the attention probabilities.

  • initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • layer_norm_eps (float, optional, defaults to 1e-05) – The epsilon used by the layer normalization layers.

  • feat_extract_norm (str, optional, defaults to "group") – The norm to be applied to the 1D convolutional layers of the feature extractor. One of "group" for group normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D convolutional layers.

  • feat_proj_dropout (float, optional, defaults to 0.1) – The dropout probability for the output of the feature extractor.

  • feat_extract_activation (str, optional, defaults to "gelu") – The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.

  • conv_dim (Tuple[int], optional, defaults to (512, 512, 512, 512, 512, 512, 512)) – A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature extractor. The length of conv_dim defines the number of 1D convolutional layers.

  • conv_stride (Tuple[int], optional, defaults to (5, 2, 2, 2, 2, 2, 2)) – A tuple of integers defining the stride of each 1D convolutional layer in the feature extractor. The length of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.

  • conv_kernel (Tuple[int], optional, defaults to (10, 3, 3, 3, 3, 2, 2)) – A tuple of integers defining the kernel size of each 1D convolutional layer in the feature extractor. The length of conv_kernel defines the number of convolutional layers and has to match the length of conv_dim.

  • conv_bias (bool, optional, defaults to False) – Whether the 1D convolutional layers have a bias.

  • num_conv_pos_embeddings (int, optional, defaults to 128) – Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer.

  • num_conv_pos_embedding_groups (int, optional, defaults to 16) – Number of groups of 1D convolutional positional embeddings layer.

  • do_stable_layer_norm (bool, optional, defaults to False) – Whether to apply the stable layer norm architecture of the Transformer encoder. do_stable_layer_norm is True corresponds to applying layer norm before the attention layer, whereas do_stable_layer_norm is False corresponds to applying layer norm after the attention layer.

  • apply_spec_augment (bool, optional, defaults to True) – Whether to apply SpecAugment data augmentation to the outputs of the feature extractor. For reference see SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition.

  • mask_time_prob (float, optional, defaults to 0.05) – Probability for each feature vector along the time axis to be chosen as the start of the vector span to be masked. Approximately mask_time_prob * sequence_length // mask_time_length feature vectors will be masked along the time axis. This is only relevant if apply_spec_augment is True.

  • mask_time_length (int, optional, defaults to 10) – Length of vector span along the time axis.

  • mask_feature_prob (float, optional, defaults to 0.0) – Probability for each feature vector along the feature axis to be chosen as the start of the vector span to be masked. Approximately mask_feature_prob * hidden_size // mask_feature_length feature vectors will be masked along the feature axis. This is only relevant if apply_spec_augment is True.

  • mask_feature_length (int, optional, defaults to 10) – Length of vector span along the feature axis.

  • ctc_loss_reduction (str, optional, defaults to "sum") – Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an instance of HubertForCTC.

  • ctc_zero_infinity (bool, optional, defaults to False) – Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of HubertForCTC.

  • gradient_checkpointing (bool, optional, defaults to False) – If True, use gradient checkpointing to save memory at the expense of slower backward pass.

Example:

>>> from transformers import HubertModel, HubertConfig

>>> # Initializing a Hubert facebook/hubert-base-ls960 style configuration
>>> configuration = HubertConfig()

>>> # Initializing a model from the facebook/hubert-base-ls960 style configuration
>>> model = HubertModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
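
The conv_kernel and conv_stride tuples fully determine how much the feature extractor downsamples the raw waveform; with the defaults, each output frame advances by 5*2*2*2*2*2*2 = 320 samples, i.e. 20 ms at 16 kHz. A small sketch (not part of the original documentation) of computing the output length for a given number of input samples:

>>> from transformers import HubertConfig

>>> config = HubertConfig()

>>> def conv_output_length(input_length, kernel_size, stride):
...     # output length of a 1D convolution without padding, as used by the feature extractor
...     return (input_length - kernel_size) // stride + 1

>>> length = 16000  # one second of 16 kHz audio
>>> for kernel, stride in zip(config.conv_kernel, config.conv_stride):
...     length = conv_output_length(length, kernel, stride)

>>> length  # 49 frames, i.e. roughly one frame every 20 ms
49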

HubertModel

class transformers.HubertModel(config: transformers.models.hubert.configuration_hubert.HubertConfig)[source]

The bare Hubert Model transformer outputting raw hidden-states without any specific head on top. Hubert was proposed in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (HubertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_values, attention_mask=None, mask_time_indices=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]

The HubertModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_values (torch.FloatTensor of shape (batch_size, sequence_length)) – Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See transformers.Wav2Vec2Processor.__call__() for details.

  • attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    What are attention masks?

    Warning

    attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as hubert-base, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not.

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

Returns

A BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (HubertConfig) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Example:

>>> from transformers import Wav2Vec2Processor, HubertModel
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
>>> model = HubertModel.from_pretrained("facebook/hubert-large-ls960-ft")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(ds["speech"][0], return_tensors="pt").input_values  # Batch size 1
>>> hidden_states = model(input_values).last_hidden_state

Return type

BaseModelOutput or tuple(torch.FloatTensor)
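
For batched inference, shorter waveforms have to be padded to the length of the longest one. A short sketch (not part of the original documentation); whether attention_mask is returned and passed along depends on the processor's return_attention_mask setting, as explained in the warning above:

>>> import torch
>>> import soundfile as sf
>>> from datasets import load_dataset
>>> from transformers import Wav2Vec2Processor, HubertModel

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
>>> model = HubertModel.from_pretrained("facebook/hubert-large-ls960-ft")

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> speech = [sf.read(f)[0] for f in ds["file"][:2]]

>>> # pad the shorter waveform; attention_mask is only included if the processor is configured to return it
>>> inputs = processor(speech, return_tensors="pt", padding=True)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> hidden_states = outputs.last_hidden_state  # shape (2, padded_sequence_length, hidden_size)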

HubertForCTC

class transformers.HubertForCTC(config)[source]

Hubert Model with a language modeling head on top for Connectionist Temporal Classification (CTC). Hubert was proposed in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (HubertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_values, attention_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None)[source]

The HubertForCTC forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_values (torch.FloatTensor of shape (batch_size, sequence_length)) – Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See transformers.Wav2Vec2Processor.__call__() for details.

  • attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    What are attention masks?

    Warning

    attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as hubert-base, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not.

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

  • labels (torch.LongTensor of shape (batch_size, target_length), optional) – Labels for connectionist temporal classification. Note that target_length has to be smaller or equal to the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1].

Returns

A CausalLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (HubertConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Connectionist temporal classification (CTC) loss.

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Example:

>>> import torch
>>> from transformers import Wav2Vec2Processor, HubertForCTC
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
>>> model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(ds["speech"][0], return_tensors="pt").input_values  # Batch size 1
>>> logits = model(input_values).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)

>>> transcription = processor.decode(predicted_ids[0])

>>> # compute loss
>>> target_transcription = "A MAN SAID TO THE UNIVERSE SIR I EXIST"

>>> # wrap processor as target processor to encode labels
>>> with processor.as_target_processor():
...     labels = processor(target_transcription, return_tensors="pt").input_ids

>>> loss = model(input_values, labels=labels).loss

Return type

CausalLMOutput or tuple(torch.FloatTensor)
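
Building on the example above, a sketch (not part of the original documentation) of preparing padded labels for batched CTC training: padded label positions are set to -100 so the loss ignores them, and each target has to be no longer than the corresponding sequence of output logits.

>>> import torch
>>> import soundfile as sf
>>> from datasets import load_dataset
>>> from transformers import Wav2Vec2Processor, HubertForCTC

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
>>> model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> speech = [sf.read(f)[0] for f in ds["file"][:2]]

>>> # pad the raw waveforms to the same length
>>> inputs = processor(speech, return_tensors="pt", padding=True)

>>> # encode and pad the target transcriptions
>>> with processor.as_target_processor():
...     label_batch = processor(ds["text"][:2], return_tensors="pt", padding=True)

>>> # replace padding positions with -100 so they are ignored by the CTC loss
>>> labels = label_batch["input_ids"].masked_fill(label_batch["attention_mask"].eq(0), -100)

>>> loss = model(**inputs, labels=labels).loss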