Wav2Vec2

Overview

The Wav2Vec2 model was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

The abstract from the paper is the following:

We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.

Tips:

  • Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.

  • The Wav2Vec2 model was trained using connectionist temporal classification (CTC), so the model output has to be decoded using Wav2Vec2Tokenizer (decoding is illustrated in the sketch below).
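
The two tips above translate into a short end-to-end recipe. The snippet below is an illustrative sketch rather than part of the original documentation: it assumes the facebook/wav2vec2-base-960h checkpoint, a hypothetical 16kHz mono audio_sample.wav file read with soundfile, and the Wav2Vec2Tokenizer and Wav2Vec2ForCTC classes documented further down this page.

>>> import torch
>>> import soundfile as sf
>>> from transformers import Wav2Vec2Tokenizer, Wav2Vec2ForCTC

>>> # load tokenizer and model (assumed checkpoint: facebook/wav2vec2-base-960h)
>>> tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

>>> # read a (hypothetical) 16kHz mono .wav file into a float array
>>> speech, sampling_rate = sf.read("audio_sample.wav")

>>> # raw waveform -> float tensor, as described in the first tip
>>> input_values = tokenizer(speech, return_tensors="pt").input_values

>>> # CTC logits -> greedy token ids -> text, as described in the second tip
>>> logits = model(input_values).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = tokenizer.decode(predicted_ids[0])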

Wav2Vec2Config

class transformers.Wav2Vec2Config(vocab_size=32, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act='gelu', hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, initializer_range=0.02, layer_norm_eps=1e-05, feat_extract_norm='group', feat_extract_dropout=0.0, feat_extract_activation='gelu', conv_dim=(512, 512, 512, 512, 512, 512, 512), conv_stride=(5, 2, 2, 2, 2, 2, 2), conv_kernel=(10, 3, 3, 3, 3, 2, 2), conv_bias=False, num_conv_pos_embeddings=128, num_conv_pos_embedding_groups=16, do_stable_layer_norm=False, pad_token_id=0, bos_token_id=1, eos_token_id=2, **kwargs)[source]

This is the configuration class to store the configuration of a Wav2Vec2Model. It is used to instantiate a Wav2Vec2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Wav2Vec2 facebook/wav2vec2-base-960h architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Parameters
  • vocab_size (int, optional, defaults to 32) – Vocabulary size of the Wav2Vec2 model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling Wav2Vec2Model or TFWav2Vec2Model.

  • hidden_size (int, optional, defaults to 768) – Dimensionality of the encoder layers and the pooler layer.

  • num_hidden_layers (int, optional, defaults to 12) – Number of hidden layers in the Transformer encoder.

  • num_attention_heads (int, optional, defaults to 12) – Number of attention heads for each attention layer in the Transformer encoder.

  • intermediate_size (int, optional, defaults to 3072) – Dimensionality of the β€œintermediate” (i.e., feed-forward) layer in the Transformer encoder.

  • hidden_act (str or function, optional, defaults to "gelu") – The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.

  • hidden_dropout_prob (float, optional, defaults to 0.1) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • attention_probs_dropout_prob (float, optional, defaults to 0.1) – The dropout ratio for the attention probabilities.

  • initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • layer_norm_eps (float, optional, defaults to 1e-5) – The epsilon used by the layer normalization layers.

  • feat_extract_norm (str, optional, defaults to "group") – The norm to be applied to 1D convolutional layers in feature extractor. One of "group" for group normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D convolutional layers.

  • feat_extract_dropout (float, optional, defaults to 0.0) – The dropout probability for all 1D convolutional layers in the feature extractor.

  • feat_extract_activation (str, optional, defaults to "gelu") – The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.

  • conv_dim (Tuple[int], optional, defaults to (512, 512, 512, 512, 512, 512, 512)) – A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature extractor. The length of conv_dim defines the number of 1D convolutional layers.

  • conv_stride (Tuple[int], optional, defaults to (5, 2, 2, 2, 2, 2, 2)) – A tuple of integers defining the stride of each 1D convolutional layer in the feature extractor. The length of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.

  • conv_kernel (Tuple[int], optional, defaults to (10, 3, 3, 3, 3, 2, 2)) – A tuple of integers defining the kernel size of each 1D convolutional layer in the feature extractor. The length of conv_kernel defines the number of convolutional layers and has to match the length of conv_dim.

  • conv_bias (bool, optional, defaults to False) – Whether the 1D convolutional layers have a bias.

  • num_conv_pos_embeddings (int, optional, defaults to 128) – Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer.

  • num_conv_pos_embedding_groups (int, optional, defaults to 16) – Number of groups of 1D convolutional positional embeddings layer.

  • do_stable_layer_norm (bool, optional, defaults to False) – Whether to apply the stable layer norm architecture of the Transformer encoder. do_stable_layer_norm=True corresponds to applying layer norm before the attention layer, whereas do_stable_layer_norm=False corresponds to applying layer norm after the attention layer.

Example:

>>> from transformers import Wav2Vec2Model, Wav2Vec2Config

>>> # Initializing a Wav2Vec2 facebook/wav2vec2-base-960h style configuration
>>> configuration = Wav2Vec2Config()

>>> # Initializing a model from the facebook/wav2vec2-base-960h style configuration
>>> model = Wav2Vec2Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
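
The defaults above can also be overridden. The snippet below is an illustrative sketch, not part of the original documentation: the five-layer feature extractor is made up and mainly shows that conv_dim, conv_stride and conv_kernel must have the same length, since that length defines the number of 1D convolutional layers.

>>> from transformers import Wav2Vec2Model, Wav2Vec2Config

>>> # hypothetical 5-layer feature extractor: the three tuples must be equally long
>>> custom_configuration = Wav2Vec2Config(
>>>     conv_dim=(512, 512, 512, 512, 512),
>>>     conv_stride=(5, 2, 2, 2, 2),
>>>     conv_kernel=(10, 3, 3, 3, 3),
>>>     do_stable_layer_norm=True,
>>> )
>>> model = Wav2Vec2Model(custom_configuration)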

Wav2Vec2Tokenizer

class transformers.Wav2Vec2Tokenizer(vocab_file, bos_token='<s>', eos_token='</s>', unk_token='<unk>', pad_token='<pad>', word_delimiter_token='|', do_lower_case=False, **kwargs)[source]

Constructs a Wav2Vec2 tokenizer.

This tokenizer inherits from PreTrainedTokenizer which contains some of the main methods. Users should refer to the superclass for more information regarding such methods.

Parameters
  • vocab_file (str) – File containing the vocabulary.

  • bos_token (str, optional, defaults to "<s>") – The beginning of sentence token.

  • eos_token (str, optional, defaults to "</s>") – The end of sentence token.

  • unk_token (str, optional, defaults to "<unk>") – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

  • pad_token (str, optional, defaults to "<pad>") – The token used for padding, for example when batching sequences of different lengths.

  • word_delimiter_token (str, optional, defaults to "|") – The token used for defining the end of a word.

  • **kwargs – Additional keyword arguments passed along to PreTrainedTokenizer

__call__(raw_speech: Union[numpy.ndarray, List[float], List[numpy.ndarray], List[List[float]]], padding: Union[bool, str, transformers.tokenization_utils_base.PaddingStrategy] = False, max_length: Optional[int] = None, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, transformers.tokenization_utils_base.TensorType]] = None, verbose: bool = True, **kwargs) → transformers.tokenization_utils_base.BatchEncoding[source]

Main method to prepare one or several sequence(s) of raw speech for the model, padding them and optionally converting them to framework tensors.

Parameters
  • raw_speech (np.ndarray, List[float], List[np.ndarray], List[List[float]]) – The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of lists of float values.

  • padding (bool, str or PaddingStrategy, optional, defaults to False) –

    Activates and controls padding. Accepts the following values:

    • True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).

  • max_length (int, optional) –

    Controls the maximum length to use by one of the truncation/padding parameters.

    If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.

  • pad_to_multiple_of (int, optional) – If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

  • return_tensors (str or TensorType, optional) –

    If set, will return tensors instead of lists of Python floats. Acceptable values are:

    • 'tf': Return TensorFlow tf.constant objects.

    • 'pt': Return PyTorch torch.Tensor objects.

    • 'np': Return Numpy np.ndarray objects.

  • verbose (bool, optional, defaults to True) – Whether or not to print more information and warnings.
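
A minimal usage sketch for these arguments (not part of the original documentation; the checkpoint name and the two dummy waveforms are assumptions):

>>> import numpy as np
>>> from transformers import Wav2Vec2Tokenizer

>>> tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")

>>> # two hypothetical raw waveforms of different lengths
>>> speech_batch = [np.random.randn(16000), np.random.randn(12000)]

>>> # pad to the longest sequence in the batch and return PyTorch tensors
>>> encoded = tokenizer(speech_batch, padding=True, return_tensors="pt")
>>> input_values = encoded.input_values  # shape (2, 16000)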

save_vocabulary(save_directory: str, filename_prefix: Optional[str] = None) → Tuple[str][source]

Save only the vocabulary of the tokenizer (vocabulary + added tokens).

This method won’t save the configuration and special token mappings of the tokenizer. Use save_pretrained() to save the whole state of the tokenizer.

Parameters
  • save_directory (str) – The directory in which to save the vocabulary.

  • filename_prefix (str, optional) – An optional prefix to add to the names of the saved files.

Returns

Paths to the files saved.

Return type

Tuple[str]
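
Continuing with the tokenizer loaded in the sketch above, a minimal (hypothetical) usage example:

>>> import os

>>> # hypothetical output directory; save_vocabulary expects it to exist
>>> os.makedirs("./wav2vec2_vocab", exist_ok=True)
>>> vocab_files = tokenizer.save_vocabulary("./wav2vec2_vocab")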

Wav2Vec2Model

class transformers.Wav2Vec2Model(config)[source]

The bare Wav2Vec2 Model transformer outputting raw hidden-states without any specific head on top. Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (Wav2Vec2Config) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_values, output_attentions=None, output_hidden_states=None, return_dict=None)[source]

The Wav2Vec2Model forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_values (torch.FloatTensor of shape (batch_size, sequence_length)) – Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Tokenizer should be used for padding and conversion into a tensor of type torch.FloatTensor. See transformers.Wav2Vec2Tokenizer.__call__() for details.

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

Returns

A BaseModelOutput (if return_dict=True is passed or when config.return_dict=True) or a tuple of torch.FloatTensor comprising various elements depending on the configuration (Wav2Vec2Config) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Example:

>>> from transformers import Wav2Vec2Tokenizer, Wav2Vec2Model
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

>>> def map_to_array(batch):
>>>     speech, _ = sf.read(batch["file"])
>>>     batch["speech"] = speech
>>>     return batch

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = tokenizer(ds["speech"][0], return_tensors="pt").input_values  # Batch size 1
>>> hidden_states = model(input_values).last_hidden_state

Return type

BaseModelOutput or tuple(torch.FloatTensor)
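
As a complement to the example above, a minimal sketch (reusing the model and input_values defined there) of how the optional outputs described under "Returns" can be requested:

>>> outputs = model(input_values, output_hidden_states=True, output_attentions=True, return_dict=True)
>>> all_hidden_states = outputs.hidden_states  # tuple: embedding output + one entry per encoder layer
>>> all_attentions = outputs.attentions  # tuple: one (batch_size, num_heads, seq_len, seq_len) tensor per layer
>>> last_hidden_state = outputs.last_hidden_state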

Wav2Vec2ForCTC

class transformers.Wav2Vec2ForCTC(config)[source]

Wav2Vec2 Model with a language modeling head on top for Connectionist Temporal Classification (CTC). Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (Wav2Vec2Config) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_values, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None)[source]

The Wav2Vec2ForCTC forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_values (torch.FloatTensor of shape (batch_size, sequence_length)) – Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Tokenizer should be used for padding and conversion into a tensor of type torch.FloatTensor. See transformers.Wav2Vec2Tokenizer.__call__() for details.

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

  • labels (torch.LongTensor of shape (batch_size, target_length), optional) – Target token indices for connectionist temporal classification. Training with labels is not yet supported.

Returns

A MaskedLMOutput (if return_dict=True is passed or when config.return_dict=True) or a tuple of torch.FloatTensor comprising various elements depending on the configuration (Wav2Vec2Config) and inputs.

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Example:

>>> import torch
>>> from transformers import Wav2Vec2Tokenizer, Wav2Vec2ForCTC
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

>>> def map_to_array(batch):
>>>     speech, _ = sf.read(batch["file"])
>>>     batch["speech"] = speech
>>>     return batch

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = tokenizer(ds["speech"][0], return_tensors="pt").input_values  # Batch size 1
>>> logits = model(input_values).logits

>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = tokenizer.decode(predicted_ids[0])

Return type

MaskedLMOutput or tuple(torch.FloatTensor)
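
Building on the example above, a minimal sketch (not part of the original documentation) of batched transcription, combining the padding argument of Wav2Vec2Tokenizer.__call__() with the generic batch_decode() helper inherited from PreTrainedTokenizer:

>>> # pad two utterances to the same length and transcribe them in one forward pass
>>> input_values = tokenizer(ds["speech"][:2], return_tensors="pt", padding=True).input_values
>>> logits = model(input_values).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcriptions = tokenizer.batch_decode(predicted_ids)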