Wav2Vec2-Conformer
Overview
The Wav2Vec2-Conformer was added to an updated version of fairseq S2T: Fast Speech-to-Text Modeling with fairseq by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
The official results of the model can be found in Table 3 and Table 4 of the paper.
The Wav2Vec2-Conformer weights were released by the Meta AI team within the Fairseq library.
Tips:
- Wav2Vec2-Conformer follows the same architecture as Wav2Vec2, but replaces the Attention-block with a Conformer-block as introduced in Conformer: Convolution-augmented Transformer for Speech Recognition.
- For the same number of layers, Wav2Vec2-Conformer requires more parameters than Wav2Vec2, but also yields an improved word error rate.
- Wav2Vec2-Conformer uses the same tokenizer and feature extractor as Wav2Vec2.
- Wav2Vec2-Conformer can use either no position embeddings, Transformer-XL-like (relative) position embeddings, or rotary position embeddings by setting the correct config.position_embeddings_type (see the sketch below).
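For example, a rotary-embedding configuration can be created as follows. This is a minimal sketch: the model is randomly initialized from the configuration, not loaded from a pretrained checkpoint.
>>> from transformers import Wav2Vec2ConformerConfig, Wav2Vec2ConformerModel
>>> # accepted values are "relative" (the default), "rotary", or None for no position embeddings
>>> config = Wav2Vec2ConformerConfig(position_embeddings_type="rotary")
>>> model = Wav2Vec2ConformerModel(config)
>>> model.config.position_embeddings_type
'rotary'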
This model was contributed by patrickvonplaten. The original code can be found here.
Wav2Vec2ConformerConfig
class transformers.Wav2Vec2ConformerConfig
< source >( vocab_size = None hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout = 0.1 activation_dropout = 0.1 attention_dropout = 0.1 feat_proj_dropout = 0.0 feat_quantizer_dropout = 0.0 final_dropout = 0.1 layerdrop = 0.1 initializer_range = 0.02 layer_norm_eps = 1e-05 feat_extract_norm = 'group' feat_extract_activation = 'gelu' conv_dim = (512, 512, 512, 512, 512, 512, 512) conv_stride = (5, 2, 2, 2, 2, 2, 2) conv_kernel = (10, 3, 3, 3, 3, 2, 2) conv_bias = False num_conv_pos_embeddings = 128 num_conv_pos_embedding_groups = 16 apply_spec_augment = True mask_time_prob = 0.05 mask_time_length = 10 mask_time_min_masks = 2 mask_feature_prob = 0.0 mask_feature_length = 10 mask_feature_min_masks = 0 num_codevectors_per_group = 320 num_codevector_groups = 2 contrastive_logits_temperature = 0.1 num_negatives = 100 codevector_dim = 256 proj_codevector_dim = 256 diversity_loss_weight = 0.1 ctc_loss_reduction = 'sum' ctc_zero_infinity = False use_weighted_layer_sum = False classifier_proj_size = 256 tdnn_dim = (512, 512, 512, 512, 1500) tdnn_kernel = (5, 3, 3, 1, 1) tdnn_dilation = (1, 2, 3, 1, 1) xvector_output_dim = 512 pad_token_id = 0 bos_token_id = 1 eos_token_id = 2 add_adapter = False adapter_kernel_size = 3 adapter_stride = 2 num_adapter_layers = 3 output_hidden_size = None position_embeddings_type = 'relative' rotary_embedding_base = 10000 max_source_positions = 5000 conv_depthwise_kernel_size = 31 conformer_conv_dropout = 0.1 **kwargs )
Parameters
-
vocab_size (
int
, optional) — Vocabulary size of the Wav2Vec2Conformer model. Defines the number of different tokens that can be represented by theinputs_ids
passed when calling Wav2Vec2ConformerModel. - hidden_size (
int
, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. - num_hidden_layers (
int
, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. -
num_attention_heads (
int
, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. -
intermediate_size (
int
, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. - hidden_act (
str
orfunction
, optional, defaults to"gelu"
) — The non-linear activation function (function or string) in the encoder and pooler. If string,"gelu"
,"relu"
,"selu"
and"gelu_new"
are supported. - hidden_dropout (
float
, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. -
attention_dropout (
float
, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. -
final_dropout (
float
, optional, defaults to 0.1) — The dropout probability for the final projection layer of Wav2Vec2ConformerForCTC. -
initializer_range (
float
, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. -
layer_norm_eps (
float
, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers. -
feat_extract_norm (
str
, optional, defaults to"group"
) — The norm to be applied to the 1D convolutional layers in the feature encoder. One of"group"
for group normalization of only the first 1D convolutional layer or"layer"
for layer normalization of all 1D convolutional layers. -
feat_proj_dropout (
float
, optional, defaults to 0.0) — The dropout probability for output of the feature encoder. -
feat_extract_activation (
str
, optional, defaults to"gelu"
) — The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string,"gelu"
,"relu"
,"selu"
and"gelu_new"
are supported. -
feat_quantizer_dropout (
float
, optional, defaults to 0.0) — The dropout probability for quantized feature encoder states. -
conv_dim (
Tuple[int]
orList[int]
, optional, defaults to(512, 512, 512, 512, 512, 512, 512)
) — A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of conv_dim defines the number of 1D convolutional layers. -
conv_stride (
Tuple[int]
orList[int]
, optional, defaults to(5, 2, 2, 2, 2, 2, 2)
) — A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of conv_stride defines the number of convolutional layers and has to match the length of conv_dim. -
conv_kernel (
Tuple[int]
orList[int]
, optional, defaults to(10, 3, 3, 3, 3, 2, 2)
) — A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of conv_kernel defines the number of convolutional layers and has to match the length of conv_dim. -
conv_bias (
bool
, optional, defaults toFalse
) — Whether the 1D convolutional layers have a bias. -
num_conv_pos_embeddings (
int
, optional, defaults to 128) — Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer. -
num_conv_pos_embedding_groups (
int
, optional, defaults to 16) — Number of groups of 1D convolutional positional embeddings layer. -
apply_spec_augment (
bool
, optional, defaults toTrue
) — Whether to apply SpecAugment data augmentation to the outputs of the feature encoder. For reference see SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition. -
mask_time_prob (
float
, optional, defaults to 0.05) — Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates mask_time_prob*len(time_axis)/mask_time_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_time_prob should be prob_vector_start*mask_time_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if
apply_spec_augment is True. -
mask_time_length (
int
, optional, defaults to 10) — Length of vector span along the time axis. -
mask_time_min_masks (
int
, optional, defaults to 2) — The minimum number of masks of lengthmask_time_length
generated along the time axis, each time step, irrespectively ofmask_time_prob
. Only relevant if mask_time_prob*len(time_axis)/mask_time_length < mask_time_min_masks. -
mask_feature_prob (
float
, optional, defaults to 0.0) — Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates mask_feature_prob*len(feature_axis)/mask_feature_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_feature_prob should be prob_vector_start*mask_feature_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if
apply_spec_augment is True. -
mask_feature_length (
int
, optional, defaults to 10) — Length of vector span along the feature axis. -
mask_feature_min_masks (
int
, optional, defaults to 0) — The minimum number of masks of lengthmask_feature_length
generated along the feature axis, each time step, irrespectively ofmask_feature_prob
. Only relevant if mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks. -
num_codevectors_per_group (
int
, optional, defaults to 320) — Number of entries in each quantization codebook (group). -
num_codevector_groups (
int
, optional, defaults to 2) — Number of codevector groups for product codevector quantization. -
contrastive_logits_temperature (
float
, optional, defaults to 0.1) — The temperature kappa in the contrastive loss. -
feat_quantizer_dropout (
float
, optional, defaults to 0.0) — The dropout probability for the output of the feature encoder that’s used by the quantizer. -
num_negatives (
int
, optional, defaults to 100) — Number of negative samples for the contrastive loss. -
codevector_dim (
int
, optional, defaults to 256) — Dimensionality of the quantized feature vectors. -
proj_codevector_dim (
int
, optional, defaults to 256) — Dimensionality of the final projection of both the quantized and the transformer features. -
diversity_loss_weight (
float
, optional, defaults to 0.1) — The weight of the codebook diversity loss component. -
ctc_loss_reduction (
str
, optional, defaults to"sum"
) — Specifies the reduction to apply to the output oftorch.nn.CTCLoss
. Only relevant when training an instance of Wav2Vec2ConformerForCTC. -
ctc_zero_infinity (
bool
, optional, defaults toFalse
) — Whether to zero infinite losses and the associated gradients oftorch.nn.CTCLoss
. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of Wav2Vec2ConformerForCTC. -
use_weighted_layer_sum (
bool
, optional, defaults toFalse
) — Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of Wav2Vec2ConformerForSequenceClassification. -
classifier_proj_size (
int
, optional, defaults to 256) — Dimensionality of the projection before token mean-pooling for classification. -
tdnn_dim (
Tuple[int]
orList[int]
, optional, defaults to(512, 512, 512, 512, 1500)
) — A tuple of integers defining the number of output channels of each 1D convolutional layer in the TDNN module of the XVector model. The length of tdnn_dim defines the number of TDNN layers. -
tdnn_kernel (
Tuple[int]
orList[int]
, optional, defaults to(5, 3, 3, 1, 1)
) — A tuple of integers defining the kernel size of each 1D convolutional layer in the TDNN module of the XVector model. The length of tdnn_kernel has to match the length of tdnn_dim. -
tdnn_dilation (
Tuple[int]
orList[int]
, optional, defaults to(1, 2, 3, 1, 1)
) — A tuple of integers defining the dilation factor of each 1D convolutional layer in TDNN module of the XVector model. The length of tdnn_dilation has to match the length of tdnn_dim. -
xvector_output_dim (
int
, optional, defaults to 512) — Dimensionality of the XVector embedding vectors. -
add_adapter (
bool
, optional, defaults toFalse
) — Whether a convolutional network should be stacked on top of the Wav2Vec2Conformer Encoder. Can be very useful for warm-starting Wav2Vec2Conformer for SpeechEncoderDecoder models. -
adapter_kernel_size (
int
, optional, defaults to 3) — Kernel size of the convolutional layers in the adapter network. Only relevant ifadd_adapter is True
. -
adapter_stride (
int
, optional, defaults to 2) — Stride of the convolutional layers in the adapter network. Only relevant ifadd_adapter is True
. -
num_adapter_layers (
int
, optional, defaults to 3) — Number of convolutional layers that should be used in the adapter network. Only relevant ifadd_adapter is True
. - output_hidden_size (
int
, optional) — Dimensionality of the encoder output layer. If not defined, this defaults to hidden-size. Only relevant ifadd_adapter is True
. -
position_embeddings_type (
str
, optional, defaults to"relative"
) — Can be specified torelative
orrotary
for relative or rotary position embeddings respectively. If leftNone
no position embeddings are applied. -
rotary_embedding_base (
int
, optional, defaults to 10000) — If"rotary"
position embeddings are used, defines the size of the embedding base. -
max_source_positions (
int
, optional, defaults to 5000) — if"relative"
position embeddings are used, defines the maximum source input positions. -
conv_depthwise_kernel_size (
int
, defaults to 31) — Kernel size of the depthwise 1D convolutional layer in Conformer blocks. -
conformer_conv_dropout (
float
, defaults to 0.1) — The dropout probability for all convolutional layers in Conformer blocks.
This is the configuration class to store the configuration of a Wav2Vec2ConformerModel. It is used to instantiate a Wav2Vec2Conformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Wav2Vec2Conformer facebook/wav2vec2-conformer-rel-pos-large architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import Wav2Vec2ConformerModel, Wav2Vec2ConformerConfig
>>> # Initializing a Wav2Vec2Conformer facebook/wav2vec2-conformer-rel-pos-large style configuration
>>> configuration = Wav2Vec2ConformerConfig()
>>> # Initializing a model from the facebook/wav2vec2-conformer-rel-pos-large style configuration
>>> model = Wav2Vec2ConformerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
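Beyond the defaults, individual architecture options from the parameter list above can be overridden when the configuration is created. The values below are purely illustrative and not recommended settings:
>>> from transformers import Wav2Vec2ConformerConfig
>>> # illustrative overrides only; see the parameter descriptions above
>>> custom_config = Wav2Vec2ConformerConfig(
...     position_embeddings_type="rotary",  # rotary instead of relative position embeddings
...     apply_spec_augment=False,  # disable SpecAugment masking during training
...     add_adapter=True,  # stack a convolutional adapter on top of the encoder
...     num_adapter_layers=2,
... )
>>> custom_config.add_adapter
True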
Wav2Vec2Conformer specific outputs
class transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput
< source >( loss: typing.Optional[torch.FloatTensor] = None projected_states: FloatTensor = None projected_quantized_states: FloatTensor = None codevector_perplexity: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None contrastive_loss: typing.Optional[torch.FloatTensor] = None diversity_loss: typing.Optional[torch.FloatTensor] = None )
Parameters
-
loss (optional, returned when
sample_negative_indices
are passed,torch.FloatTensor
of shape(1,)
) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper. -
projected_states (
torch.FloatTensor
of shape(batch_size, sequence_length, config.proj_codevector_dim)
) — Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states. -
projected_quantized_states (
torch.FloatTensor
of shape(batch_size, sequence_length, config.proj_codevector_dim)
) — Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for contrastive loss. - hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftorch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
-
contrastive_loss (optional, returned when
sample_negative_indices
are passed,torch.FloatTensor
of shape(1,)
) — The contrastive loss (L_m) as stated in the official paper. -
diversity_loss (optional, returned when
sample_negative_indices
are passed,torch.FloatTensor
of shape(1,)
) — The diversity loss (L_d) as stated in the official paper.
Output type of Wav2Vec2ConformerForPreTraining, with potential hidden states and attentions.
Wav2Vec2ConformerModel
class transformers.Wav2Vec2ConformerModel
< source >( config: Wav2Vec2ConformerConfig )
Parameters
- config (Wav2Vec2ConformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Wav2Vec2Conformer Model transformer outputting raw hidden-states without any specific head on top. Wav2Vec2Conformer was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
mask_time_indices: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)
Parameters
-
input_values (
torch.FloatTensor
of shape(batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details. -
attention_mask (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
attention_mask
should only be passed if the corresponding processor hasconfig.return_attention_mask == True
. For all models whose processor hasconfig.return_attention_mask == False
, such as wav2vec2-conformer-rel-pos-large,attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such modelsinput_values
should simply be padded with 0 and passed withoutattention_mask
. Be aware that these models also yield slightly different results depending on whetherinput_values
is padded or not. -
output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. -
return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Wav2Vec2BaseModelOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (Wav2Vec2ConformerConfig) and inputs.
-
last_hidden_state (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
) — Sequence of hidden-states at the output of the last layer of the model. -
extract_features (
torch.FloatTensor
of shape(batch_size, sequence_length, conv_dim[-1])
) — Sequence of extracted feature vectors of the last convolutional layer of the model. -
hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftorch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The Wav2Vec2ConformerModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import Wav2Vec2Processor, Wav2Vec2ConformerModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> model = Wav2Vec2ConformerModel.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 292, 1024]
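Batched inference follows the same pattern; the processor pads shorter utterances to a common length and, depending on the checkpoint's processor configuration, may also return an attention_mask (see the note on attention_mask above). A minimal sketch reusing the processor, model and dataset from the example above:
>>> # pad two utterances of different lengths into one batch
>>> batch = processor(
...     [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True
... )
>>> with torch.no_grad():
...     batch_outputs = model(**batch)
>>> batch_outputs.last_hidden_state.shape[0]
2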
Wav2Vec2ConformerForCTC
class transformers.Wav2Vec2ConformerForCTC
< source >( config )
Parameters
- config (Wav2Vec2ConformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2Conformer Model with a language modeling
head on top for Connectionist Temporal Classification (CTC).
Wav2Vec2Conformer was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
Parameters
-
input_values (
torch.FloatTensor
of shape(batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details. -
attention_mask (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
attention_mask
should only be passed if the corresponding processor hasconfig.return_attention_mask == True
. For all models whose processor hasconfig.return_attention_mask == False
, such as wav2vec2-conformer-rel-pos-large,attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such modelsinput_values
should simply be padded with 0 and passed withoutattention_mask
. Be aware that these models also yield slightly different results depending on whetherinput_values
is padded or not. -
output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. -
return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. -
labels (
torch.LongTensor
of shape(batch_size, target_length)
, optional) — Labels for connectionist temporal classification. Note thattarget_length
has to be smaller or equal to the sequence length of the output logits. Indices are selected in[-100, 0, ..., config.vocab_size - 1]
. All labels set to-100
are ignored (masked), the loss is only computed for labels in[0, ..., config.vocab_size - 1]
.
Returns
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (Wav2Vec2ConformerConfig) and inputs.
-
loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) — Language modeling loss (for next-token prediction). -
logits (
torch.FloatTensor
of shape(batch_size, sequence_length, config.vocab_size)
) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). -
hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftorch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The Wav2Vec2ConformerForCTC forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC
>>> from datasets import load_dataset
>>> import torch
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> # transcribe speech
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription[0]
'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'
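When labels are provided, the same forward pass also returns the CTC loss that a fine-tuning loop would backpropagate through. A minimal sketch continuing the example above, using the sample's reference transcription as the target:
>>> # encode the reference transcription as labels to obtain the CTC loss
>>> inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids
>>> loss = model(**inputs).loss
>>> # a training loop would call loss.backward() here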
Wav2Vec2ConformerForSequenceClassification
class transformers.Wav2Vec2ConformerForSequenceClassification
< source >( config )
Parameters
- config (Wav2Vec2ConformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2Conformer Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.
Wav2Vec2Conformer was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
-
input_values (
torch.FloatTensor
of shape(batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details. -
attention_mask (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
attention_mask
should only be passed if the corresponding processor hasconfig.return_attention_mask == True
. For all models whose processor hasconfig.return_attention_mask == False
, such as wav2vec2-conformer-rel-pos-large,attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such modelsinput_values
should simply be padded with 0 and passed withoutattention_mask
. Be aware that these models also yield slightly different results depending on whetherinput_values
is padded or not. -
output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. -
return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. -
labels (
torch.LongTensor
of shape(batch_size,)
, optional) — Labels for computing the sequence classification/regression loss. Indices should be in[0, ..., config.num_labels - 1]
. Ifconfig.num_labels == 1
a regression loss is computed (Mean-Square loss), Ifconfig.num_labels > 1
a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (Wav2Vec2ConformerConfig) and inputs.
-
loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) — Classification (or regression if config.num_labels==1) loss. -
logits (
torch.FloatTensor
of shape(batch_size, config.num_labels)
) — Classification (or regression if config.num_labels==1) scores (before SoftMax). -
hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftorch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The Wav2Vec2ConformerForSequenceClassification forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ConformerForSequenceClassification
>>> from datasets import load_dataset
>>> import torch
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("hf-internal-testing/wav2vec2-conformer-seq-class")
>>> model = Wav2Vec2ConformerForSequenceClassification.from_pretrained("hf-internal-testing/wav2vec2-conformer-seq-class")
>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_ids = torch.argmax(logits, dim=-1).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
>>> predicted_label
'LABEL_0'
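A classification loss for fine-tuning can be computed by passing a target label id via labels. The target below is hypothetical and is taken from the model's own label mapping purely to illustrate the call:
>>> # hypothetical target class, chosen only for illustration
>>> target_label = model.config.id2label[0]
>>> inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
>>> loss = model(**inputs).loss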
Wav2Vec2ConformerForAudioFrameClassification
class transformers.Wav2Vec2ConformerForAudioFrameClassification
< source >( config )
Parameters
- config (Wav2Vec2ConformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2Conformer Model with a frame classification head on top for tasks like Speaker Diarization.
Wav2Vec2Conformer was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
-
input_values (
torch.FloatTensor
of shape(batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details. -
attention_mask (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
attention_mask
should only be passed if the corresponding processor hasconfig.return_attention_mask == True
. For all models whose processor hasconfig.return_attention_mask == False
, such as wav2vec2-conformer-rel-pos-large,attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such modelsinput_values
should simply be padded with 0 and passed withoutattention_mask
. Be aware that these models also yield slightly different results depending on whetherinput_values
is padded or not. -
output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. -
return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. -
labels (
torch.LongTensor
of shape(batch_size,)
, optional) — Labels for computing the sequence classification/regression loss. Indices should be in[0, ..., config.num_labels - 1]
. Ifconfig.num_labels == 1
a regression loss is computed (Mean-Square loss), Ifconfig.num_labels > 1
a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (Wav2Vec2ConformerConfig) and inputs.
-
loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) — Classification loss. -
logits (
torch.FloatTensor
of shape(batch_size, sequence_length, config.num_labels)
) — Classification scores (before SoftMax). -
hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftorch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The Wav2Vec2ConformerForAudioFrameClassification forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ConformerForAudioFrameClassification
>>> from datasets import load_dataset
>>> import torch
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("hf-internal-testing/wav2vec2-conformer-frame-class")
>>> model = Wav2Vec2ConformerForAudioFrameClassification.from_pretrained("hf-internal-testing/wav2vec2-conformer-frame-class")
>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate)
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> probabilities = torch.sigmoid(logits[0])
>>> # labels is a one-hot array of shape (num_frames, num_speakers)
>>> labels = (probabilities > 0.5).long()
>>> labels[0].tolist()
[1, 0]
Wav2Vec2ConformerForXVector
class transformers.Wav2Vec2ConformerForXVector
< source >( config )
Parameters
- config (Wav2Vec2ConformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2Conformer Model with an XVector feature extraction head on top for tasks like Speaker Verification.
Wav2Vec2Conformer was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.XVectorOutput or tuple(torch.FloatTensor)
Parameters
-
input_values (
torch.FloatTensor
of shape(batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details. -
attention_mask (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
attention_mask
should only be passed if the corresponding processor hasconfig.return_attention_mask == True
. For all models whose processor hasconfig.return_attention_mask == False
, such as wav2vec2-conformer-rel-pos-large,attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such modelsinput_values
should simply be padded with 0 and passed withoutattention_mask
. Be aware that these models also yield slightly different results depending on whetherinput_values
is padded or not. -
output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. -
return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. -
labels (
torch.LongTensor
of shape(batch_size,)
, optional) — Labels for computing the sequence classification/regression loss. Indices should be in[0, ..., config.num_labels - 1]
. Ifconfig.num_labels == 1
a regression loss is computed (Mean-Square loss), Ifconfig.num_labels > 1
a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.XVectorOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.XVectorOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (Wav2Vec2ConformerConfig) and inputs.
-
loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) — Classification loss. -
logits (
torch.FloatTensor
of shape(batch_size, config.xvector_output_dim)
) — Classification hidden states before AMSoftmax. -
embeddings (
torch.FloatTensor
of shape(batch_size, config.xvector_output_dim)
) — Utterance embeddings used for vector similarity-based retrieval. -
hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftorch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The Wav2Vec2ConformerForXVector forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ConformerForXVector
>>> from datasets import load_dataset
>>> import torch
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("hf-internal-testing/wav2vec2-conformer-xvector")
>>> model = Wav2Vec2ConformerForXVector.from_pretrained("hf-internal-testing/wav2vec2-conformer-xvector")
>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(
... [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True
... )
>>> with torch.no_grad():
... embeddings = model(**inputs).embeddings
>>> embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()
>>> # the resulting embeddings can be used for cosine similarity-based retrieval
>>> cosine_sim = torch.nn.CosineSimilarity(dim=-1)
>>> similarity = cosine_sim(embeddings[0], embeddings[1])
>>> threshold = 0.7 # the optimal threshold is dataset-dependent
>>> if similarity < threshold:
... print("Speakers are not the same!")
>>> round(similarity.item(), 2)
1.0
Wav2Vec2ConformerForPreTraining
class transformers.Wav2Vec2ConformerForPreTraining
< source >( config: Wav2Vec2ConformerConfig )
Parameters
- config (Wav2Vec2ConformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2Conformer Model with a quantizer and VQ
head on top.
Wav2Vec2Conformer was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
mask_time_indices: typing.Optional[torch.BoolTensor] = None
sampled_negative_indices: typing.Optional[torch.BoolTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
-
input_values (
torch.FloatTensor
of shape(batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details. -
attention_mask (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
attention_mask
should only be passed if the corresponding processor hasconfig.return_attention_mask == True
. For all models whose processor hasconfig.return_attention_mask == False
, such as wav2vec2-conformer-rel-pos-large,attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such modelsinput_values
should simply be padded with 0 and passed withoutattention_mask
. Be aware that these models also yield slightly different results depending on whetherinput_values
is padded or not. -
output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. -
return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. -
mask_time_indices (
torch.BoolTensor
of shape(batch_size, sequence_length)
, optional) — Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in config.proj_codevector_dim space. -
sampled_negative_indices (
torch.BoolTensor
of shape(batch_size, sequence_length, num_negatives)
, optional) — Indices indicating which quantized target vectors are used as negative sampled vectors in contrastive loss. Required input for pre-training.
Returns
transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (Wav2Vec2ConformerConfig) and inputs.
-
loss (optional, returned when
sample_negative_indices
are passed,torch.FloatTensor
of shape(1,)
) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper. -
projected_states (
torch.FloatTensor
of shape(batch_size, sequence_length, config.proj_codevector_dim)
) — Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states. -
projected_quantized_states (
torch.FloatTensor
of shape(batch_size, sequence_length, config.proj_codevector_dim)
) — Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for contrastive loss. -
hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftorch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
-
contrastive_loss (optional, returned when
sample_negative_indices
are passed,torch.FloatTensor
of shape(1,)
) — The contrastive loss (L_m) as stated in the official paper. -
diversity_loss (optional, returned when
sample_negative_indices
are passed,torch.FloatTensor
of shape(1,)
) — The diversity loss (L_d) as stated in the official paper.
The Wav2Vec2ConformerForPreTraining forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> import torch
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForPreTraining
>>> from transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer import (
... _compute_mask_indices,
... _sample_negative_indices,
... )
>>> from datasets import load_dataset
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")
>>> model = Wav2Vec2ConformerForPreTraining.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1
>>> # compute masked indices
>>> batch_size, raw_sequence_length = input_values.shape
>>> sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item()
>>> mask_time_indices = _compute_mask_indices(
... shape=(batch_size, sequence_length), mask_prob=0.2, mask_length=2
... )
>>> sampled_negative_indices = _sample_negative_indices(
... features_shape=(batch_size, sequence_length),
... num_negatives=model.config.num_negatives,
... mask_time_indices=mask_time_indices,
... )
>>> mask_time_indices = torch.tensor(data=mask_time_indices, device=input_values.device, dtype=torch.long)
>>> sampled_negative_indices = torch.tensor(
... data=sampled_negative_indices, device=input_values.device, dtype=torch.long
... )
>>> with torch.no_grad():
... outputs = model(input_values, mask_time_indices=mask_time_indices)
>>> # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)
>>> cosine_sim = torch.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states, dim=-1)
>>> # show that cosine similarity is much higher than random
>>> cosine_sim[mask_time_indices.to(torch.bool)].mean() > 0.5
tensor(True)
>>> # for contrastive loss training model should be put into train mode
>>> model = model.train()
>>> loss = model(
... input_values, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices
... ).loss
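In train mode the returned Wav2Vec2ConformerForPreTrainingOutput also exposes the individual loss terms. A short sketch continuing the example above:
>>> outputs = model(
...     input_values, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices
... )
>>> # the total loss combines the contrastive term and the (weighted) diversity term
>>> contrastive_loss, diversity_loss = outputs.contrastive_loss, outputs.diversity_loss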