UniSpeech-SAT
Overview
The UniSpeech-SAT model was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
The abstract from the paper is the following:
Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks.
Tips:
- UniSpeechSat is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use Wav2Vec2Processor for the feature extraction.
- UniSpeechSat can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using Wav2Vec2CTCTokenizer (see the sketch below).
- UniSpeechSat performs especially well on speaker verification, speaker identification, and speaker diarization tasks.
This model was contributed by patrickvonplaten. The authors' code can be found here.
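As a quick orientation for the tips above, the following is a minimal sketch that ties them together: Wav2Vec2Processor handles feature extraction, UniSpeechSatForCTC produces logits, and the processor's built-in Wav2Vec2CTCTokenizer decodes them. It assumes a 16 kHz mono waveform and that the chosen checkpoint ships both a feature extractor and a CTC tokenizer; adapt the checkpoint name to your setup.
>>> import numpy as np
>>> import torch
>>> from transformers import Wav2Vec2Processor, UniSpeechSatForCTC
>>> # checkpoint choice is illustrative; any UniSpeechSat CTC checkpoint with a tokenizer works
>>> processor = Wav2Vec2Processor.from_pretrained("microsoft/unispeech-sat-base-plus")
>>> model = UniSpeechSatForCTC.from_pretrained("microsoft/unispeech-sat-base-plus")
>>> raw_speech = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence at 16 kHz
>>> inputs = processor(raw_speech, sampling_rate=16000, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)  # decoding via Wav2Vec2CTCTokenizer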
UniSpeechSatConfig
( vocab_size = 32 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout = 0.1 activation_dropout = 0.1 attention_dropout = 0.1 feat_proj_dropout = 0.0 feat_quantizer_dropout = 0.0 final_dropout = 0.1 layerdrop = 0.1 initializer_range = 0.02 layer_norm_eps = 1e-05 feat_extract_norm = 'group' feat_extract_activation = 'gelu' conv_dim = (512, 512, 512, 512, 512, 512, 512) conv_stride = (5, 2, 2, 2, 2, 2, 2) conv_kernel = (10, 3, 3, 3, 3, 2, 2) conv_bias = False num_conv_pos_embeddings = 128 num_conv_pos_embedding_groups = 16 do_stable_layer_norm = False apply_spec_augment = True mask_time_prob = 0.05 mask_time_length = 10 mask_time_min_masks = 2 mask_feature_prob = 0.0 mask_feature_length = 10 mask_feature_min_masks = 0 num_codevectors_per_group = 320 num_codevector_groups = 2 contrastive_logits_temperature = 0.1 num_negatives = 100 codevector_dim = 256 proj_codevector_dim = 256 diversity_loss_weight = 0.1 ctc_loss_reduction = 'mean' ctc_zero_infinity = False use_weighted_layer_sum = False classifier_proj_size = 256 pad_token_id = 0 bos_token_id = 1 eos_token_id = 2 num_clusters = 504 **kwargs )
Parameters
- vocab_size (`int`, optional, defaults to 32) — Vocabulary size of the UniSpeechSat model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling UniSpeechSatModel.
- hidden_size (`int`, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
- num_hidden_layers (`int`, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
- num_attention_heads (`int`, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
- intermediate_size (`int`, optional, defaults to 3072) — Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- hidden_act (`str` or `function`, optional, defaults to `"gelu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
- hidden_dropout (`float`, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- attention_dropout (`float`, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
- final_dropout (`float`, optional, defaults to 0.1) — The dropout probability for the final projection layer of UniSpeechSatForCTC.
- initializer_range (`float`, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- layer_norm_eps (`float`, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers.
- feat_extract_norm (`str`, optional, defaults to `"group"`) — The norm to be applied to the 1D convolutional layers of the feature extractor. One of `"group"` for group normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D convolutional layers.
- feat_proj_dropout (`float`, optional, defaults to 0.0) — The dropout probability for the output of the feature extractor.
- feat_extract_activation (`str`, optional, defaults to `"gelu"`) — The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
- feat_quantizer_dropout (`float`, optional, defaults to 0.0) — The dropout probability for the output of the feature extractor that's used by the quantizer.
- conv_dim (`Tuple[int]`, optional, defaults to `(512, 512, 512, 512, 512, 512, 512)`) — A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature extractor. The length of conv_dim defines the number of 1D convolutional layers.
- conv_stride (`Tuple[int]`, optional, defaults to `(5, 2, 2, 2, 2, 2, 2)`) — A tuple of integers defining the stride of each 1D convolutional layer in the feature extractor. The length of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.
- conv_kernel (`Tuple[int]`, optional, defaults to `(10, 3, 3, 3, 3, 2, 2)`) — A tuple of integers defining the kernel size of each 1D convolutional layer in the feature extractor. The length of conv_kernel defines the number of convolutional layers and has to match the length of conv_dim.
- conv_bias (`bool`, optional, defaults to `False`) — Whether the 1D convolutional layers have a bias.
- num_conv_pos_embeddings (`int`, optional, defaults to 128) — Number of convolutional positional embeddings. Defines the kernel size of the 1D convolutional positional embeddings layer.
- num_conv_pos_embedding_groups (`int`, optional, defaults to 16) — Number of groups of the 1D convolutional positional embeddings layer.
- do_stable_layer_norm (`bool`, optional, defaults to `False`) — Whether to apply the stable layer norm architecture of the Transformer encoder. `do_stable_layer_norm is True` corresponds to applying layer norm before the attention layer, whereas `do_stable_layer_norm is False` corresponds to applying layer norm after the attention layer.
- apply_spec_augment (`bool`, optional, defaults to `True`) — Whether to apply SpecAugment data augmentation to the outputs of the feature extractor. For reference see SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition.
- mask_time_prob (`float`, optional, defaults to 0.05) — Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates `mask_time_prob * len(time_axis) / mask_time_length` independent masks over the axis. If reasoning from the probability of each feature vector being chosen as the start of the span to be masked, mask_time_prob should be `prob_vector_start * mask_time_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`.
- mask_time_length (`int`, optional, defaults to 10) — Length of vector span along the time axis.
- mask_time_min_masks (`int`, optional, defaults to 2) — The minimum number of masks of length `mask_time_length` generated along the time axis, each time step, irrespectively of `mask_time_prob`. Only relevant if `mask_time_prob * len(time_axis) / mask_time_length < mask_time_min_masks`.
- mask_feature_prob (`float`, optional, defaults to 0.0) — Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates `mask_feature_prob * len(feature_axis) / mask_feature_length` independent masks over the axis. If reasoning from the probability of each feature vector being chosen as the start of the span to be masked, mask_feature_prob should be `prob_vector_start * mask_feature_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`.
- mask_feature_length (`int`, optional, defaults to 10) — Length of vector span along the feature axis.
- mask_feature_min_masks (`int`, optional, defaults to 0) — The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time step, irrespectively of `mask_feature_prob`. Only relevant if `mask_feature_prob * len(feature_axis) / mask_feature_length < mask_feature_min_masks`.
- num_codevectors_per_group (`int`, optional, defaults to 320) — Number of entries in each quantization codebook (group).
- num_codevector_groups (`int`, optional, defaults to 2) — Number of codevector groups for product codevector quantization.
- contrastive_logits_temperature (`float`, optional, defaults to 0.1) — The temperature kappa in the contrastive loss.
- num_negatives (`int`, optional, defaults to 100) — Number of negative samples for the contrastive loss.
- codevector_dim (`int`, optional, defaults to 256) — Dimensionality of the quantized feature vectors.
- proj_codevector_dim (`int`, optional, defaults to 256) — Dimensionality of the final projection of both the quantized and the transformer features.
- diversity_loss_weight (`float`, optional, defaults to 0.1) — The weight of the codebook diversity loss component.
- ctc_loss_reduction (`str`, optional, defaults to `"mean"`) — Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an instance of UniSpeechSatForCTC.
- ctc_zero_infinity (`bool`, optional, defaults to `False`) — Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of UniSpeechSatForCTC.
- use_weighted_layer_sum (`bool`, optional, defaults to `False`) — Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of UniSpeechSatForSequenceClassification.
- classifier_proj_size (`int`, optional, defaults to 256) — Dimensionality of the projection before token mean-pooling for classification.
This is the configuration class to store the configuration of a UniSpeechSatModel. It is used to instantiate a UniSpeechSat model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the UniSpeechSat microsoft/unispeech-sat-base-plus architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import UniSpeechSatModel, UniSpeechSatConfig
>>> # Initializing a UniSpeechSat microsoft/unispeech-sat-base-plus style configuration
>>> configuration = UniSpeechSatConfig()
>>> # Initializing a model from the microsoft/unispeech-sat-base-plus style configuration
>>> model = UniSpeechSatModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
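The SpecAugment-related parameters documented above can be overridden like any other configuration field. Below is a minimal sketch with purely illustrative values; the expected number of time masks per example is roughly mask_time_prob * len(time_axis) / mask_time_length.
>>> from transformers import UniSpeechSatConfig, UniSpeechSatModel
>>> # illustrative, non-default masking settings
>>> custom_config = UniSpeechSatConfig(
...     apply_spec_augment=True,
...     mask_time_prob=0.1,
...     mask_time_length=10,
...     mask_feature_prob=0.05,
...     mask_feature_length=10,
... )
>>> model = UniSpeechSatModel(custom_config)  # randomly initialized weights with the custom settings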
UniSpeechSat specific outputs
( last_hidden_state: FloatTensor = None extract_features: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
- last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- extract_features (`torch.FloatTensor` of shape `(batch_size, sequence_length, conv_dim[-1])`) — Sequence of extracted feature vectors of the last convolutional layer of the model.
- hidden_states (`tuple(torch.FloatTensor)`, optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(torch.FloatTensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of `UniSpeechSatBaseModelOutput`, with potential hidden states and attentions.
( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None projected_states: FloatTensor = None projected_quantized_states: FloatTensor = None codevector_perplexity: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
- loss (optional, returned when the model is in train mode, `torch.FloatTensor` of shape `(1,)`) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper.
- projected_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`) — Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states.
- projected_quantized_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`) — Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for contrastive loss.
- hidden_states (`tuple(torch.FloatTensor)`, optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(torch.FloatTensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of `UniSpeechSatForPreTrainingOutput`, with potential hidden states and attentions.
UniSpeechSatModel
( config: UniSpeechSatConfig )
Parameters
- config (UniSpeechSatConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare UniSpeechSat Model transformer outputting raw hidden-states without any specific head on top. UniSpeechSat was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_values attention_mask = None mask_time_indices = None output_attentions = None output_hidden_states = None return_dict = None ) → UniSpeechSatBaseModelOutput or tuple(torch.FloatTensor)
Parameters
- input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of the input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__ for details.
- attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
Returns
UniSpeechSatBaseModelOutput or tuple(torch.FloatTensor)
A UniSpeechSatBaseModelOutput or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (UniSpeechSatConfig) and inputs.
- last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- extract_features (`torch.FloatTensor` of shape `(batch_size, sequence_length, conv_dim[-1])`) — Sequence of extracted feature vectors of the last convolutional layer of the model.
- hidden_states (`tuple(torch.FloatTensor)`, optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(torch.FloatTensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The UniSpeechSatModel forward method overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import Wav2Vec2Processor, UniSpeechSatModel
>>> from datasets import load_dataset
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> processor = Wav2Vec2Processor.from_pretrained('microsoft/unispeech-sat-base-plus')
>>> model = UniSpeechSatModel.from_pretrained('microsoft/unispeech-sat-base-plus')
>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
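Continuing the example above, the optional fields of the base model output (hidden_states, attentions, extract_features) can be requested per call. A minimal sketch, matching the output documentation earlier on this page:
>>> outputs = model(**inputs, output_hidden_states=True, output_attentions=True)
>>> hidden_states = outputs.hidden_states  # tuple: embedding output + one tensor per Transformer layer
>>> attentions = outputs.attentions  # tuple: one tensor per layer of shape (batch, heads, seq, seq)
>>> extract_features = outputs.extract_features  # shape (batch_size, sequence_length, conv_dim[-1])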
UniSpeechSatForCTC
( config )
Parameters
- config (UniSpeechSatConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
UniSpeechSat Model with a language modeling head on top for Connectionist Temporal Classification (CTC). UniSpeechSat was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_values attention_mask = None output_attentions = None output_hidden_states = None return_dict = None labels = None ) → CausalLMOutput or tuple(torch.FloatTensor)
Parameters
- input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of the input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__ for details.
- attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
Returns
CausalLMOutput or tuple(torch.FloatTensor)
A CausalLMOutput or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (UniSpeechSatConfig) and inputs.
- loss (`torch.FloatTensor` of shape `(1,)`, optional, returned when `labels` is provided) — Language modeling loss (for next-token prediction).
- logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- hidden_states (`tuple(torch.FloatTensor)`, optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(torch.FloatTensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The UniSpeechSatForCTC forward method overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import Wav2Vec2Processor, UniSpeechSatForCTC
>>> from datasets import load_dataset
>>> import torch
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> processor = Wav2Vec2Processor.from_pretrained('microsoft/unispeech-sat-base-plus')
>>> model = UniSpeechSatForCTC.from_pretrained('microsoft/unispeech-sat-base-plus')
>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> # transcribe speech
>>> transcription = processor.batch_decode(predicted_ids)
>>> # compute loss
>>> with processor.as_target_processor():
... inputs["labels"] = processor(dataset[0]["text"], return_tensors="pt").input_ids
>>> loss = model(**inputs).loss
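On recent transformers releases the as_target_processor context manager is deprecated; if your installed version supports it, the labels can usually be prepared by passing the transcription through the processor's text argument instead. A hedged alternative sketch:
>>> # alternative label preparation (assumes the installed version routes `text=` to the CTC tokenizer)
>>> inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids
>>> loss = model(**inputs).loss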
UniSpeechSatForSequenceClassification
( config )
Parameters
- config (UniSpeechSatConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
UniSpeechSat Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.
UniSpeechSat was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_values attention_mask = None output_attentions = None output_hidden_states = None return_dict = None labels = None ) → SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
- input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of the input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__ for details.
- attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
Returns
SequenceClassifierOutput or tuple(torch.FloatTensor)
A SequenceClassifierOutput or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (UniSpeechSatConfig) and inputs.
- loss (`torch.FloatTensor` of shape `(1,)`, optional, returned when `labels` is provided) — Classification (or regression if config.num_labels==1) loss.
- logits (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
- hidden_states (`tuple(torch.FloatTensor)`, optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(torch.FloatTensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The UniSpeechSatForSequenceClassification forward method overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatForSequenceClassification
>>> from datasets import load_dataset
>>> import torch
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/unispeech-sat-base-plus')
>>> model = UniSpeechSatForSequenceClassification.from_pretrained('microsoft/unispeech-sat-base-plus')
>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt")
>>> logits = model(**inputs).logits
>>> predicted_class_ids = torch.argmax(logits, dim=-1).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
>>> # compute loss - target_label is e.g. "down"
>>> target_label = model.config.id2label[0]
>>> inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
>>> loss = model(**inputs).loss
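For fine-tuning on a custom classification task, the size of the classification head and the label mappings are usually configured when loading the pretrained encoder. A minimal sketch with hypothetical labels (the classification head is newly initialized and needs to be trained):
>>> # the label names below are made up for illustration
>>> labels = ["speaker_a", "speaker_b", "speaker_c"]
>>> model = UniSpeechSatForSequenceClassification.from_pretrained(
...     "microsoft/unispeech-sat-base-plus",
...     num_labels=len(labels),
...     label2id={label: i for i, label in enumerate(labels)},
...     id2label={i: label for i, label in enumerate(labels)},
... )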
UniSpeechSatForPreTraining
( config: UniSpeechSatConfig )
Parameters
- config (UniSpeechSatConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
UniSpeechSat Model with a quantizer and VQ head on top. UniSpeechSat was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_values attention_mask = None output_attentions = None output_hidden_states = None return_dict = None ) → UniSpeechSatForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
- input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of the input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__ for details.
- attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
Returns
UniSpeechSatForPreTrainingOutput or tuple(torch.FloatTensor)
A UniSpeechSatForPreTrainingOutput or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (UniSpeechSatConfig) and inputs.
- loss (optional, returned when the model is in train mode, `torch.FloatTensor` of shape `(1,)`) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper.
- projected_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`) — Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states.
- projected_quantized_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`) — Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for contrastive loss.
- hidden_states (`tuple(torch.FloatTensor)`, optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(torch.FloatTensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The UniSpeechSatForPreTraining forward method overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> import torch
>>> from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatForPreTraining
>>> from transformers.models.unispeech_sat.modeling_unispeech_sat import _compute_mask_indices
>>> from datasets import load_dataset
>>> import soundfile as sf
>>> feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("patrickvonplaten/unispeech_sat-base")
>>> model = UniSpeechSatForPreTraining.from_pretrained("patrickvonplaten/unispeech_sat-base")
>>> def map_to_array(batch):
... speech, _ = sf.read(batch["file"])
... batch["speech"] = speech
... return batch
>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)
>>> input_values = feature_extractor(ds["speech"][0], return_tensors="pt").input_values # Batch size 1
>>> # compute masked indices
>>> batch_size, raw_sequence_length = input_values.shape
>>> sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length)
>>> mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.2, mask_length=2)
>>> with torch.no_grad():
... outputs = model(input_values, mask_time_indices=mask_time_indices)
>>> # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)
>>> cosine_sim = torch.cosine_similarity(
... outputs.projected_states, outputs.projected_quantized_states, dim=-1
... )
>>> # show that cosine similarity is much higher than random
>>> assert cosine_sim[mask_time_indices].mean() > 0.5
>>> # for contrastive loss training model should be put into train mode
>>> model.train()
>>> loss = model(input_values, mask_time_indices=mask_time_indices).loss