SEW
Overview
SEW (Squeezed and Efficient Wav2Vec) was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
The abstract from the paper is the following:
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
Tips:

- SEW is a speech model that accepts a float array corresponding to the raw waveform of the speech signal; a minimal input-preparation sketch is shown below the tips.
- SEWForCTC is fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using Wav2Vec2CTCTokenizer.
This model was contributed by anton-l.
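As a hedged illustration of the first tip, the sketch below turns a local audio file into input_values. It assumes the soundfile package is installed, that the file path "path/to/audio.wav" (a placeholder) points to 16 kHz mono speech, and that the asapp/sew-tiny-100k checkpoint used in the examples further down ships a processor.

>>> import soundfile as sf
>>> from transformers import Wav2Vec2Processor, SEWModel

>>> # hypothetical local file; SEW is pretrained on 16 kHz speech
>>> speech, sampling_rate = sf.read("path/to/audio.wav")

>>> processor = Wav2Vec2Processor.from_pretrained("asapp/sew-tiny-100k")
>>> model = SEWModel.from_pretrained("asapp/sew-tiny-100k")

>>> # the processor converts the raw float array into a padded torch tensor of input_values
>>> inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
>>> outputs = model(**inputs)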
SEWConfig
class transformers.SEWConfig(vocab_size=32, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, squeeze_factor=2, hidden_act='gelu', hidden_dropout=0.1, activation_dropout=0.1, attention_dropout=0.1, feat_proj_dropout=0.0, final_dropout=0.1, layerdrop=0.1, initializer_range=0.02, layer_norm_eps=1e-05, feat_extract_norm='group', feat_extract_activation='gelu', conv_dim=(64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512), conv_stride=(5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1), conv_kernel=(10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1), conv_bias=False, num_conv_pos_embeddings=128, num_conv_pos_embedding_groups=16, apply_spec_augment=True, mask_time_prob=0.05, mask_time_length=10, mask_feature_prob=0.0, mask_feature_length=10, ctc_loss_reduction='mean', ctc_zero_infinity=False, use_weighted_layer_sum=False, classifier_proj_size=256, pad_token_id=0, bos_token_id=1, eos_token_id=2, **kwargs)[source]

This is the configuration class to store the configuration of a SEWModel. It is used to instantiate a SEW model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SEW asapp/sew-tiny-100k architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Parameters:
- vocab_size (int, optional, defaults to 32) – Vocabulary size of the SEW model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling SEW.
- hidden_size (int, optional, defaults to 768) – Dimensionality of the encoder layers and the pooler layer.
- num_hidden_layers (int, optional, defaults to 12) – Number of hidden layers in the Transformer encoder.
- num_attention_heads (int, optional, defaults to 12) – Number of attention heads for each attention layer in the Transformer encoder.
- intermediate_size (int, optional, defaults to 3072) – Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- squeeze_factor (int, optional, defaults to 2) – Sequence length downsampling factor after the encoder and upsampling factor after the transformer.
- hidden_act (str or function, optional, defaults to "gelu") – The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
- hidden_dropout (float, optional, defaults to 0.1) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- attention_dropout (float, optional, defaults to 0.1) – The dropout ratio for the attention probabilities.
- final_dropout (float, optional, defaults to 0.1) – The dropout probability for the final projection layer of SEWForCTC.
- initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- layer_norm_eps (float, optional, defaults to 1e-05) – The epsilon used by the layer normalization layers.
- feat_extract_norm (str, optional, defaults to "group") – The norm to be applied to 1D convolutional layers in the feature extractor. One of "group" for group normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D convolutional layers.
- feat_proj_dropout (float, optional, defaults to 0.0) – The dropout probability for the output of the feature extractor.
- feat_extract_activation (str, optional, defaults to "gelu") – The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
- conv_dim (Tuple[int], optional, defaults to (64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512)) – A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature extractor. The length of conv_dim defines the number of 1D convolutional layers.
- conv_stride (Tuple[int], optional, defaults to (5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1)) – A tuple of integers defining the stride of each 1D convolutional layer in the feature extractor. The length of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.
- conv_kernel (Tuple[int], optional, defaults to (10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1)) – A tuple of integers defining the kernel size of each 1D convolutional layer in the feature extractor. The length of conv_kernel defines the number of convolutional layers and has to match the length of conv_dim.
- conv_bias (bool, optional, defaults to False) – Whether the 1D convolutional layers have a bias.
- num_conv_pos_embeddings (int, optional, defaults to 128) – Number of convolutional positional embeddings. Defines the kernel size of the 1D convolutional positional embeddings layer.
- num_conv_pos_embedding_groups (int, optional, defaults to 16) – Number of groups of the 1D convolutional positional embeddings layer.
- apply_spec_augment (bool, optional, defaults to True) – Whether to apply SpecAugment data augmentation to the outputs of the feature extractor. For reference see SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition.
- mask_time_prob (float, optional, defaults to 0.05) – Probability of each feature vector along the time axis to be chosen as the start of the vector span to be masked. Approximately mask_time_prob * sequence_length // mask_time_length feature vectors will be masked along the time axis. This is only relevant if apply_spec_augment is True.
- mask_time_length (int, optional, defaults to 10) – Length of vector span along the time axis.
- mask_feature_prob (float, optional, defaults to 0.0) – Probability of each feature vector along the feature axis to be chosen as the start of the vector span to be masked. Approximately mask_feature_prob * hidden_size // mask_feature_length feature vectors will be masked along the feature axis. This is only relevant if apply_spec_augment is True.
- mask_feature_length (int, optional, defaults to 10) – Length of vector span along the feature axis.
- ctc_loss_reduction (str, optional, defaults to "mean") – Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an instance of SEWForCTC.
- ctc_zero_infinity (bool, optional, defaults to False) – Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of SEWForCTC.
- use_weighted_layer_sum (bool, optional, defaults to False) – Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of SEWForSequenceClassification.
- classifier_proj_size (int, optional, defaults to 256) – Dimensionality of the projection before token mean-pooling for classification.
Example:
>>> from transformers import SEWModel, SEWConfig

>>> # Initializing a SEW asapp/sew-tiny-100k style configuration
>>> configuration = SEWConfig()

>>> # Initializing a model from the asapp/sew-tiny-100k style configuration
>>> model = SEWModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
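As a hedged illustration only (the values below are arbitrary and do not correspond to any released checkpoint), the configuration can also be customized, as long as conv_dim, conv_stride and conv_kernel keep the same length, as required above:

>>> from transformers import SEWConfig, SEWModel

>>> # hypothetical smaller setup: fewer Transformer layers and a stronger squeeze
>>> custom_config = SEWConfig(
...     num_hidden_layers=6,
...     squeeze_factor=4,
...     conv_dim=(64, 128, 256),   # three conv layers, so the
...     conv_stride=(5, 2, 2),     # stride and kernel tuples
...     conv_kernel=(10, 3, 3),    # must also have length 3
... )
>>> model = SEWModel(custom_config)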
SEWModel
class transformers.SEWModel(config: transformers.models.sew.configuration_sew.SEWConfig)[source]

The bare SEW Model transformer outputting raw hidden-states without any specific head on top. SEW was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Parameters:

- config (SEWConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward(input_values, attention_mask=None, mask_time_indices=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]

The SEWModel forward method, overrides the __call__() special method.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters:
- input_values (torch.FloatTensor of shape (batch_size, sequence_length)) – Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See transformers.Wav2Vec2Processor.__call__() for details.
- attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) – Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.
Returns: A BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SEWConfig) and inputs.

- last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type: BaseModelOutput or tuple(torch.FloatTensor)
Example:
>>> from transformers import Wav2Vec2Processor, SEWModel
>>> from datasets import load_dataset

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = Wav2Vec2Processor.from_pretrained('asapp/sew-tiny-100k')
>>> model = SEWModel.from_pretrained('asapp/sew-tiny-100k')

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
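The example above runs a single utterance. The following sketch, under the assumption that the same processor and checkpoint are used, shows how a batch of variable-length waveforms can be padded; return_attention_mask=True is requested explicitly here so the padded positions are ignored by the model.

>>> from transformers import Wav2Vec2Processor, SEWModel
>>> from datasets import load_dataset

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = Wav2Vec2Processor.from_pretrained('asapp/sew-tiny-100k')
>>> model = SEWModel.from_pretrained('asapp/sew-tiny-100k')

>>> # two utterances of different lengths are padded to a common length
>>> speech_batch = [dataset[0]["audio"]["array"], dataset[1]["audio"]["array"]]
>>> inputs = processor(speech_batch, sampling_rate=sampling_rate, padding=True,
...                    return_attention_mask=True, return_tensors="pt")

>>> # attention_mask marks the padded samples so convolution/attention skip them
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)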
SEWForCTC
class transformers.SEWForCTC(config)[source]

SEW Model with a language modeling head on top for Connectionist Temporal Classification (CTC). SEW was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Parameters:

- config (SEWConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward(input_values, attention_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None)[source]

The SEWForCTC forward method, overrides the __call__() special method.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters:
- input_values (torch.FloatTensor of shape (batch_size, sequence_length)) – Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See transformers.Wav2Vec2Processor.__call__() for details.
- attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) – Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.
- labels (torch.LongTensor of shape (batch_size, target_length), optional) – Labels for connectionist temporal classification. Note that target_length has to be smaller or equal to the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1].
Returns: A CausalLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SEWConfig) and inputs.

- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Connectionist temporal classification (CTC) loss.
- logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type: CausalLMOutput or tuple(torch.FloatTensor)
Example:
>>> from transformers import Wav2Vec2Processor, SEWForCTC
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = Wav2Vec2Processor.from_pretrained('asapp/sew-tiny-100k')
>>> model = SEWForCTC.from_pretrained('asapp/sew-tiny-100k')

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)

>>> # transcribe speech
>>> transcription = processor.batch_decode(predicted_ids)

>>> # compute loss
>>> with processor.as_target_processor():
...     inputs["labels"] = processor(dataset[0]["text"], return_tensors="pt").input_ids

>>> loss = model(**inputs).loss
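Building on the example above, the following hedged sketch scores the greedy transcription against the reference transcript with word error rate, the metric reported in the SEW paper. It assumes the third-party jiwer package is installed; jiwer is not part of the transformers API.

>>> import torch
>>> from jiwer import wer  # third-party WER implementation, assumed installed
>>> from transformers import Wav2Vec2Processor, SEWForCTC
>>> from datasets import load_dataset

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = Wav2Vec2Processor.from_pretrained('asapp/sew-tiny-100k')
>>> model = SEWForCTC.from_pretrained('asapp/sew-tiny-100k')

>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> transcription = processor.batch_decode(torch.argmax(logits, dim=-1))[0]

>>> # compare the greedy transcription against the reference text; lower is better
>>> error_rate = wer(dataset[0]["text"].lower(), transcription.lower())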
SEWForSequenceClassification
class transformers.SEWForSequenceClassification(config)[source]

SEW Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.

SEW was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Parameters:

- config (SEWConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward(input_values, attention_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None)[source]

The SEWForSequenceClassification forward method, overrides the __call__() special method.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters:
- input_values (torch.FloatTensor of shape (batch_size, sequence_length)) – Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See transformers.Wav2Vec2Processor.__call__() for details.
- attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) – Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.
- labels (torch.LongTensor of shape (batch_size,), optional) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns: A SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SEWConfig) and inputs.

- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Classification (or regression if config.num_labels==1) loss.
- logits (torch.FloatTensor of shape (batch_size, config.num_labels)) – Classification (or regression if config.num_labels==1) scores (before SoftMax).
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type: SequenceClassifierOutput or tuple(torch.FloatTensor)
Example:
>>> from transformers import Wav2Vec2FeatureExtractor, SEWForSequenceClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('asapp/sew-tiny-100k')
>>> model = SEWForSequenceClassification.from_pretrained('asapp/sew-tiny-100k')

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> logits = model(**inputs).logits
>>> predicted_class_ids = torch.argmax(logits, dim=-1).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]

>>> # compute loss - target_label is e.g. "down"
>>> target_label = model.config.id2label[0]
>>> inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
>>> loss = model(**inputs).loss
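When adapting the classification head to a new task such as keyword spotting, the label mapping is usually set at load time. The sketch below is only illustrative: the three keyword labels are made up, and it assumes the base asapp/sew-tiny-100k checkpoint, whose classification head would still need to be fine-tuned before its predictions are meaningful.

>>> from transformers import SEWForSequenceClassification

>>> # hypothetical keyword-spotting label set
>>> labels = ["yes", "no", "silence"]
>>> label2id = {label: i for i, label in enumerate(labels)}
>>> id2label = {i: label for i, label in enumerate(labels)}

>>> # num_labels, label2id and id2label are stored in the model config
>>> model = SEWForSequenceClassification.from_pretrained(
...     "asapp/sew-tiny-100k",
...     num_labels=len(labels),
...     label2id=label2id,
...     id2label=id2label,
... )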