Wav2Vec2
Overview
The Wav2Vec2 model was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
The abstract from the paper is the following:
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
Tips:
Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
The Wav2Vec2 model was trained using connectionist temporal classification (CTC), so the model output has to be decoded using Wav2Vec2CTCTokenizer.
This model was contributed by patrickvonplaten.
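As a minimal, hedged sketch of this flow (assuming speech already holds a 16 kHz waveform as a float array, e.g. loaded with the soundfile library as in the examples further down this page):
>>> import torch
>>> from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

>>> # `speech` is assumed to be a float array containing the raw 16 kHz waveform
>>> input_values = processor(speech, sampling_rate=16000, return_tensors="pt").input_values
>>> logits = model(input_values).logits

>>> # CTC output is decoded greedily with the tokenizer wrapped in the processor
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)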
Wav2Vec2Config
-
class
transformers.
Wav2Vec2Config
(vocab_size=32, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act='gelu', hidden_dropout=0.1, activation_dropout=0.1, attention_dropout=0.1, feat_proj_dropout=0.0, feat_quantizer_dropout=0.0, final_dropout=0.1, layerdrop=0.1, initializer_range=0.02, layer_norm_eps=1e-05, feat_extract_norm='group', feat_extract_activation='gelu', conv_dim=(512, 512, 512, 512, 512, 512, 512), conv_stride=(5, 2, 2, 2, 2, 2, 2), conv_kernel=(10, 3, 3, 3, 3, 2, 2), conv_bias=False, num_conv_pos_embeddings=128, num_conv_pos_embedding_groups=16, do_stable_layer_norm=False, apply_spec_augment=True, mask_time_prob=0.05, mask_time_length=10, mask_feature_prob=0.0, mask_feature_length=10, num_codevectors_per_group=320, num_codevector_groups=2, contrastive_logits_temperature=0.1, num_negatives=100, codevector_dim=256, proj_codevector_dim=256, diversity_loss_weight=0.1, ctc_loss_reduction='sum', ctc_zero_infinity=False, use_weighted_layer_sum=False, classifier_proj_size=256, pad_token_id=0, bos_token_id=1, eos_token_id=2, **kwargs)[source] This is the configuration class to store the configuration of a
Wav2Vec2Model
. It is used to instantiate a Wav2Vec2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Wav2Vec2 facebook/wav2vec2-base-960h architecture. Configuration objects inherit from
PretrainedConfig
and can be used to control the model outputs. Read the documentation from PretrainedConfig
for more information.- Parameters
vocab_size (
int
, optional, defaults to 32) – Vocabulary size of the Wav2Vec2 model. Defines the number of different tokens that can be represented by the input_ids
passed when calling Wav2Vec2Model
or TFWav2Vec2Model
.hidden_size (
int
, optional, defaults to 768) β Dimensionality of the encoder layers and the pooler layer.num_hidden_layers (
int
, optional, defaults to 12) β Number of hidden layers in the Transformer encoder.num_attention_heads (
int
, optional, defaults to 12) β Number of attention heads for each attention layer in the Transformer encoder.intermediate_size (
int
, optional, defaults to 3072) – Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.hidden_act (
str
orfunction
, optional, defaults to"gelu"
) β The non-linear activation function (function or string) in the encoder and pooler. If string,"gelu"
,"relu"
,"selu"
and"gelu_new"
are supported.hidden_dropout (
float
, optional, defaults to 0.1) β The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.attention_dropout (
float
, optional, defaults to 0.1) β The dropout ratio for the attention probabilities.final_dropout (
float
, optional, defaults to 0.1) β The dropout probability for the final projection layer ofWav2Vec2ForCTC
.initializer_range (
float
, optional, defaults to 0.02) β The standard deviation of the truncated_normal_initializer for initializing all weight matrices.layer_norm_eps (
float
, optional, defaults to 1e-5) – The epsilon used by the layer normalization layers.feat_extract_norm (
str
, optional, defaults to"group"
) β The norm to be applied to 1D convolutional layers in feature extractor. One of"group"
for group normalization of only the first 1D convolutional layer or"layer"
for layer normalization of all 1D convolutional layers.feat_proj_dropout (
float
, optional, defaults to 0.0) β The dropout probability for output of the feature extractor.feat_extract_activation (
str, optional
, defaults to"gelu"
) β The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string,"gelu"
,"relu"
,"selu"
and"gelu_new"
are supported.feat_quantizer_dropout (float, optional, defaults to 0.0) – The dropout probability for quantized feature extractor states.
conv_dim (
Tuple[int]
, optional, defaults to(512, 512, 512, 512, 512, 512, 512)
) β A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature extractor. The length of conv_dim defines the number of 1D convolutional layers.conv_stride (
Tuple[int]
, optional, defaults to(5, 2, 2, 2, 2, 2, 2)
) – A tuple of integers defining the stride of each 1D convolutional layer in the feature extractor. The length of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.conv_kernel (
Tuple[int]
, optional, defaults to (10, 3, 3, 3, 3, 2, 2)
) – A tuple of integers defining the kernel size of each 1D convolutional layer in the feature extractor. The length of conv_kernel defines the number of convolutional layers and has to match the length of conv_dim.conv_bias (
bool
, optional, defaults toFalse
) β Whether the 1D convolutional layers have a bias.num_conv_pos_embeddings (
int
, optional, defaults to 128) β Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer.num_conv_pos_embedding_groups (
int
, optional, defaults to 16) β Number of groups of 1D convolutional positional embeddings layer.do_stable_layer_norm (
bool
, optional, defaults toFalse
) β Whether to apply stable layer norm architecture of the Transformer encoder.do_stable_layer_norm is True
corresponds to applying layer norm before the attention layer, whereasdo_stable_layer_norm is False
corresponds to applying layer norm after the attention layer.apply_spec_augment (
bool
, optional, defaults toTrue
) β Whether to apply SpecAugment data augmentation to the outputs of the feature extractor. For reference see SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition.mask_time_prob (
float
, optional, defaults to 0.05) – Probability of each feature vector along the time axis to be chosen as the start of the vector span to be masked. Approximately mask_time_prob * sequence_length // mask_time_length
feature vectors will be masked along the time axis. This is only relevant ifapply_spec_augment is True
.mask_time_length (
int
, optional, defaults to 10) β Length of vector span along the time axis.mask_feature_prob (
float
, optional, defaults to 0.0) – Probability of each feature vector along the feature axis to be chosen as the start of the vector span to be masked. Approximately mask_feature_prob * hidden_size // mask_feature_length
feature vectors will be masked along the feature axis. This is only relevant if apply_spec_augment is True
.mask_feature_length (
int
, optional, defaults to 10) β Length of vector span along the feature axis.num_codevectors_per_group (
int
, optional, defaults to 320) β Number of entries in each quantization codebook (group).num_codevector_groups (
int
, optional, defaults to 2) β Number of codevector groups for product codevector quantization.contrastive_logits_temperature (
float
, optional, defaults to 0.1) β The temperature kappa in the contrastive loss.feat_quantizer_dropout (
float
, optional, defaults to 0.0) – The dropout probability for the output of the feature extractor that's used by the quantizer.num_negatives (
int
, optional, defaults to 100) β Number of negative samples for the contrastive loss.codevector_dim (
int
, optional, defaults to 256) β Dimensionality of the quantized feature vectors.proj_codevector_dim (
int
, optional, defaults to 256) β Dimensionality of the final projection of both the quantized and the transformer features.diversity_loss_weight (
int
, optional, defaults to 0.1) β The weight of the codebook diversity loss component.ctc_loss_reduction (
str
, optional, defaults to"sum"
) β Specifies the reduction to apply to the output oftorch.nn.CTCLoss
. Only relevant when training an instance ofWav2Vec2ForCTC
.ctc_zero_infinity (
bool
, optional, defaults toFalse
) β Whether to zero infinite losses and the associated gradients oftorch.nn.CTCLoss
. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance ofWav2Vec2ForCTC
.use_weighted_layer_sum (
bool
, optional, defaults toFalse
) β Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance ofWav2Vec2ForSequenceClassification
.classifier_proj_size (
int
, optional, defaults to 256) β Dimensionality of the projection before token mean-pooling for classification.
Example:
>>> from transformers import Wav2Vec2Model, Wav2Vec2Config

>>> # Initializing a Wav2Vec2 facebook/wav2vec2-base-960h style configuration
>>> configuration = Wav2Vec2Config()

>>> # Initializing a model from the facebook/wav2vec2-base-960h style configuration
>>> model = Wav2Vec2Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
Wav2Vec2CTCTokenizer
-
class
transformers.
Wav2Vec2CTCTokenizer
(vocab_file, bos_token='<s>', eos_token='</s>', unk_token='<unk>', pad_token='<pad>', word_delimiter_token='|', do_lower_case=False, **kwargs)[source] Constructs a Wav2Vec2CTC tokenizer.
This tokenizer inherits from
PreTrainedTokenizer
which contains some of the main methods. Users should refer to the superclass for more information regarding such methods.- Parameters
vocab_file (
str
) β File containing the vocabulary.bos_token (
str
, optional, defaults to"<s>"
) β The beginning of sentence token.eos_token (
str
, optional, defaults to"</s>"
) β The end of sentence token.unk_token (
str
, optional, defaults to"<unk>"
) β The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.pad_token (
str
, optional, defaults to"<pad>"
) β The token used for padding, for example when batching sequences of different lengths.word_delimiter_token (
str
, optional, defaults to"|"
) β The token used for defining the end of a word.do_lower_case (
bool
, optional, defaults toFalse
) β Whether or not to accept lowercase input and lowercase the output when decoding.**kwargs β Additional keyword arguments passed along to
PreTrainedTokenizer
-
__call__
(text: Union[str, List[str], List[List[str]]], text_pair: Optional[Union[str, List[str], List[List[str]]]] = None, add_special_tokens: bool = True, padding: Union[bool, str, transformers.file_utils.PaddingStrategy] = False, truncation: Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = False, max_length: Optional[int] = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, transformers.file_utils.TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs) → transformers.tokenization_utils_base.BatchEncodingΒΆ Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.
- Parameters
text (
str
,List[str]
,List[List[str]]
) β The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must setis_split_into_words=True
(to lift the ambiguity with a batch of sequences).text_pair (
str
,List[str]
,List[List[str]]
) β The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must setis_split_into_words=True
(to lift the ambiguity with a batch of sequences).add_special_tokens (
bool
, optional, defaults toTrue
) β Whether or not to encode the sequences with the special tokens relative to their model.padding (
bool
,str
orPaddingStrategy
, optional, defaults toFalse
) βActivates and controls padding. Accepts the following values:
True
or'longest'
: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).'max_length'
: Pad to a maximum length specified with the argumentmax_length
or to the maximum acceptable input length for the model if that argument is not provided.False
or'do_not_pad'
(default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (
bool
,str
orTruncationStrategy
, optional, defaults toFalse
) βActivates and controls truncation. Accepts the following values:
True
or'longest_first'
: Truncate to a maximum length specified with the argumentmax_length
or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.'only_first'
: Truncate to a maximum length specified with the argumentmax_length
or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.'only_second'
: Truncate to a maximum length specified with the argumentmax_length
or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.False
or'do_not_truncate'
(default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (
int
, optional) βControls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to
None
, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.stride (
int
, optional, defaults to 0) β If set to a number along withmax_length
, the overflowing tokens returned whenreturn_overflowing_tokens=True
will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.is_split_into_words (
bool
, optional, defaults toFalse
) β Whether or not the input is already pre-tokenized (e.g., split into words). If set toTrue
, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.pad_to_multiple_of (
int
, optional) β If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).return_tensors (
str
orTensorType
, optional) βIf set, will return tensors instead of list of python integers. Acceptable values are:
'tf'
: Return TensorFlowtf.constant
objects.'pt'
: Return PyTorchtorch.Tensor
objects.'np'
: Return Numpynp.ndarray
objects.
return_token_type_ids (
bool
, optional) βWhether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizerβs default, defined by the
return_outputs
attribute.return_attention_mask (
bool
, optional) βWhether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizerβs default, defined by the
return_outputs
attribute.return_overflowing_tokens (
bool
, optional, defaults toFalse
) β Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided withtruncation_strategy = longest_first
orTrue
, an error is raised instead of returning overflowing tokens.return_special_tokens_mask (
bool
, optional, defaults toFalse
) β Whether or not to return special tokens mask information.return_offsets_mapping (
bool
, optional, defaults toFalse
) βWhether or not to return
(char_start, char_end)
for each token.This is only available on fast tokenizers inheriting from
PreTrainedTokenizerFast
, if using Pythonβs tokenizer, this method will raiseNotImplementedError
.return_length (
bool
, optional, defaults toFalse
) β Whether or not to return the lengths of the encoded inputs.verbose (
bool
, optional, defaults toTrue
) β Whether or not to print more information and warnings.**kwargs β passed to the
self.tokenize()
method
- Returns
A
BatchEncoding
with the following fields:input_ids β List of token ids to be fed to a model.
token_type_ids β List of token type ids to be fed to a model (when
return_token_type_ids=True
or if βtoken_type_idsβ is inself.model_input_names
).attention_mask β List of indices specifying which tokens should be attended to by the model (when
return_attention_mask=True
or if βattention_maskβ is inself.model_input_names
).overflowing_tokens β List of overflowing tokens sequences (when a
max_length
is specified andreturn_overflowing_tokens=True
).num_truncated_tokens β Number of tokens truncated (when a
max_length
is specified andreturn_overflowing_tokens=True
).special_tokens_mask β List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when
add_special_tokens=True
andreturn_special_tokens_mask=True
).length β The length of the inputs (when
return_length=True
)
- Return type
-
save_vocabulary
(save_directory: str, filename_prefix: Optional[str] = None) → Tuple[str][source]ΒΆ Save only the vocabulary of the tokenizer (vocabulary + added tokens).
This method won't save the configuration and special token mappings of the tokenizer. Use
_save_pretrained()
to save the whole state of the tokenizer.- Parameters
save_directory (
str
) β The directory in which to save the vocabulary.filename_prefix (
str
, optional) – An optional prefix to add to the name of the saved files.
- Returns
Paths to the files saved.
- Return type
Tuple(str)
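As a short, hedged sketch of how this tokenizer is typically used around a CTC model (the checkpoint name and the predicted_ids tensor are assumptions for illustration; predicted_ids would come from the argmax of a Wav2Vec2ForCTC model's logits):
>>> from transformers import Wav2Vec2CTCTokenizer

>>> tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")

>>> # encode a target transcription into label ids, e.g. for computing a CTC loss
>>> labels = tokenizer("A MAN SAID TO THE UNIVERSE SIR I EXIST", return_tensors="pt").input_ids

>>> # decode greedy predictions back to text (repeated tokens and padding are collapsed)
>>> transcription = tokenizer.batch_decode(predicted_ids)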
Wav2Vec2FeatureExtractor
-
class
transformers.
Wav2Vec2FeatureExtractor
(feature_size=1, sampling_rate=16000, padding_value=0.0, return_attention_mask=False, do_normalize=True, **kwargs)[source]ΒΆ Constructs a Wav2Vec2 feature extractor.
This feature extractor inherits from
SequenceFeatureExtractor
which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.- Parameters
feature_size (
int
, defaults to 1) β The feature dimension of the extracted features.sampling_rate (
int
, defaults to 16000) – The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).padding_value (
float
, defaults to 0.0) β The value that is used to fill the padding values.do_normalize (
bool
, optional, defaults to True
) β Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly improve the performance for some models, e.g., wav2vec2-lv60.return_attention_mask (
bool
, optional, defaults toFalse
) βWhether or not
__call__()
should returnattention_mask
.Note
Wav2Vec2 models that have set
config.feat_extract_norm == "group"
, such as wav2vec2-base, have not been trained usingattention_mask
. For such models,input_values
should simply be padded with 0 and noattention_mask
should be passed.For Wav2Vec2 models that have set
config.feat_extract_norm == "layer"
, such as wav2vec2-lv60,attention_mask
should be passed for batched inference.
-
__call__
(raw_speech: Union[numpy.ndarray, List[float], List[numpy.ndarray], List[List[float]]], padding: Union[bool, str, transformers.file_utils.PaddingStrategy] = False, max_length: Optional[int] = None, truncation: bool = False, pad_to_multiple_of: Optional[int] = None, return_attention_mask: Optional[bool] = None, return_tensors: Optional[Union[str, transformers.file_utils.TensorType]] = None, sampling_rate: Optional[int] = None, **kwargs) → transformers.feature_extraction_utils.BatchFeature[source] Main method to featurize and prepare for the model one or several sequence(s).
- Parameters
raw_speech (
np.ndarray
,List[float]
,List[np.ndarray]
,List[List[float]]
) β The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of list of float values.padding (
bool
,str
orPaddingStrategy
, optional, defaults toFalse
) βSelect a strategy to pad the returned sequences (according to the modelβs padding side and padding index) among:
True
or'longest'
: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).'max_length'
: Pad to a maximum length specified with the argumentmax_length
or to the maximum acceptable input length for the model if that argument is not provided.False
or'do_not_pad'
(default): No padding (i.e., can output a batch with sequences of different lengths).
max_length (
int
, optional) β Maximum length of the returned list and optionally padding length (see above).truncation (
bool
) β Activates truncation to cut input sequences longer than max_length to max_length.pad_to_multiple_of (
int
, optional) βIf set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
return_attention_mask (
bool
, optional) βWhether to return the attention mask. If left to the default, will return the attention mask according to the specific feature_extractorβs default.
Note
Wav2Vec2 models that have set
config.feat_extract_norm == "group"
, such as wav2vec2-base, have not been trained usingattention_mask
. For such models,input_values
should simply be padded with 0 and noattention_mask
should be passed.For Wav2Vec2 models that have set
config.feat_extract_norm == "layer"
, such as wav2vec2-lv60,attention_mask
should be passed for batched inference.return_tensors (
str
orTensorType
, optional) βIf set, will return tensors instead of list of python integers. Acceptable values are:
'tf'
: Return TensorFlowtf.constant
objects.'pt'
: Return PyTorchtorch.Tensor
objects.'np'
: Return Numpynp.ndarray
objects.
sampling_rate (
int
, optional) β The sampling rate at which theraw_speech
input was sampled. It is strongly recommended to passsampling_rate
at the forward call to prevent silent errors.padding_value (
float
, defaults to 0.0) – The value that is used to fill the padding values.
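A brief, hedged usage sketch (assuming speech is a 16 kHz waveform as a list of floats or a 1D numpy array; the checkpoint name is the one used throughout this page):
>>> from transformers import Wav2Vec2FeatureExtractor

>>> feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")

>>> # pass the sampling rate explicitly to avoid silent errors
>>> inputs = feature_extractor(speech, sampling_rate=16000, padding=True, return_tensors="pt")
>>> input_values = inputs.input_values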
Wav2Vec2Processor
-
class
transformers.
Wav2Vec2Processor
(feature_extractor, tokenizer)[source]ΒΆ Constructs a Wav2Vec2 processor which wraps a Wav2Vec2 feature extractor and a Wav2Vec2 CTC tokenizer into a single processor.
Wav2Vec2Processor
offers all the functionalities ofWav2Vec2FeatureExtractor
andWav2Vec2CTCTokenizer
. See the docstring of__call__()
anddecode()
for more information.- Parameters
feature_extractor (
Wav2Vec2FeatureExtractor
) β An instance ofWav2Vec2FeatureExtractor
. The feature extractor is a required input.tokenizer (
Wav2Vec2CTCTokenizer
) β An instance ofWav2Vec2CTCTokenizer
. The tokenizer is a required input.
-
__call__
(*args, **kwargs)[source] When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor's
__call__()
and returns its output. If used in the context as_target_processor()
this method forwards all its arguments to Wav2Vec2CTCTokenizer's __call__()
. Please refer to the docstring of the above two methods for more information.
-
as_target_processor
()[source] Temporarily sets the tokenizer for processing the input. Useful for encoding the labels when fine-tuning Wav2Vec2.
-
batch_decode
(*args, **kwargs)[source] This method forwards all its arguments to Wav2Vec2CTCTokenizer's
batch_decode()
. Please refer to the docstring of this method for more information.
-
decode
(*args, **kwargs)[source] This method forwards all its arguments to Wav2Vec2CTCTokenizer's
decode()
. Please refer to the docstring of this method for more information.
-
classmethod
from_pretrained
(pretrained_model_name_or_path, **kwargs)[source]ΒΆ Instantiate a
Wav2Vec2Processor
from a pretrained Wav2Vec2 processor.Note
This class method is simply calling Wav2Vec2FeatureExtractorβs
from_pretrained()
and Wav2Vec2CTCTokenizerβsfrom_pretrained()
. Please refer to the docstrings of the methods above for more information.- Parameters
pretrained_model_name_or_path (
str
oros.PathLike
) βThis can be either:
a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like
bert-base-uncased
, or namespaced under a user or organization name, likedbmdz/bert-base-german-cased
.a path to a directory containing a feature extractor file saved using the
save_pretrained()
method, e.g.,./my_model_directory/
.a path or url to a saved feature extractor JSON file, e.g.,
./my_model_directory/preprocessor_config.json
.
**kwargs β Additional keyword arguments passed along to both
SequenceFeatureExtractor
andPreTrainedTokenizer
-
pad
(*args, **kwargs)[source]ΒΆ When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractorβs
pad()
and returns its output. If used in the contextas_target_processor()
this method forwards all its arguments to Wav2Vec2CTCTokenizerβspad()
. Please refer to the docstring of the above two methods for more information.
-
save_pretrained
(save_directory)[source]ΒΆ Save a Wav2Vec2 feature_extractor object and Wav2Vec2 tokenizer object to the directory
save_directory
, so that it can be re-loaded using thefrom_pretrained()
class method.Note
This class method is simply calling
save_pretrained()
andsave_pretrained()
. Please refer to the docstrings of the methods above for more information.- Parameters
save_directory (
str
oros.PathLike
) β Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will be created if it does not exist).
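As a hedged end-to-end sketch of the processor (again assuming speech holds a 16 kHz waveform as a float array, and using an illustrative target transcription):
>>> from transformers import Wav2Vec2Processor

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

>>> # audio inputs are routed to the wrapped Wav2Vec2FeatureExtractor
>>> inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

>>> # label texts are routed to the wrapped Wav2Vec2CTCTokenizer
>>> with processor.as_target_processor():
...     labels = processor("A MAN SAID TO THE UNIVERSE SIR I EXIST", return_tensors="pt").input_ids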
Wav2Vec2 specific outputs
-
class
transformers.models.wav2vec2.modeling_wav2vec2.
Wav2Vec2BaseModelOutput
(last_hidden_state: torch.FloatTensor = None, extract_features: torch.FloatTensor = None, hidden_states: Optional[Tuple[torch.FloatTensor]] = None, attentions: Optional[Tuple[torch.FloatTensor]] = None)[source]ΒΆ Output type of
Wav2Vec2Model
, with potential hidden states and attentions.- Parameters
last_hidden_state (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
) β Sequence of hidden-states at the output of the last layer of the model.extract_features (
torch.FloatTensor
of shape(batch_size, sequence_length, conv_dim[-1])
) β Sequence of extracted feature vectors of the last convolutional layer of the model.hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) βTuple of
torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) βTuple of
torch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
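A brief, hedged sketch of inspecting such an output object (assuming model and input_values are prepared as in the usage examples elsewhere on this page):
>>> outputs = model(input_values, output_hidden_states=True, output_attentions=True)
>>> outputs.last_hidden_state.shape  # (batch_size, sequence_length, hidden_size)
>>> len(outputs.hidden_states)  # one entry for the embeddings output plus one per encoder layer
>>> outputs.attentions[0].shape  # (batch_size, num_heads, sequence_length, sequence_length)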
-
class
transformers.models.wav2vec2.modeling_wav2vec2.
Wav2Vec2ForPreTrainingOutput
(loss: Optional[torch.FloatTensor] = None, projected_states: torch.FloatTensor = None, projected_quantized_states: torch.FloatTensor = None, codevector_perplexity: torch.FloatTensor = None, hidden_states: Optional[Tuple[torch.FloatTensor]] = None, attentions: Optional[Tuple[torch.FloatTensor]] = None)[source]ΒΆ Output type of
Wav2Vec2ForPreTraining
, with potential hidden states and attentions.- Parameters
loss (optional, returned when model is in train mode,
torch.FloatTensor
of shape(1,)
) – Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper.projected_states (
torch.FloatTensor
of shape(batch_size, sequence_length, config.proj_codevector_dim)
) β Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states.projected_quantized_states (
torch.FloatTensor
of shape(batch_size, sequence_length, config.proj_codevector_dim)
) β Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for contrastive loss.hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) βTuple of
torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) βTuple of
torch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
-
class
transformers.models.wav2vec2.modeling_flax_wav2vec2.
FlaxWav2Vec2BaseModelOutput
(last_hidden_state: jax._src.numpy.lax_numpy.ndarray = None, extract_features: jax._src.numpy.lax_numpy.ndarray = None, hidden_states: Optional[Tuple[jax._src.numpy.lax_numpy.ndarray]] = None, attentions: Optional[Tuple[jax._src.numpy.lax_numpy.ndarray]] = None)[source]ΒΆ Output type of
FlaxWav2Vec2Model
, with potential hidden states and attentions.- Parameters
last_hidden_state (
jnp.ndarray
of shape(batch_size, sequence_length, hidden_size)
) β Sequence of hidden-states at the output of the last layer of the model.extract_features (
jnp.ndarray
of shape(batch_size, sequence_length, last_conv_dim)
) β Sequence of extracted feature vectors of the last convolutional layer of the model withlast_conv_dim
being the dimension of the last convolutional layer.hidden_states (
tuple(jnp.ndarray)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) βTuple of
jnp.ndarray
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(jnp.ndarray)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) βTuple of
jnp.ndarray
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
-
replace
(**updates) Returns a new object replacing the specified fields with new values.
-
class
transformers.models.wav2vec2.modeling_flax_wav2vec2.
FlaxWav2Vec2ForPreTrainingOutput
(projected_states: jax._src.numpy.lax_numpy.ndarray = None, projected_quantized_states: jax._src.numpy.lax_numpy.ndarray = None, codevector_perplexity: jax._src.numpy.lax_numpy.ndarray = None, hidden_states: Optional[Tuple[jax._src.numpy.lax_numpy.ndarray]] = None, attentions: Optional[Tuple[jax._src.numpy.lax_numpy.ndarray]] = None)[source]ΒΆ Output type of
FlaxWav2Vec2ForPreTraining
, with potential hidden states and attentions.- Parameters
loss (optional, returned when model is in train mode,
jnp.ndarray
of shape(1,)
) – Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper.projected_states (
jnp.ndarray
of shape(batch_size, sequence_length, config.proj_codevector_dim)
) β Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states.projected_quantized_states (
jnp.ndarray
of shape(batch_size, sequence_length, config.proj_codevector_dim)
) β Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for contrastive loss.hidden_states (
tuple(jnp.ndarray)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) βTuple of
jnp.ndarray
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(jnp.ndarray)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) βTuple of
jnp.ndarray
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
-
replace
(**updates) Returns a new object replacing the specified fields with new values.
Wav2Vec2Model
-
class
transformers.
Wav2Vec2Model
(config: transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config)[source]ΒΆ The bare Wav2Vec2 Model transformer outputting raw hidden-states without any specific head on top. Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from
PreTrainedModel
. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
- Parameters
config (
Wav2Vec2Config
) β Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out thefrom_pretrained()
method to load the model weights.
-
forward
(input_values, attention_mask=None, mask_time_indices=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]ΒΆ The
Wav2Vec2Model
forward method, overrides the__call__()
special method.Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.- Parameters
input_values (
torch.FloatTensor
of shape(batch_size, sequence_length)
) β Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, theWav2Vec2Processor
should be used for padding and conversion into a tensor of type torch.FloatTensor. Seetransformers.Wav2Vec2Processor.__call__()
for details.attention_mask (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) βMask to avoid performing convolution and attention on padding token indices. Mask values selected in
[0, 1]
:1 for tokens that are not masked,
0 for tokens that are masked.
Warning
attention_mask
should only be passed if the corresponding processor hasconfig.return_attention_mask == True
. For all models whose processor hasconfig.return_attention_mask == False
, such as wav2vec2-base,attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such modelsinput_values
should simply be padded with 0 and passed withoutattention_mask
. Be aware that these models also yield slightly different results depending on whetherinput_values
is padded or not.output_attentions (
bool
, optional) β Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail.output_hidden_states (
bool
, optional) β Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail.return_dict (
bool
, optional) β Whether or not to return aModelOutput
instead of a plain tuple.
- Returns
A
BaseModelOutput
or a tuple oftorch.FloatTensor
(ifreturn_dict=False
is passed or whenconfig.return_dict=False
) comprising various elements depending on the configuration (Wav2Vec2Config
) and inputs.last_hidden_state (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
) β Sequence of hidden-states at the output of the last layer of the model.hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) β Tuple oftorch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) β Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:
>>> from transformers import Wav2Vec2Processor, Wav2Vec2Model
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(ds["speech"][0], return_tensors="pt").input_values  # Batch size 1
>>> hidden_states = model(input_values).last_hidden_state
- Return type
BaseModelOutput
ortuple(torch.FloatTensor)
Wav2Vec2ForCTC
-
class
transformers.
Wav2Vec2ForCTC
(config)[source]ΒΆ Wav2Vec2 Model with a language modeling head on top for Connectionist Temporal Classification (CTC). Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from
PreTrainedModel
. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
- Parameters
config (
Wav2Vec2Config
) β Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out thefrom_pretrained()
method to load the model weights.
-
forward
(input_values, attention_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None)[source]ΒΆ The
Wav2Vec2ForCTC
forward method, overrides the__call__()
special method.Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.- Parameters
input_values (
torch.FloatTensor
of shape(batch_size, sequence_length)
) β Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, theWav2Vec2Processor
should be used for padding and conversion into a tensor of type torch.FloatTensor. Seetransformers.Wav2Vec2Processor.__call__()
for details.attention_mask (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) βMask to avoid performing convolution and attention on padding token indices. Mask values selected in
[0, 1]
:1 for tokens that are not masked,
0 for tokens that are masked.
Warning
attention_mask
should only be passed if the corresponding processor hasconfig.return_attention_mask == True
. For all models whose processor hasconfig.return_attention_mask == False
, such as wav2vec2-base,attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such modelsinput_values
should simply be padded with 0 and passed withoutattention_mask
. Be aware that these models also yield slightly different results depending on whetherinput_values
is padded or not.output_attentions (
bool
, optional) β Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail.output_hidden_states (
bool
, optional) β Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail.return_dict (
bool
, optional) β Whether or not to return aModelOutput
instead of a plain tuple.labels (
torch.LongTensor
of shape(batch_size, target_length)
, optional) β Labels for connectionist temporal classification. Note thattarget_length
has to be smaller or equal to the sequence length of the output logits. Indices are selected in[-100, 0, ..., config.vocab_size - 1]
. All labels set to-100
are ignored (masked), the loss is only computed for labels in[0, ..., config.vocab_size - 1]
.
- Returns
A
CausalLMOutput
or a tuple oftorch.FloatTensor
(ifreturn_dict=False
is passed or whenconfig.return_dict=False
) comprising various elements depending on the configuration (Wav2Vec2Config
) and inputs.loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) β Language modeling loss (for next-token prediction).logits (
torch.FloatTensor
of shape(batch_size, sequence_length, config.vocab_size)
) β Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) β Tuple oftorch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) β Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:
>>> import torch
>>> from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(ds["speech"][0], return_tensors="pt").input_values  # Batch size 1
>>> logits = model(input_values).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.decode(predicted_ids[0])

>>> # compute loss
>>> target_transcription = "A MAN SAID TO THE UNIVERSE SIR I EXIST"

>>> # wrap processor as target processor to encode labels
>>> with processor.as_target_processor():
...     labels = processor(target_transcription, return_tensors="pt").input_ids

>>> loss = model(input_values, labels=labels).loss
- Return type
CausalLMOutput
ortuple(torch.FloatTensor)
Wav2Vec2ForSequenceClassification
-
class
transformers.
Wav2Vec2ForSequenceClassification
(config)[source]ΒΆ Wav2Vec2 Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.
Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from
PreTrainedModel
. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
- Parameters
config (
Wav2Vec2Config
) β Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out thefrom_pretrained()
method to load the model weights.
-
forward
(input_values, attention_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None)[source]ΒΆ The
Wav2Vec2ForSequenceClassification
forward method, overrides the__call__()
special method.Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.- Parameters
input_values (
torch.FloatTensor
of shape(batch_size, sequence_length)
) β Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, theWav2Vec2Processor
should be used for padding and conversion into a tensor of type torch.FloatTensor. Seetransformers.Wav2Vec2Processor.__call__()
for details.attention_mask (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) βMask to avoid performing convolution and attention on padding token indices. Mask values selected in
[0, 1]
:1 for tokens that are not masked,
0 for tokens that are masked.
Warning
attention_mask
should only be passed if the corresponding processor hasconfig.return_attention_mask == True
. For all models whose processor hasconfig.return_attention_mask == False
, such as wav2vec2-base,attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such modelsinput_values
should simply be padded with 0 and passed withoutattention_mask
. Be aware that these models also yield slightly different results depending on whetherinput_values
is padded or not.output_attentions (
bool
, optional) β Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail.output_hidden_states (
bool
, optional) β Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail.return_dict (
bool
, optional) β Whether or not to return aModelOutput
instead of a plain tuple.labels (
torch.LongTensor
of shape(batch_size,)
, optional) β Labels for computing the sequence classification/regression loss. Indices should be in[0, ..., config.num_labels - 1]
. Ifconfig.num_labels == 1
a regression loss is computed (Mean-Square loss), Ifconfig.num_labels > 1
a classification loss is computed (Cross-Entropy).
- Returns
A
SequenceClassifierOutput
or a tuple oftorch.FloatTensor
(ifreturn_dict=False
is passed or whenconfig.return_dict=False
) comprising various elements depending on the configuration (Wav2Vec2Config
) and inputs.loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) β Classification (or regression if config.num_labels==1) loss.logits (
torch.FloatTensor
of shape(batch_size, config.num_labels)
) β Classification (or regression if config.num_labels==1) scores (before SoftMax).hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) β Tuple oftorch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) β Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:
>>> import torch
>>> from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification
>>> from datasets import load_dataset

>>> processor = Wav2Vec2FeatureExtractor.from_pretrained("superb/wav2vec2-base-superb-ks")
>>> model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-base-superb-ks")

>>> ds = load_dataset("anton-l/superb_dummy", "ks", split="test")

>>> input_values = processor(ds["speech"][4], return_tensors="pt").input_values  # Batch size 1
>>> logits = model(input_values).logits
>>> predicted_class_ids = torch.argmax(logits, dim=-1)

>>> # compute loss
>>> target_label = "down"
>>> labels = torch.tensor([model.config.label2id[target_label]])
>>> loss = model(input_values, labels=labels).loss
- Return type
SequenceClassifierOutput
ortuple(torch.FloatTensor)
Wav2Vec2ForPreTraining
-
class
transformers.
Wav2Vec2ForPreTraining
(config: transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config)[source]ΒΆ Wav2Vec2 Model with a quantizer and VQ head on top. Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from
PreTrainedModel
. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
- Parameters
config (
Wav2Vec2Config
) β Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out thefrom_pretrained()
method to load the model weights.
-
forward
(input_values, attention_mask=None, mask_time_indices=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]ΒΆ The
Wav2Vec2ForPreTraining
forward method, overrides the__call__()
special method.Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.- Parameters
input_values (
torch.FloatTensor
of shape(batch_size, sequence_length)
) β Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, theWav2Vec2Processor
should be used for padding and conversion into a tensor of type torch.FloatTensor. Seetransformers.Wav2Vec2Processor.__call__()
for details.attention_mask (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) βMask to avoid performing convolution and attention on padding token indices. Mask values selected in
[0, 1]
:1 for tokens that are not masked,
0 for tokens that are masked.
Warning
attention_mask
should only be passed if the corresponding processor hasconfig.return_attention_mask == True
. For all models whose processor hasconfig.return_attention_mask == False
, such as wav2vec2-base,attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such modelsinput_values
should simply be padded with 0 and passed withoutattention_mask
. Be aware that these models also yield slightly different results depending on whetherinput_values
is padded or not.output_attentions (
bool
, optional) β Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail.output_hidden_states (
bool
, optional) β Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail.return_dict (
bool
, optional) β Whether or not to return aModelOutput
instead of a plain tuple.mask_time_indices (
torch.BoolTensor
of shape(batch_size, sequence_length)
, optional) β Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in config.proj_codevector_dim space.
- Returns
A
Wav2Vec2ForPreTrainingOutput
or a tuple oftorch.FloatTensor
(ifreturn_dict=False
is passed or whenconfig.return_dict=False
) comprising various elements depending on the configuration (Wav2Vec2Config
) and inputs.loss (optional, returned when model is in train mode,
torch.FloatTensor
of shape(1,)
) – Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper.projected_states (
torch.FloatTensor
of shape(batch_size, sequence_length, config.proj_codevector_dim)
) β Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states.projected_quantized_states (
torch.FloatTensor
of shape(batch_size, sequence_length, config.proj_codevector_dim)
) β Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for contrastive loss.hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) β Tuple oftorch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) β Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:
>>> import torch
>>> from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForPreTraining
>>> from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("patrickvonplaten/wav2vec2-base")
>>> model = Wav2Vec2ForPreTraining.from_pretrained("patrickvonplaten/wav2vec2-base")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = feature_extractor(ds["speech"][0], return_tensors="pt").input_values  # Batch size 1

>>> # compute masked indices
>>> batch_size, raw_sequence_length = input_values.shape
>>> sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length)
>>> mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.2, mask_length=2, device=model.device)

>>> with torch.no_grad():
...     outputs = model(input_values, mask_time_indices=mask_time_indices)

>>> # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)
>>> cosine_sim = torch.cosine_similarity(
...     outputs.projected_states, outputs.projected_quantized_states, dim=-1
... )

>>> # show that cosine similarity is much higher than random
>>> assert cosine_sim[mask_time_indices].mean() > 0.5

>>> # for contrastive loss training model should be put into train mode
>>> model.train()
>>> loss = model(input_values, mask_time_indices=mask_time_indices).loss
- Return type

Wav2Vec2ForPreTrainingOutput or tuple(torch.FloatTensor)
TFWav2Vec2ModelΒΆ
-
class
transformers.
TFWav2Vec2Model
(*args, **kwargs)[source]ΒΆ The bare TFWav2Vec2 Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from
TFPreTrainedModel
. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
Note
TF 2.0 models accept two formats as inputs:

having all inputs as keyword arguments (like PyTorch models), or

having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs). If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument (a short sketch follows this list):

a single Tensor with input_values only and nothing else: model(input_values)

a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_values, attention_mask]) or model([input_values, attention_mask, token_type_ids])

a dictionary with one or several input Tensors associated with the input names given in the docstring: model({"input_values": input_values, "token_type_ids": token_type_ids})
- Parameters
config (Wav2Vec2Config) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
-
call
(input_values: tensorflow.python.framework.ops.Tensor, attention_mask: Optional[tensorflow.python.framework.ops.Tensor] = None, token_type_ids: Optional[tensorflow.python.framework.ops.Tensor] = None, position_ids: Optional[tensorflow.python.framework.ops.Tensor] = None, head_mask: Optional[tensorflow.python.framework.ops.Tensor] = None, inputs_embeds: Optional[tensorflow.python.framework.ops.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: bool = False) → Union[transformers.modeling_tf_outputs.TFBaseModelOutput, Tuple[tensorflow.python.framework.ops.Tensor]][source]ΒΆ The
TFWav2Vec2Model
forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

- Parameters
input_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) –

Float values of the raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor. See transformers.Wav2Vec2Processor.__call__() for details.

attention_mask (
np.ndarray
ortf.Tensor
of shape({0})
, optional) βMask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]
:1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (
np.ndarray
ortf.Tensor
of shape({0})
, optional) βSegment token indices to indicate first and second portions of the inputs. Indices are selected in
[0, 1]
:0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
position_ids (
np.ndarray
ortf.Tensor
of shape({0})
, optional) βIndices of positions of each input sequence tokens in the position embeddings. Selected in the range
[0, config.max_position_embeddings - 1]
.head_mask (
np.ndarray
ortf.Tensor
of shape(num_heads,)
or(num_layers, num_heads)
, optional) βMask to nullify selected heads of the self-attention modules. Mask values selected in
[0, 1]
:1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (
np.ndarray
ortf.Tensor
of shape({0}, hidden_size)
, optional) β Optionally, instead of passinginput_values
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_values
indices into associated vectors than the modelβs internal embedding lookup matrix.output_attentions (
bool
, optional) β Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.output_hidden_states (
bool
, optional) β Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.return_dict (
bool
, optional) β Whether or not to return aModelOutput
instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.training (
bool
, optional, defaults toFalse
) β Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
- Returns

A TFBaseModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Wav2Vec2Config) and inputs.

last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.

hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:
>>> from transformers import Wav2Vec2Processor, TFWav2Vec2Model
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = TFWav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(ds["speech"][0], return_tensors="tf").input_values  # Batch size 1
>>> hidden_states = model(input_values).last_hidden_state
- Return type
TFBaseModelOutput or tuple(tf.Tensor)
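The hidden_states and attentions entries described in the return section above are only populated on request. A minimal illustrative sketch, using a random waveform as a stand-in for real 16 kHz speech:

>>> import tensorflow as tf
>>> from transformers import TFWav2Vec2Model

>>> model = TFWav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")
>>> input_values = tf.random.normal((1, 16000))  # stand-in for one second of real 16 kHz speech

>>> outputs = model(input_values, output_hidden_states=True, output_attentions=True)
>>> print(len(outputs.hidden_states))   # embeddings + one entry per transformer layer
>>> print(outputs.attentions[0].shape)  # (batch_size, num_heads, sequence_length, sequence_length)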
TFWav2Vec2ForCTCΒΆ
-
class
transformers.
TFWav2Vec2ForCTC
(*args, **kwargs)[source]ΒΆ TFWav2Vec2 Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
This model inherits from
TFPreTrainedModel
. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
Note
TF 2.0 models accept two formats as inputs:

having all inputs as keyword arguments (like PyTorch models), or

having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs). If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

a single Tensor with input_values only and nothing else: model(input_values)

a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_values, attention_mask]) or model([input_values, attention_mask, token_type_ids])

a dictionary with one or several input Tensors associated with the input names given in the docstring: model({"input_values": input_values, "token_type_ids": token_type_ids})
- Parameters
config (Wav2Vec2Config) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
-
call
(input_values: tensorflow.python.framework.ops.Tensor, attention_mask: Optional[tensorflow.python.framework.ops.Tensor] = None, token_type_ids: Optional[tensorflow.python.framework.ops.Tensor] = None, position_ids: Optional[tensorflow.python.framework.ops.Tensor] = None, head_mask: Optional[tensorflow.python.framework.ops.Tensor] = None, inputs_embeds: Optional[tensorflow.python.framework.ops.Tensor] = None, output_attentions: Optional[bool] = None, labels: Optional[tensorflow.python.framework.ops.Tensor] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: Optional[bool] = False) → Union[transformers.modeling_tf_outputs.TFCausalLMOutput, Tuple[tensorflow.python.framework.ops.Tensor]][source]ΒΆ The
TFWav2Vec2ForCTC
forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

- Parameters
input_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) –

Float values of the raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor. See transformers.Wav2Vec2Processor.__call__() for details.

attention_mask (
np.ndarray
ortf.Tensor
of shape({0})
, optional) βMask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]
:1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (
np.ndarray
ortf.Tensor
of shape({0})
, optional) βSegment token indices to indicate first and second portions of the inputs. Indices are selected in
[0, 1]
:0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
position_ids (
np.ndarray
ortf.Tensor
of shape({0})
, optional) βIndices of positions of each input sequence tokens in the position embeddings. Selected in the range
[0, config.max_position_embeddings - 1]
.head_mask (
np.ndarray
ortf.Tensor
of shape(num_heads,)
or(num_layers, num_heads)
, optional) βMask to nullify selected heads of the self-attention modules. Mask values selected in
[0, 1]
:1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (
np.ndarray
ortf.Tensor
of shape({0}, hidden_size)
, optional) β Optionally, instead of passinginput_values
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_values
indices into associated vectors than the modelβs internal embedding lookup matrix.output_attentions (
bool
, optional) β Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.output_hidden_states (
bool
, optional) β Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.return_dict (
bool
, optional) β Whether or not to return aModelOutput
instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.training (
bool
, optional, defaults toFalse
) – Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).

labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) – Labels for computing the connectionist temporal classification (CTC) loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_values docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
- Returns

A TFCausalLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Wav2Vec2Config) and inputs.

loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) – Connectionist temporal classification (CTC) loss.

logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:
>>> import tensorflow as tf
>>> from transformers import Wav2Vec2Processor, TFWav2Vec2ForCTC
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(ds["speech"][0], return_tensors="tf").input_values  # Batch size 1
>>> logits = model(input_values).logits
>>> predicted_ids = tf.argmax(logits, axis=-1)

>>> transcription = processor.decode(predicted_ids[0])

>>> # compute loss
>>> target_transcription = "A MAN SAID TO THE UNIVERSE SIR I EXIST"

>>> # wrap processor as target processor to encode labels
>>> with processor.as_target_processor():
...     labels = processor(target_transcription, return_tensors="tf").input_ids

>>> loss = model(input_values, labels=labels).loss
- Return type
TFCausalLMOutput or tuple(tf.Tensor)
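Building on the example above, a minimal sketch of batched transcription is shown below; note that this checkpoint's processor does not return an attention_mask, so the padded input_values are passed on their own:

>>> import tensorflow as tf
>>> import soundfile as sf
>>> from datasets import load_dataset
>>> from transformers import Wav2Vec2Processor, TFWav2Vec2ForCTC

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> # pad two utterances to the same length before batching them
>>> inputs = processor(ds["speech"][:2], sampling_rate=16_000, padding=True, return_tensors="tf")
>>> logits = model(inputs.input_values).logits
>>> predicted_ids = tf.argmax(logits, axis=-1)
>>> transcriptions = processor.batch_decode(predicted_ids)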
FlaxWav2Vec2ModelΒΆ
-
class
transformers.
FlaxWav2Vec2Model
(config: transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config, input_shape: Tuple = (1, 1024), seed: int = 0, dtype: numpy.dtype = <class 'jax._src.numpy.lax_numpy.float32'>, **kwargs)[source]ΒΆ The bare Wav2Vec2 Model transformer outputting raw hidden-states without any specific head on top. Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from
FlaxPreTrainedModel
. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as Just-In-Time (JIT) compilation, Automatic Differentiation, Vectorization and Parallelization.
- Parameters
config (Wav2Vec2Config) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
-
__call__
(input_values, attention_mask=None, mask_time_indices=None, params: dict = None, dropout_rng: jax._src.random.PRNGKey = None, train: bool = False, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None)ΒΆ The
FlaxWav2Vec2PreTrainedModel
forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

- Parameters
input_values (jnp.ndarray of shape (batch_size, sequence_length)) – Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type jnp.ndarray. See transformers.Wav2Vec2Processor.__call__() for details.

attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) –

Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:

1 for tokens that are not masked,

0 for tokens that are masked.

What are attention masks?

Warning

attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as wav2vec2-base, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not.

mask_time_indices (
jnp.ndarray
of shape(batch_size, sequence_length)
, optional) β Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in config.proj_codevector_dim space.output_attentions (
bool
, optional) β Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail.output_hidden_states (
bool
, optional) β Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail.return_dict (
bool
, optional) β Whether or not to return aModelOutput
instead of a plain tuple.
- Returns

A FlaxWav2Vec2BaseModelOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Wav2Vec2Config) and inputs.

last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.

extract_features (jnp.ndarray of shape (batch_size, sequence_length, last_conv_dim)) – Sequence of extracted feature vectors of the last convolutional layer of the model, with last_conv_dim being the dimension of the last convolutional layer.

hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- Return type
FlaxWav2Vec2BaseModelOutput or tuple(jnp.ndarray)
Example:
>>> from transformers import Wav2Vec2Processor, FlaxWav2Vec2Model
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-lv60")
>>> model = FlaxWav2Vec2Model.from_pretrained("facebook/wav2vec2-large-lv60")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="np").input_values  # Batch size 1
>>> hidden_states = model(input_values).last_hidden_state
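Since the model supports JIT compilation, repeated forward passes can be compiled with jax.jit. The sketch below is illustrative: the encode helper is made up for this example, the parameters are passed explicitly so they remain traced inputs, and the random waveform stands in for real 16 kHz speech.

>>> import jax
>>> import numpy as np
>>> from transformers import FlaxWav2Vec2Model

>>> model = FlaxWav2Vec2Model.from_pretrained("facebook/wav2vec2-large-lv60")

>>> @jax.jit
... def encode(params, input_values):
...     # run the forward pass and return only the last hidden states
...     return model(input_values, params=params).last_hidden_state

>>> input_values = np.random.randn(1, 16000).astype("float32")  # stand-in for real 16 kHz speech
>>> hidden_states = encode(model.params, input_values)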
FlaxWav2Vec2ForCTCΒΆ
-
class
transformers.
FlaxWav2Vec2ForCTC
(config: transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config, input_shape: Tuple = (1, 1024), seed: int = 0, dtype: numpy.dtype = <class 'jax._src.numpy.lax_numpy.float32'>, **kwargs)[source]ΒΆ Wav2Vec2 Model with a language modeling head on top for Connectionist Temporal Classification (CTC). Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from
FlaxPreTrainedModel
. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as Just-In-Time (JIT) compilation, Automatic Differentiation, Vectorization and Parallelization.
- Parameters
config (Wav2Vec2Config) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
-
__call__
(input_values, attention_mask=None, mask_time_indices=None, params: dict = None, dropout_rng: jax._src.random.PRNGKey = None, train: bool = False, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None)ΒΆ The
FlaxWav2Vec2PreTrainedModel
forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

- Parameters
input_values (jnp.ndarray of shape (batch_size, sequence_length)) – Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type jnp.ndarray. See transformers.Wav2Vec2Processor.__call__() for details.

attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) –

Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:

1 for tokens that are not masked,

0 for tokens that are masked.

What are attention masks?

Warning

attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as wav2vec2-base, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not.

mask_time_indices (
jnp.ndarray
of shape(batch_size, sequence_length)
, optional) β Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in config.proj_codevector_dim space.output_attentions (
bool
, optional) β Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail.output_hidden_states (
bool
, optional) β Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail.return_dict (
bool
, optional) β Whether or not to return aModelOutput
instead of a plain tuple.
- Returns

A FlaxMaskedLMOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Wav2Vec2Config) and inputs.

logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- Return type
FlaxMaskedLMOutput or tuple(jnp.ndarray)
Example:
>>> import jax.numpy as jnp
>>> from transformers import Wav2Vec2Processor, FlaxWav2Vec2ForCTC
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60")
>>> model = FlaxWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="np").input_values  # Batch size 1
>>> logits = model(input_values).logits
>>> predicted_ids = jnp.argmax(logits, axis=-1)

>>> transcription = processor.decode(predicted_ids[0])
>>> # should give:  "A MAN SAID TO THE UNIVERSE SIR I EXIST"
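Unlike the TF and PyTorch variants, this Flax head does not take labels, so a CTC loss for fine-tuning has to be computed from the logits by the user. A minimal sketch, assuming a recent optax release that provides optax.ctc_loss and using a random waveform as a stand-in for real speech:

>>> import numpy as np
>>> import jax.numpy as jnp
>>> import optax
>>> from transformers import Wav2Vec2Processor, FlaxWav2Vec2ForCTC

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60")
>>> model = FlaxWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60")

>>> input_values = np.random.randn(1, 16000).astype("float32")  # stand-in for real 16 kHz speech
>>> logits = model(input_values).logits                          # (batch, frames, config.vocab_size)

>>> target_transcription = "A MAN SAID TO THE UNIVERSE SIR I EXIST"
>>> with processor.as_target_processor():
...     labels = processor(target_transcription, return_tensors="np").input_ids  # (batch, label_length)

>>> # nothing is padded in this toy batch, so both padding masks are all zeros;
>>> # the CTC blank is the tokenizer's pad token (id 0), which matches optax's default blank_id
>>> logit_paddings = jnp.zeros(logits.shape[:2])
>>> label_paddings = jnp.zeros(labels.shape)
>>> loss = optax.ctc_loss(logits, logit_paddings, labels, label_paddings).mean()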
FlaxWav2Vec2ForPreTrainingΒΆ
-
class
transformers.
FlaxWav2Vec2ForPreTraining
(config: transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config, input_shape: Tuple = (1, 1024), seed: int = 0, dtype: numpy.dtype = <class 'jax._src.numpy.lax_numpy.float32'>, **kwargs)[source]ΒΆ Wav2Vec2 Model with a quantizer and VQ head on top. Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from
FlaxPreTrainedModel
. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as Just-In-Time (JIT) compilation, Automatic Differentiation, Vectorization and Parallelization.
- Parameters
config (Wav2Vec2Config) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
-
__call__
(input_values, attention_mask=None, mask_time_indices=None, gumbel_temperature: int = 1, params: dict = None, dropout_rng: jax._src.random.PRNGKey = None, gumbel_rng: jax._src.random.PRNGKey = None, train: bool = False, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None)[source]ΒΆ The
FlaxWav2Vec2ForPreTraining
forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

- Parameters
input_values (jnp.ndarray of shape (batch_size, sequence_length)) – Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type jnp.ndarray. See transformers.Wav2Vec2Processor.__call__() for details.

attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) –

Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:

1 for tokens that are not masked,

0 for tokens that are masked.

What are attention masks?

Warning

attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as wav2vec2-base, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not.

mask_time_indices (
jnp.ndarray
of shape(batch_size, sequence_length)
, optional) β Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in config.proj_codevector_dim space.output_attentions (
bool
, optional) β Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail.output_hidden_states (
bool
, optional) β Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail.return_dict (
bool
, optional) β Whether or not to return aModelOutput
instead of a plain tuple.
- Returns

A FlaxWav2Vec2ForPreTrainingOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Wav2Vec2Config) and inputs.

loss (jnp.ndarray of shape (1,), optional, returned when the model is in train mode) – Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper.

projected_states (jnp.ndarray of shape (batch_size, sequence_length, config.proj_codevector_dim)) – Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states.

projected_quantized_states (jnp.ndarray of shape (batch_size, sequence_length, config.proj_codevector_dim)) – Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for the contrastive loss.

hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- Return type
FlaxWav2Vec2ForPreTrainingOutput or tuple(jnp.ndarray)
Example:
>>> import optax
>>> import numpy as np
>>> import jax.numpy as jnp
>>> from transformers import Wav2Vec2FeatureExtractor, FlaxWav2Vec2ForPreTraining
>>> from transformers.models.wav2vec2.modeling_flax_wav2vec2 import _compute_mask_indices
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-lv60")
>>> model = FlaxWav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-large-lv60")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = feature_extractor(ds["speech"][0], return_tensors="np").input_values  # Batch size 1

>>> # compute masked indices
>>> batch_size, raw_sequence_length = input_values.shape
>>> sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length)
>>> mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.2, mask_length=2)

>>> outputs = model(input_values, mask_time_indices=mask_time_indices)

>>> # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)
>>> cosine_sim = optax.cosine_similarity(
...     outputs.projected_states, outputs.projected_quantized_states
... )

>>> # show that cosine similarity is much higher than random
>>> assert np.asarray(cosine_sim)[mask_time_indices].mean() > 0.5
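The pre-training loss described above combines a contrastive term over projected_states and projected_quantized_states with a diversity term. The snippet below is only an illustrative sketch of how the per-position contrastive logits are formed (cosine similarity against the positive target and a set of negative quantized vectors, scaled by the temperature); the contrastive_logits helper is made up for this example, and proper negative sampling across masked time steps as well as the diversity loss are omitted.

>>> import jax
>>> import jax.numpy as jnp

>>> def contrastive_logits(predicted, positive, negatives, temperature=0.1):
...     # cosine similarity of one predicted vector against its positive target
...     # (placed at index 0) and K negative quantized vectors, scaled by the temperature
...     candidates = jnp.vstack([positive[None, :], negatives])  # (1 + K, dim)
...     cos = candidates @ predicted / (
...         jnp.linalg.norm(candidates, axis=-1) * jnp.linalg.norm(predicted) + 1e-8
...     )
...     return cos / temperature

>>> # toy vectors standing in for a single masked position
>>> key = jax.random.PRNGKey(0)
>>> dim, num_negatives = 256, 10
>>> predicted = jax.random.normal(key, (dim,))
>>> positive = predicted + 0.1 * jax.random.normal(key, (dim,))
>>> negatives = jax.random.normal(key, (num_negatives, dim))

>>> logits = contrastive_logits(predicted, positive, negatives)
>>> loss = -jax.nn.log_softmax(logits)[0]  # the true target sits at index 0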